hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
466f047e17e0d6d7208910c763a4df77317279f9 | 4,596 | py | Python | tt/satisfiability/picosat.py | fkromer/tt | b4dfc90f7d0f9b5794e1f5054b640e22f6f75bf7 | [
"MIT"
] | 233 | 2016-02-05T20:13:06.000Z | 2022-03-26T13:01:10.000Z | tt/satisfiability/picosat.py | fkromer/tt | b4dfc90f7d0f9b5794e1f5054b640e22f6f75bf7 | [
"MIT"
] | 8 | 2017-12-20T17:07:58.000Z | 2020-08-06T15:44:55.000Z | tt/satisfiability/picosat.py | fkromer/tt | b4dfc90f7d0f9b5794e1f5054b640e22f6f75bf7 | [
"MIT"
] | 15 | 2016-03-22T23:37:56.000Z | 2022-02-27T17:51:08.000Z | """Python wrapper around the _clibs PicoSAT extension."""
import os
from tt.errors.arguments import (
InvalidArgumentTypeError,
InvalidArgumentValueError)
if os.environ.get('READTHEDOCS') != 'True':
from tt._clibs import picosat as _c_picosat
VERSION = _c_picosat.VERSION
def sat_one(clauses, assumptions=None):
"""Find a solution that satisfies the specified clauses and assumptions.
This provides a light Python wrapper around the same method in the PicoSAT
C-extension. While completely tested and usable, this method is probably
not as useful as the interface provided through the
:func:`sat_one <tt.expressions.bexpr.BooleanExpression.sat_one>` method in
the :class:`BooleanExpression <tt.expressions.bexpr.BooleanExpression>`
class.
:param clauses: CNF (AND of ORs) clauses; positive integers represent
non-negated terms and negative integers represent negated terms.
:type clauses: List[List[:class:`int <python:int>`]]
:param assumptions: Assumed terms; same negation logic from ``clauses``
applies here. Note that assumptions *cannot* be an empty list; leave it
as ``None`` if there are no assumptions to include.
:type assumptions: List[:class:`int <python:int>`]
:returns: If solution is found, a list of ints representing the terms of
the solution; otherwise, if no solution found, ``None``.
:rtype: List[:class:`int <python:int>`] or ``None``
:raises InvalidArgumentTypeError: If ``clauses`` is not a list of lists of
ints or ``assumptions`` is not a list of ints.
:raises InvalidArgumentValueError: If any literal ints are equal to zero.
Let's look at a simple example with no satisfiable solution::
>>> from tt import picosat
>>> picosat.sat_one([[1], [-1]]) is None
True
Here's an example where a solution exists::
>>> picosat.sat_one([[1, 2, 3], [-2, -3], [1, -2], [2, -3], [-2]])
[1, -2, -3]
Finally, here's an example using assumptions::
>>> picosat.sat_one([[1, 2, 3], [2, 3]], assumptions=[-1, -3])
[-1, 2, -3]
"""
try:
return _c_picosat.sat_one(clauses, assumptions=assumptions)
except TypeError as e:
raise InvalidArgumentTypeError(str(e))
except ValueError as e:
raise InvalidArgumentValueError(str(e))
def sat_all(clauses, assumptions=None):
"""Find all solutions that satisfy the specified clauses and assumptions.
This provides a light Python wrapper around the same method in the PicoSAT
C-extension. While completely tested and usable, this method is probably
not as useful as the interface provided through the
:func:`sat_all <tt.expressions.bexpr.BooleanExpression.sat_all>` method in
the :class:`BooleanExpression <tt.expressions.bexpr.BooleanExpression>`
class.
:param clauses: CNF (AND of ORs) clauses; positive integers represent
non-negated terms and negative integers represent negated terms.
:type clauses: List[List[:class:`int <python:int>`]]
:param assumptions: Assumed terms; same negation logic from ``clauses``
applies here. Note that assumptions *cannot* be an empty list; leave it
as ``None`` if there are no assumptions to include.
:type assumptions: List[:class:`int <python:int>`]
:returns: An iterator of solutions; if no satisfiable solutions exist, the
iterator will be empty.
:rtype: Iterator[List[:class:`int <python:int>`]]
:raises InvalidArgumentTypeError: If ``clauses`` is not a list of lists of
ints or ``assumptions`` is not a list of ints.
:raises InvalidArgumentValueError: If any literal ints are equal to zero.
Here's an example showing the basic usage::
>>> from tt import picosat
>>> for solution in picosat.sat_all([[1], [2, 3, 4], [2, 3]]):
... print(solution)
...
[1, 2, 3, 4]
[1, 2, 3, -4]
[1, 2, -3, 4]
[1, 2, -3, -4]
[1, -2, 3, 4]
[1, -2, 3, -4]
We can cut down on some of the above solutions by including an assumption::
>>> for solution in picosat.sat_all([[1], [2, 3, 4], [2, 3]],
... assumptions=[-3]):
... print(solution)
...
[1, 2, -3, 4]
[1, 2, -3, -4]
"""
try:
return _c_picosat.sat_all(clauses, assumptions=assumptions)
except TypeError as e:
raise InvalidArgumentTypeError(str(e))
except ValueError as e:
raise InvalidArgumentValueError(str(e))
| 37.672131 | 79 | 0.650131 | 615 | 4,596 | 4.821138 | 0.239024 | 0.012816 | 0.014165 | 0.013491 | 0.715346 | 0.662057 | 0.662057 | 0.662057 | 0.649916 | 0.649916 | 0 | 0.020011 | 0.238903 | 4,596 | 121 | 80 | 37.983471 | 0.827616 | 0.766319 | 0 | 0.47619 | 0 | 0 | 0.019182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.142857 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4671817d5486f1ffa5048135771d27e1109e5cdd | 12,349 | py | Python | src/pyfmodex/sound.py | Loodoor/UnamedPy | 7d154c3a652992b3c1f28050f0353451f57b2a2d | [
"MIT"
] | 1 | 2017-02-21T16:46:21.000Z | 2017-02-21T16:46:21.000Z | src/pyfmodex/sound.py | Loodoor/UnamedPy | 7d154c3a652992b3c1f28050f0353451f57b2a2d | [
"MIT"
] | 1 | 2017-02-21T17:57:05.000Z | 2017-02-22T11:28:51.000Z | src/pyfmodex/sound.py | Loodoor/UnamedPy | 7d154c3a652992b3c1f28050f0353451f57b2a2d | [
"MIT"
] | null | null | null | from .fmodobject import *
from .fmodobject import _dll
from .structures import TAG, VECTOR
from .globalvars import get_class
class ConeSettings(object):
def __init__(self, sptr):
self._sptr = sptr
self._in = c_float()
self._out = c_float()
self._outvol = c_float()
ckresult(_dll.FMOD_Sound_Get3DConeSettings(self._sptr, byref(self._in), byref(self._out), byref(self._outvol)))
@property
def inside_angle(self):
return self._in.value
@inside_angle.setter
def inside_angle(self, angle):
self._in = c_float(angle)
self._commit()
@property
def outside_angle(self):
return self._out.value
@outside_angle.setter
def outside_angle(self, angle):
self._out = c_float(angle)
self._commit()
@property
def outside_volume(self):
return self._outvol.value
@outside_volume.setter
def outside_volume(self, vol):
self._outvol = c_float(vol)
self._commit()
def _commit(self):
ckresult(_dll.FMOD_Sound_Set3DConeSettings(self._sptr, self._in, self._out, self._outvol))
class Sound(FmodObject):
def add_sync_point(self, offset, offset_type, name):
s_ptr = c_void_p()
ckresult(_dll.FMOD_Sound_AddSyncPoint(self._ptr, offset, offset_type, name, byref(s_ptr)))
return s_ptr
def delete_sync_point(self, point):
ckresult(_dll.FMOD_Sound_DeleteSyncPoint(self._ptr, point))
@property
def threed_cone_settings(self):
return ConeSettings(self._ptr)
@property
def custom_rolloff(self):
"""Returns the custom rolloff curve.
:rtype: List of [x, y, z] lists.
"""
num = c_int()
self._call_fmod("FMOD_Sound_Get3DCustomRolloff", None, byref(num))
curve = (VECTOR * num.value)()
self._call_fmod("FMOD_Sound_Get3DCustomRolloff", byref(curve), 0)
return [p.to_list() for p in curve]
@custom_rolloff.setter
def custom_rolloff(self, curve):
"""Sets the custom rolloff curve.
:param curve: The curve to set.
:type curve: A list of something that can be treated as a list of [x, y, z] values e.g. implements indexing in some way.
"""
native_curve = (VECTOR * len(curve))(*[VECTOR.from_list(lst) for lst in curve])
self._call_fmod("FMOD_Sound_Set3DCustomRolloff", native_curve, len(native_curve))
@property
def _min_max_distance(self):
min = c_float()
max = c_float()
ckresult(_dll.FMOD_Sound_Get3DMinMaxDistance(self._ptr, byref(min), byref(max)))
return (min.value, max.value)
@_min_max_distance.setter
def _min_max_distance(self, dists):
ckresult(_dll.FMOD_Sound_Set3DMinMaxDistance(self._ptr, c_float(dists[0]), c_float(dists[1])))
@property
def min_distance(self):
return self._min_max_distance[0]
@min_distance.setter
def min_distance(self, dist):
self._min_max_distance = (dist, self._min_max_distance[1])
@property
def max_distance(self):
return self._min_max_distance[1]
@max_distance.setter
def max_distance(self, dist):
self._min_max_distance = (self._min_max_distance[0], dist)
@property
def _defaults(self):
freq = c_float()
vol = c_float()
pan = c_float()
pri = c_int()
ckresult(_dll.FMOD_Sound_GetDefaults(self._ptr, byref(freq), byref(vol), byref(pan), byref(pri)))
return [freq.value, vol.value, pan.value, pri.value]
@_defaults.setter
def _defaults(self, vals):
ckresult(_dll.FMOD_Sound_SetDefaults(self._ptr, c_float(vals[0]), c_float(vals[1]), c_float(vals[2]), vals[3]))
@property
def default_frequency(self):
return self._defaults[0]
@default_frequency.setter
def default_frequency(self, freq):
d = self._defaults
d[0] = freq
self._defaults = d
@property
def default_volume(self):
return self._defaults[1]
@default_volume.setter
def default_volume(self, vol):
d = self._defaults
d[1] = vol
self._defaults = d
@property
def default_pan(self):
return self._defaults[2]
@default_pan.setter
def default_pan(self, pan):
d = self._defaults
d[2] = pan
self._defaults = d
@property
def default_priority(self):
return self._defaults[3]
@default_priority.setter
def default_priority(self, pri):
d = self._defaults
d[3] = pri
self._defaults = d
@property
def format(self):
type = c_int()
format = c_int()
bits = c_int()
ckresult(_dll.FMOD_Sound_GetFormat(self._ptr, byref(type), byref(format), byref(bits)))
return so(type=type.value, format=format.value, bits=bits.value)
def get_length(self, ltype):
len = c_uint()
ckresult(_dll.FMOD_Sound_GetLength(self._ptr, byref(len), ltype))
return len.value
@property
def loop_count(self):
c = c_int()
ckresult(_dll.FMOD_Sound_GetLoopCount(self._ptr, byref(c)))
return c.value
@loop_count.setter
def loop_count(self, count):
ckresult(_dll.FMOD_Sound_SetLoopCount(self._ptr, count))
@property
def loop_points(self):
"""Returns tuple of two tuples ((start, startunit),(end, endunit))"""
start = c_uint()
startunit = c_int()
end = c_uint()
endunit = c_int()
ckresult(_dll.FMOD_Sound_GetLoopPoints(self._ptr, byref(start), byref(startunit), byref(end), byref(endunit)))
return ((start.value, startunit.value), (end.value, endunit.value))
@loop_points.setter
def loop_points(self, p):
"""Same format as returned from this property is required to successfully call this setter."""
ckresult(_dll.FMOD_Sound_SetLoopPoints(self._ptr, p[0][0], p[0][1], p[1][0], p[1][1]))
@property
def mode(self):
mode = c_int()
ckresult(_dll.FMOD_Sound_GetMode(self._ptr, byref(mode)))
return mode.value
@mode.setter
def mode(self, m):
ckresult(_dll.FMOD_Sound_SetMode(self._ptr, m))
def get_music_channel_volume(self, channel):
v = c_float()
ckresult(_dll.FMOD_Sound_GetMusicChannelVolume(self._ptr, channel, byref(v)))
return v.value
def set_music_channel_volume(self, id, vol):
ckresult(_dll.FMOD_Sound_SetMusicChannelVolume(self._ptr, id, c_float(vol)))
@property
def num_music_channels(self):
num = c_int()
ckresult(_dll.FMOD_Sound_GetMusicNumChannels(self._ptr, byref(num)))
return num.value
@property
def name(self):
name = create_string_buffer(256)
ckresult(_dll.FMOD_Sound_GetName(self._ptr, byref(name), 256))
return name.value
@property
def num_subsounds(self):
num = c_int()
ckresult(_dll.FMOD_Sound_GetNumSubSounds(self._ptr, byref(num)))
return num.value
@property
def num_sync_points(self):
num = c_int()
ckresult(_dll.FMOD_Sound_GetNumSyncPoints(self._ptr, byref(num)))
return num.value
@property
def num_tags(self):
num = c_int()
ckresult(_dll.FMOD_Sound_GetNumTags(self._ptr, byref(num)))
return num.value
@property
def open_state(self):
state = c_int()
percentbuffered = c_uint()
starving = c_bool()
diskbusy = c_bool()
ckresult(_dll.FMOD_Sound_GetOpenState(self._ptr, byref(state), byref(percentbuffered), byref(starving),
byref(diskbusy)))
return so(state=state.value, percent_buffered=percentbuffered.value, starving=starving.value,
disk_busy=diskbusy.value)
@property
def sound_group(self):
grp_ptr = c_void_p()
ckresult(_dll.FMOD_Sound_GetSoundGroup(self._ptr, byref(grp_ptr)))
return get_class("SoundGroup")(grp_ptr)
@sound_group.setter
def sound_group(self, group):
check_type(group, get_class("SoundGroup"))
ckresult(_dll.FMOD_Sound_SetSoundGroup(self._ptr, group._ptr))
def get_subsound(self, index):
sh_ptr = c_void_p()
ckresult(_dll.FMOD_Sound_GetSubSound(self._ptr, index, byref(sh_ptr)))
return Sound(sh_ptr)
def get_sync_point(self, index):
sp = c_int()
ckresult(_dll.FMOD_Sound_GetSyncPoint(self._ptr, index, byref(sp)))
return sp.value
def get_sync_point_info(self, point):
name = c_char_p()
offset = c_uint()
offsettype = c_int()
ckresult(_dll.FMOD_Sound_GetSyncPointInfo(self._ptr, point, byref(name), 256, byref(offset), byref(offsettype)))
return so(name=name.value, offset=offset.value, offset_type=offsettype.value)
@property
def system_object(self):
sptr = c_void_p()
ckresult(_dll.FMOD_Sound_GetSystemObject(self._ptr, byref(sptr)))
return get_class("System")(sptr, False)
def play(self, paused=False):
return self.system_object.play_sound(self, paused)
def get_tag(self, index, name=None):
tag = TAG()
ckresult(_dll.FMOD_Sound_GetTag(self._ptr, name, index, byref(tag)))
return tag
@property
def _variations(self):
freq = c_float()
vol = c_float()
pan = c_float()
ckresult(_dll.FMOD_Sound_GetVariations(self._ptr, byref(freq), byref(vol), byref(pan)))
return [freq.value, vol.value, pan.value]
@_variations.setter
def _variations(self, vars):
ckresult(_dll.FMOD_Sound_SetVariations(self._ptr, c_float(vars[0]), c_float(vars[1]), c_float(vars[2])))
@property
def frequency_variation(self):
return self._variations[0]
@frequency_variation.setter
def frequency_variation(self, var):
v = self._variations
v[0] = var
self._variations = var
@property
def volume_variation(self):
return self._variations[1]
@volume_variation.setter
def volume_variation(self, var):
v = self._variations
v[1] = var
self._variations = var
@property
def pan_variation(self):
return self._variations[2]
@pan_variation.setter
def pan_variation(self, var):
v = self._variations
v[2] = var
self._variations = var
def lock(self, offset, length):
ptr1 = c_void_p()
len1 = c_uint()
ptr2 = c_void_p()
len2 = c_uint()
ckresult(_dll.FMOD_Sound_Lock(self._ptr, offset, length, byref(ptr1), byref(ptr2), byref(len1), byref(len2)))
return ((ptr1, len1), (ptr2, len2))
def release(self):
ckresult(_dll.FMOD_Sound_Release(self._ptr))
def set_subsound(self, index, snd):
check_type(snd, Sound)
ckresult(_dll.FMOD_Sound_SetSubSound(self._ptr, index, snd._ptr))
def set_subsound_sentence(self, sounds):
a = c_int * len(sounds)
ptrs = [o._ptr for o in sounds]
ai = a(*ptrs)
ckresult(_dll.FMOD_Sound_SetSubSoundSentence(self._ptr, ai, len(ai)))
def unlock(self, i1, i2):
"""I1 and I2 are tuples of form (ptr, len)."""
ckresult(_dll.FMOD_Sound_Unlock(self._ptr, i1[0], i2[0], i1[1], i2[1]))
@property
def music_speed(self):
speed = c_float()
self._call_fmod("FMOD_Sound_GetMusicSpeed", byref(speed))
return speed.value
@music_speed.setter
def music_speed(self, speed):
self._call_fmod("FMOD_Sound_SetMusicSpeed", c_float(speed))
def read_data(self, length):
"""Read a fragment of the sound's decoded data.
:param length: The requested length.
:returns: The data and the actual length.
:rtype: Tuple of the form (data, actual)."""
buf = create_string_buffer(length)
actual = c_uint()
self._call_fmod("FMOD_Sound_ReadData", buf, length, byref(actual))
return buf.value, actual.value
def seek_data(self, offset):
"""Seeks for data reading purposes.
:param offset: The offset to seek to in PCM samples.
:type offset: Int or long, but must be in range of an unsigned long, not python's arbitrary long."""
self._call_fmod("FMOD_Sound_SeekData", offset) | 31.745501 | 128 | 0.639809 | 1,617 | 12,349 | 4.611008 | 0.155844 | 0.054319 | 0.076449 | 0.101931 | 0.252548 | 0.195279 | 0.121647 | 0.078192 | 0.030848 | 0.020118 | 0 | 0.008162 | 0.246012 | 12,349 | 389 | 129 | 31.745501 | 0.792611 | 0.064297 | 0 | 0.206667 | 0 | 0 | 0.017398 | 0.011803 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233333 | false | 0 | 0.013333 | 0.046667 | 0.386667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
46725864f7a8f29464ea63af729e3e78c2a1218d | 370 | py | Python | GreenMoon/forms.py | ma010/green-moon | 25ed395f1e19c180995b22508143c8819bf40fae | [
"CNRI-Python"
] | null | null | null | GreenMoon/forms.py | ma010/green-moon | 25ed395f1e19c180995b22508143c8819bf40fae | [
"CNRI-Python"
] | null | null | null | GreenMoon/forms.py | ma010/green-moon | 25ed395f1e19c180995b22508143c8819bf40fae | [
"CNRI-Python"
] | null | null | null | """
Implement a class function for user to put in a zip-code and
search relevant information about business entities in that zip-code area.
"""
from flask.ext.wtf import Form
from wtforms import StringField, BooleanField
from wtforms.validators import DataRequired
class inputZipForm(Form):
inputZip = StringField('inputZip', validators=[DataRequired()])
| 28.461538 | 78 | 0.77027 | 48 | 370 | 5.9375 | 0.6875 | 0.049123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156757 | 370 | 12 | 79 | 30.833333 | 0.913462 | 0.364865 | 0 | 0 | 0 | 0 | 0.036697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
467742b9ee49da3193dfeffba9fb6976ebe7eb72 | 2,391 | py | Python | nncf/experimental/onnx/algorithms/quantization/default_quantization.py | vuiseng9/nncf_pytorch | c2b1f069c867327203629201aecae3b7815e7895 | [
"Apache-2.0"
] | 136 | 2020-06-01T14:03:31.000Z | 2020-10-28T06:10:50.000Z | nncf/experimental/onnx/algorithms/quantization/default_quantization.py | vuiseng9/nncf_pytorch | c2b1f069c867327203629201aecae3b7815e7895 | [
"Apache-2.0"
] | 133 | 2020-05-26T13:48:04.000Z | 2020-10-28T05:25:55.000Z | nncf/experimental/onnx/algorithms/quantization/default_quantization.py | vuiseng9/nncf_pytorch | c2b1f069c867327203629201aecae3b7815e7895 | [
"Apache-2.0"
] | 36 | 2020-05-28T08:18:39.000Z | 2020-10-27T14:46:58.000Z | """
Copyright (c) 2022 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from nncf.common.quantization.quantizer_propagation.structs import QuantizationTrait
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXConvolutionMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXLinearMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXSigmoidMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXHardSigmoidMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXAveragePoolMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXGlobalAveragePoolMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXAddLayerMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXMulLayerMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXConcatLayerMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXBatchNormMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXResizeMetatype
from nncf.experimental.onnx.graph.metatypes.onnx_ops import ONNXSoftmaxMetatype
from nncf.common.graph.operator_metatypes import UnknownMetatype
DEFAULT_ONNX_QUANT_TRAIT_TO_OP_DICT = {
QuantizationTrait.INPUTS_QUANTIZABLE: [
ONNXConvolutionMetatype,
ONNXLinearMetatype,
ONNXAveragePoolMetatype,
ONNXGlobalAveragePoolMetatype,
ONNXAddLayerMetatype,
ONNXMulLayerMetatype,
ONNXBatchNormMetatype,
ONNXHardSigmoidMetatype,
ONNXResizeMetatype,
],
QuantizationTrait.NON_QUANTIZABLE: [ONNXSigmoidMetatype,
ONNXSoftmaxMetatype,
UnknownMetatype],
QuantizationTrait.CONCAT: [ONNXConcatLayerMetatype],
QuantizationTrait.OUTPUT_QUANTIZATION_AS_WEIGHTS: []
}
| 49.8125 | 89 | 0.79632 | 261 | 2,391 | 7.199234 | 0.394636 | 0.059606 | 0.127728 | 0.153273 | 0.325705 | 0.325705 | 0.325705 | 0.325705 | 0.325705 | 0 | 0 | 0.003929 | 0.148473 | 2,391 | 47 | 90 | 50.87234 | 0.918959 | 0.233793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4375 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4677cd39827e65c98f0ade72fd58eb0f79b2c0cc | 671 | py | Python | packages/pyre/tracking/Chain.py | lijun99/pyre | 004dfd4c06489b4ba5b32877338ca6440f2d523b | [
"BSD-3-Clause"
] | 3 | 2019-08-02T21:02:47.000Z | 2021-09-08T13:59:43.000Z | packages/pyre/tracking/Chain.py | lijun99/pyre | 004dfd4c06489b4ba5b32877338ca6440f2d523b | [
"BSD-3-Clause"
] | null | null | null | packages/pyre/tracking/Chain.py | lijun99/pyre | 004dfd4c06489b4ba5b32877338ca6440f2d523b | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
#
# michael a.g. aïvázis
# orthologue
# (c) 1998-2019 all rights reserved
#
# declaration
class Chain:
"""
A locator that ties together two others in order to express that something in {next}
caused {this} to be recorded
"""
# meta methods
def __init__(self, this, next):
self.this = this
self.next = next
return
def __str__(self):
# if {next} is non-trivial, show the chain
if self.next: return "{0.this}, {0.next}".format(self)
# otherwise don't
return "{0.this}".format(self)
# implementation details
__slots__ = "this", "next"
# end of file
| 18.638889 | 88 | 0.593145 | 88 | 671 | 4.386364 | 0.647727 | 0.041451 | 0.056995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025052 | 0.28614 | 671 | 35 | 89 | 19.171429 | 0.780793 | 0.47541 | 0 | 0 | 0 | 0 | 0.105919 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4689fd0a503a48da1fc4fb1000e346ebf2f7be93 | 605 | py | Python | tests/port_tests/point_tests/test_bounding_box.py | skrat/martinez | 86db48324cb50ecb52be8ab2e4278a6d5cdd562b | [
"MIT"
] | 7 | 2020-05-07T08:13:44.000Z | 2021-12-17T07:33:51.000Z | tests/port_tests/point_tests/test_bounding_box.py | skrat/martinez | 86db48324cb50ecb52be8ab2e4278a6d5cdd562b | [
"MIT"
] | 17 | 2019-11-29T23:17:26.000Z | 2020-12-20T15:47:17.000Z | tests/port_tests/point_tests/test_bounding_box.py | skrat/martinez | 86db48324cb50ecb52be8ab2e4278a6d5cdd562b | [
"MIT"
] | 1 | 2020-12-17T22:44:21.000Z | 2020-12-17T22:44:21.000Z | from hypothesis import given
from tests.port_tests.hints import (PortedBoundingBox,
PortedPoint)
from tests.utils import equivalence
from . import strategies
@given(strategies.points)
def test_basic(point: PortedPoint) -> None:
assert isinstance(point.bounding_box, PortedBoundingBox)
@given(strategies.points, strategies.points)
def test_bijection(first_point: PortedPoint,
second_point: PortedPoint) -> None:
assert equivalence(first_point == second_point,
first_point.bounding_box == second_point.bounding_box)
| 31.842105 | 77 | 0.707438 | 64 | 605 | 6.5 | 0.390625 | 0.115385 | 0.115385 | 0.110577 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219835 | 605 | 18 | 78 | 33.611111 | 0.881356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.153846 | false | 0 | 0.307692 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
468d721e5802a550fe36c1b0efccab7310faf51c | 697 | py | Python | thgsp/sampling/__init__.py | qiuyy20/thgsp | 2cd09ba09716cc716a3d4e125d2d0b20f5cc942d | [
"BSD-3-Clause"
] | null | null | null | thgsp/sampling/__init__.py | qiuyy20/thgsp | 2cd09ba09716cc716a3d4e125d2d0b20f5cc942d | [
"BSD-3-Clause"
] | null | null | null | thgsp/sampling/__init__.py | qiuyy20/thgsp | 2cd09ba09716cc716a3d4e125d2d0b20f5cc942d | [
"BSD-3-Clause"
] | null | null | null | from ._utils import construct_dia, construct_hth, construct_sampling_matrix
from .bsgda import bsgda, computing_sets, recon_bsgda, solving_set_covering
from .ess import ess, ess_sampling, recon_ess
from .fastgsss import fastgsss, recon_fastssss
from .rsbs import cheby_coeff4ideal_band_pass, estimate_lk, recon_rsbs, rsbs
__all__ = [
"ess",
"ess_sampling",
"bsgda",
"computing_sets",
"solving_set_covering",
"cheby_coeff4ideal_band_pass",
"estimate_lk",
"rsbs",
"fastgsss",
# reconstruction
"recon_fastssss",
"recon_bsgda",
"recon_ess",
"recon_rsbs",
# utils
"construct_sampling_matrix",
"construct_hth",
"construct_dia",
]
| 25.814815 | 76 | 0.71736 | 82 | 697 | 5.646341 | 0.329268 | 0.051836 | 0.090713 | 0.103672 | 0.146868 | 0.146868 | 0 | 0 | 0 | 0 | 0 | 0.003515 | 0.183644 | 697 | 26 | 77 | 26.807692 | 0.810193 | 0.028694 | 0 | 0 | 0 | 0 | 0.295252 | 0.077151 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.086957 | 0.217391 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
469393ea6c4b1c5c7b78ca579da1a18fef848cb3 | 625 | py | Python | tests/test_env_helpers.py | Azraeht/py-ndebug | b7d13b39adc6c0ece6b0d527752869fd94eb9f8a | [
"MIT"
] | null | null | null | tests/test_env_helpers.py | Azraeht/py-ndebug | b7d13b39adc6c0ece6b0d527752869fd94eb9f8a | [
"MIT"
] | 1 | 2020-03-24T17:29:40.000Z | 2020-03-24T17:29:40.000Z | tests/test_env_helpers.py | Azraeht/py-ndebug | b7d13b39adc6c0ece6b0d527752869fd94eb9f8a | [
"MIT"
] | 1 | 2020-03-24T16:41:31.000Z | 2020-03-24T16:41:31.000Z | from ndebug import env_helpers
def test_inspect_ops(mocker):
mocker.patch.dict('os.environ', {'DEBUG_COLORS': 'no',
'DEBUG_DEPTH': '10',
'DEBUG_SHOW_HIDDEN': 'enabled',
'DEBUG_SOMETHING': 'null'})
actual = env_helpers.options()
assert actual == {'colors': False, 'depth': 10, 'show_hidden': True, 'something': None}
def test_load_and_save():
actual = env_helpers.load()
assert actual == ''
env_helpers.save('test:data')
actual = env_helpers.load()
assert actual == 'test:data'
| 31.25 | 91 | 0.5536 | 67 | 625 | 4.925373 | 0.507463 | 0.151515 | 0.193939 | 0.121212 | 0.193939 | 0.193939 | 0 | 0 | 0 | 0 | 0 | 0.009281 | 0.3104 | 625 | 19 | 92 | 32.894737 | 0.756381 | 0 | 0 | 0.142857 | 0 | 0 | 0.2064 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4694573c6edf0ff0ed4f4786ad3fb6ae431575db | 29,122 | py | Python | commands/__init__.py | CorneliaXaos/Command-Block-Assembly | 6ed002c7df856d95d8cb2b8e5346c2bb807bf4bc | [
"MIT"
] | 1 | 2020-06-13T13:57:11.000Z | 2020-06-13T13:57:11.000Z | commands/__init__.py | CorneliaXaos/Command-Block-Assembly | 6ed002c7df856d95d8cb2b8e5346c2bb807bf4bc | [
"MIT"
] | null | null | null | commands/__init__.py | CorneliaXaos/Command-Block-Assembly | 6ed002c7df856d95d8cb2b8e5346c2bb807bf4bc | [
"MIT"
] | null | null | null | import abc
class CommandBlock:
def __init__(self, command, conditional=True, mode='CHAIN', auto=True,
opposite=False, single_use=True):
self.command = command
self.cond = conditional
self.mode = mode
self.auto = auto
self.opposite = opposite
self.single_use = single_use
def resolve(self, scope):
return self.command.resolve(scope)
class Resolvable(metaclass=abc.ABCMeta):
@abc.abstractmethod
def resolve(self, scope):
pass
class SimpleResolve(Resolvable):
def __init__(self, *args):
self.args = args
def resolve(self, scope):
return ' '.join(map(lambda el: el.resolve(scope) \
if isinstance(el, Resolvable) \
else el, self.args))
class Command(Resolvable):
pass
class EntityRef(Resolvable):
def is_single_entity(self, scope):
raise NotImplementedError()
@property
def ref(self):
return EntityReference(self)
class ObjectiveRef(Resolvable):
def __init__(self, name):
assert type(name) == str
self.objective = name
def resolve(self, scope):
return scope.objective(self.objective)
class NameRef(EntityRef):
def __init__(self, name):
assert type(name) == str
self.name = name
@property
def is_single_entity(self, scope):
return True
def resolve(self, scope):
return self.name
class ScoreRef:
def __init__(self, target, objective):
assert isinstance(target, EntityRef)
assert isinstance(objective, ObjectiveRef)
self.target = target
self.objective = objective
def resolve_pair(self, scope):
return '%s %s' % (self.target.resolve(scope),
self.objective.resolve(scope))
class Var(ScoreRef):
def __init__(self, nameref):
super().__init__(GlobalEntity, ObjectiveRef(nameref))
def make_selector(selector, **kwargs):
output = '@' + selector
if not kwargs:
return output
def str_pairs(items):
output = []
for key, value in items:
if type(value) == dict:
value = '{%s}' % str_pairs(value.items())
output.append('%s=%s' % (key, value))
return ','.join(output)
return '%s[%s]' % (output, str_pairs(kwargs.items()))
class Selector(EntityRef):
def __init__(self, type, args=None):
assert type in 'aespr'
self.type = type
assert args is None or isinstance(args, SelectorArgs)
self.args = args
def resolve_params(self, scope):
if not self.args:
return {}
return self.args.resolve(scope)
def is_single_entity(self, scope):
if self.type in 'spr':
return True
params = self.resolve_params(scope)
return 'limit' in params and params['limit'] == '1'
def resolve(self, scope):
return make_selector(self.type, **self.resolve_params(scope))
class _GlobalEntity(EntityRef):
def is_single_entity(self, scope):
return True
def resolve(self, scope):
return scope.global_entity()
GlobalEntity = _GlobalEntity()
class _PosUtil(EntityRef):
def is_single_entity(self, scope):
return True
def resolve(self, scope):
return scope.pos_util_entity()
PosUtil = _PosUtil()
class NbtPath(Resolvable):
def __init__(self, path):
self.path = path
def subpath(self, childpath):
# TODO path validation
return self.__class__(self.path + childpath)
def resolve(self, scope):
return self.path
def __eq__(self, other):
if type(other) != type(self):
return False
return self.path == other.path
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self.path)
class Path(NbtPath):
def resolve(self, scope):
return scope.custom_nbt_path(self.path)
class ArrayPath(Path):
def __init__(self, index=None, key=None):
sub = '[%d]' % index if index is not None else ''
assert key is None or index is not None
sub += '.%s' % key if key else ''
super().__init__('%s%s' % (self.name, sub))
def subpath(self, childpath):
# Don't use our constructor
return Path(self.path).subpath(childpath)
class StackPath(ArrayPath):
name = 'stack'
def StackFrame(index):
class StackFramePath(ArrayPath):
name = 'stack[%d].stack' % (-index - 1)
return StackFramePath
StackFrameHead = StackFrame(0)
class GlobalPath(ArrayPath):
name = 'globals'
class Cmd(Command):
def __init__(self, cmd):
self.command = cmd
def resolve(self, scope):
return self.command
class Execute(Command):
def __init__(self, chain):
self.chain = SimpleResolve(*chain._components)
def resolve(self, scope):
return 'execute %s' % self.chain.resolve(scope)
def ensure_selector(sel_arg):
assert isinstance(sel_arg, EntityRef), sel_arg
return sel_arg
class ExecuteChain:
def __init__(self):
self._components = []
self.can_terminate = False
def add(self, *args):
for arg in args:
if type(arg) in [str, int, float]:
self._components.append(str(arg))
elif isinstance(arg, Resolvable):
self._components.append(arg)
else:
assert False, type(arg)
return self
def run(self, cmd):
self.add('run', cmd)
return Execute(self)
def finish(self):
assert self.can_terminate
return Execute(self)
def as_entity(self, select_arg):
self.can_terminate = False
return self.add('as', ensure_selector(select_arg))
def at(self, select_arg):
self.can_terminate = False
return self.add('at', ensure_selector(select_arg))
def at_pos(self, pos):
self.can_terminate = False
return self.add('positioned', pos)
def at_entity_pos(self, select_arg):
self.can_terminate = False
return self.add('positioned', 'as', ensure_selector(select_arg))
def align(self, axes):
self.can_terminate = False
assert ''.join(axis for axis in axes if axis in 'xyz') == axes
return self.add('align', axes)
def facing(self, pos):
self.can_terminate = False
return self.add('facing', pos)
def facing_entity(self, select_arg, feature):
self.can_terminate = False
assert feature == 'eyes' or feature == 'feet'
return self.add('facing', 'entity', ensure_selector(select_arg), \
feature)
def rotated(self, y, x):
self.can_terminate = False
return self.add('rotated', y, x)
def rotated_as_entity(self, select_arg):
self.can_terminate = False
return self.add('rotated', 'as', ensure_selector(select_arg))
def anchored(self, anchor):
self.can_terminate = False
assert anchor == 'feet' or anchor == 'eyes'
return self.add('anchored', anchor)
def cond(self, cond_type):
self.can_terminate = False
assert cond_type == 'if' or cond_type == 'unless'
return ExecuteChain.Cond(self, cond_type)
class Cond:
def add(self, *args):
self.parent.can_terminate = True
return self.parent.add(*((self.cond_type,) + args))
def __init__(self, parent, cond_type):
self.parent = parent
self.cond_type = cond_type
def entity(self, entityref):
return self.add('entity', ensure_selector(entityref))
def score(self, targetref, operator, sourceref):
assert isinstance(targetref, ScoreRef)
assert isinstance(sourceref, ScoreRef)
assert operator in ['<', '<=', '=', '>=', '>']
return self.add('score', targetref.target, targetref.objective,
operator, sourceref.target, sourceref.objective)
def score_range(self, scoreref, range):
assert isinstance(scoreref, ScoreRef)
assert isinstance(range, ScoreRange)
return self.add('score', scoreref.target, scoreref.objective,
'matches', range)
def block(self, pos, block):
assert isinstance(pos, WorldPos) and pos.block_pos
return self.add('block', pos, block)
def blocks_match(self, begin, end, dest, type):
assert type in ['all', 'masked']
return self.add('blocks', begin, end, dest, type)
def store(self, store_type):
assert store_type in ['result', 'success']
self.can_terminate = False
return ExecuteChain.Store(self, store_type)
class Store:
def add(self, *args):
return self.parent.add(*(('store', self.store_type) + args))
def __init__(self, parent, store_type):
self.parent = parent
self.store_type = store_type
def score(self, scoreref):
assert isinstance(scoreref, ScoreRef)
return self.add('score', scoreref.target, scoreref.objective)
def entity(self, target, path, data_type, scale=1):
return self.add('entity', ensure_selector(target), \
path, data_type, scale)
def bossbar(self, bar, attr):
assert attr in ['value', 'max']
return self.add('bossbar', bar, attr)
class BlockOrEntityRef(Resolvable):
pass
class EntityReference(BlockOrEntityRef):
def __init__(self, target):
assert isinstance(target, EntityRef)
self.target = target
def resolve(self, scope):
assert self.target.is_single_entity(scope)
return 'entity %s' % self.target.resolve(scope)
class WorldPos(Resolvable):
def __init__(self, x, y, z, block_pos=False):
is_anchor = self._check_coord(x, True, not block_pos)
was_anchor = self._check_coord(y, is_anchor, not block_pos)
is_anchor = self._check_coord(z, was_anchor, not block_pos)
if was_anchor:
assert is_anchor
self.x, self.y, self.z = x, y, z
self.block_pos = block_pos
def _check_coord(self, val, allow_anchor, allow_float):
if isinstance(val, AnchorRelCoord):
assert allow_anchor
return True
if type(val) == float:
assert allow_float
return False
if type(val) == int:
return False
if isinstance(val, WorldRelCoord):
return False
assert False, val
@property
def ref(self):
return BlockReference(self)
def resolve(self, scope):
return '%s %s %s' % (self.x, self.y, self.z)
class RelativeCoord:
def __init__(self, val):
self.str = self.marker
if type(val) == int:
if val != 0:
self.str += '%d' % val
elif type(val) == float:
if val != 0.0:
# https://stackoverflow.com/a/2440786
self.str += ('%f' % val).rstrip('0').rstrip('.')
else:
assert False, val
self.val = val
def __str__(self):
return self.str
class WorldRelCoord(RelativeCoord):
marker = '~'
class AnchorRelCoord(RelativeCoord):
marker = '^'
class BlockReference(BlockOrEntityRef):
def __init__(self, pos):
assert isinstance(pos, WorldPos) and pos.block_pos
self.pos = pos
def resolve(self, scope):
return 'block %s' % self.pos.resolve(scope)
class _UtilBlockPos(WorldPos):
def __init__(self, is_zero_tick):
self.block_pos = True
self.is_zero_tick = is_zero_tick
def resolve(self, scope):
if self.is_zero_tick:
return scope.get_zero_tick_block()
return scope.get_util_block()
UtilBlockPos = _UtilBlockPos(False)
ZeroTickBlockPos = _UtilBlockPos(True)
class DataGet(Command):
def __init__(self, target, path, scale=1):
assert isinstance(target, BlockOrEntityRef)
assert isinstance(scale, (int, float))
self.target = target
self.path = path
self.scale = int(scale) if scale == int(scale) else scale
def resolve(self, scope):
return 'data get %s %s %s' % (self.target.resolve(scope),
self.path.resolve(scope), self.scale)
class DataMerge(Command):
def __init__(self, ref, nbt):
assert isinstance(ref, BlockOrEntityRef)
self.ref = ref
self.nbt = nbt
def resolve(self, scope):
return 'data merge %s %s' % (self.ref.resolve(scope),
self.nbt.resolve(scope))
class DataModify(Command):
def __init__(self, ref, path, action, *rest):
assert isinstance(ref, BlockOrEntityRef)
self.ref = ref
self.path = path
self.action = action
self.init(*rest)
def resolve(self, scope):
return 'data modify %s %s %s' % (
self.ref.resolve(scope), self.path.resolve(scope), self.action)
class DataModifyValue(DataModify):
def init(self, val):
self.val = val
def resolve(self, scope):
return '%s value %s' % (super().resolve(scope), self.val.resolve(scope))
class DataModifyFrom(DataModify):
def init(self, ref, path):
assert isinstance(ref, BlockOrEntityRef)
self.fromref = ref
self.frompath = path
def resolve(self, scope):
return '%s from %s %s' % (super().resolve(scope),
self.fromref.resolve(scope), self.frompath.resolve(scope))
class DataModifyStack(DataModifyValue):
def __init__(self, index, key, action, value, path=StackPath):
super().__init__(GlobalEntity.ref, path(index, key), action,
value)
class DataRemove(Command):
def __init__(self, ref, path):
assert isinstance(ref, BlockOrEntityRef)
self.ref = ref
self.path = path
def resolve(self, scope):
return 'data remove %s %s' % (self.ref.resolve(scope),
self.path.resolve(scope))
class Function(Command):
def __init__(self, func_name):
self.name = func_name
def resolve(self, scope):
return 'function %s' % scope.function_name(self.name)
class Tellraw(Command):
def __init__(self, text, target):
assert isinstance(text, TextComponentHolder)
assert isinstance(target, EntityRef)
self.text = text
self.target = target
def resolve(self, scope):
return 'tellraw %s %s' % (self.target.resolve(scope),
self.text.resolve_str(scope))
class TextComponent(Resolvable):
pass
class TextComponentHolder(TextComponent):
def __init__(self, style, children):
self.style = style
self.children = children
def resolve_str(self, scope):
import json
return json.dumps(self.resolve(scope), separators=(',', ':'))
def resolve(self, scope):
text = {}
for key, value in self.style.items():
text[key] = self._resolve_style(key, value, scope)
extra = []
for child in self.children:
if isinstance(child, TextComponentHolder) and not child.style:
for child_child in child.children:
extra.append(child_child.resolve(scope))
else:
extra.append(child.resolve(scope))
if not self.style:
return extra
if extra:
if len(extra) == 1 and type(extra[0]) == dict:
text.update(extra[0])
else:
text['extra'] = extra
return text
def _resolve_style(self, key, value, scope):
if key == 'clickEvent':
assert isinstance(value, TextClickAction)
return value.resolve(scope)
return value
class TextStringComponent(TextComponent):
def __init__(self, stringval):
self.val = stringval
def resolve(self, scope):
return {'text': self.val}
class TextNBTComponent(TextComponent):
def __init__(self, entity, path):
assert isinstance(entity, EntityRef)
assert isinstance(path, Path)
self.entity = entity
self.path = path
def resolve(self, scope):
assert self.entity.is_single_entity(scope)
return {'nbt': self.path.resolve(scope),
'entity': self.entity.resolve(scope)}
class TextScoreComponent(TextComponent):
def __init__(self, ref):
assert isinstance(ref, ScoreRef)
self.ref = ref
def resolve(self, scope):
return {'score':
{'name': self.ref.target.resolve(scope),
'objective': self.ref.objective.resolve(scope)}}
class TextClickAction(Resolvable):
def __init__(self, action, value):
self.action = action
self.value = value
def resolve(self, scope):
if type(self.value) == str:
value = self.value
else:
assert self.action in ['run_command', 'suggest_command'] \
and isinstance(self.value, Command)
value = self.value.resolve(scope)
return {'action': self.action, 'value': value}
class Teleport(Command):
def __init__(self, target, *more):
assert isinstance(target, EntityRef)
self.args = [target]
self.args.extend(more)
def resolve(self, scope):
return 'tp %s' % ' '.join(a.resolve(scope) for a in self.args)
class Clone(Command):
def __init__(self, src0, src1, dest):
self.src0 = src0
self.src1 = src1
self.dest = dest
def resolve(self, scope):
return 'clone %s %s %s' % (self.src0.resolve(scope),
self.src1.resolve(scope),
self.dest.resolve(scope))
class Setblock(Command):
def __init__(self, pos, block):
assert isinstance(pos, WorldPos) and pos.block_pos
self.pos = pos
self.block = block
def resolve(self, scope):
return 'setblock %s %s' % (self.pos.resolve(scope),
self.block.resolve(scope))
class Scoreboard(Command):
allows_negative = False
def __init__(self, varref, value):
assert isinstance(varref, ScoreRef)
assert isinstance(value, int)
assert self.allows_negative or value >= 0
self.var = varref
self.value = value
def resolve(self, scope):
return 'scoreboard players %s %s %d' % (
self.op, self.var.resolve_pair(scope), self.value)
class SetConst(Scoreboard):
op = 'set'
allows_negative = True
class AddConst(Scoreboard):
op = 'add'
class RemConst(Scoreboard):
op = 'remove'
class GetValue(Command):
def __init__(self, scoreref):
assert isinstance(scoreref, ScoreRef)
self.ref = scoreref
def resolve(self, scope):
return 'scoreboard players get %s' % self.ref.resolve_pair(scope)
class Operation(Command):
def __init__(self, left, right):
assert isinstance(left, ScoreRef)
assert isinstance(right, ScoreRef)
self.left = left
self.right = right
def resolve(self, scope):
return 'scoreboard players operation %s %s %s' % (
self.left.resolve_pair(scope), self.op,
self.right.resolve_pair(scope))
class OpAssign(Operation): op = '='
class OpAdd(Operation): op = '+='
class OpSub(Operation): op = '-='
class OpMul(Operation): op = '*='
class OpDiv(Operation): op = '/='
class OpMod(Operation): op = '%='
class OpIfLt(Operation): op = '<'
class OpIfGt(Operation): op = '>'
class OpSwap(Operation): op = '><'
class SelectorArgs(Resolvable):
pass
class SimpleSelectorArgs(SelectorArgs):
def __init__(self, args):
self.args = args
def resolve(self, scope):
return dict(self.args)
class ScoreRange(Resolvable):
def __init__(self, min=None, max=None):
assert min is not None or max is not None
self.min = min
self.max = max
def resolve(self, scope):
range = ''
if self.min is not None:
range = '%d' % self.min
if self.max is not None and self.max != self.min:
range += '..%d' % self.max
elif self.max is None:
range += '..'
return range
class SelRange(SelectorArgs):
def __init__(self, objective, min=None, max=None):
assert isinstance(objective, ObjectiveRef)
self.objective = objective
self.range = ScoreRange(min, max)
def resolve(self, scope):
return {'scores': { self.objective.resolve(scope):
self.range.resolve(scope) }}
class SelEquals(SelRange):
def __init__(self, objective, value):
super().__init__(objective, value, value)
class ComboSelectorArgs(SelectorArgs):
@staticmethod
def new(first, second):
if first is None: return second
if second is None: return first
return ComboSelectorArgs(first, second)
def __init__(self, first, second):
self.first = first
self.second = second
def resolve(self, scope):
sel = {}
sel.update(self.first.resolve(scope))
sel.update(self.second.resolve(scope))
return sel
class SelNbt(SelectorArgs):
def __init__(self, path, value):
self.nbt_spec = {}
if not path:
self.nbt_spec = value
else:
self.build_selector(path, self.nbt_spec, value)
def build_selector(self, path, parent, value):
for i in range(len(path) - 1):
node = path[i]
if node.isdigit():
pos = int(node)
while len(parent) < pos + 1:
parent.append({})
parent = parent[pos]
continue
if node not in parent:
parent[node] = {}
if len(path) > i + 1:
if path[i+1].isdigit():
if not parent[node]:
parent[node] = []
else:
assert type(parent[node]) == list
parent = parent[node]
if path[-1].isdigit():
pos = int(path[-1])
while len(parent) < pos + 1:
parent.append({})
path[-1] = pos
parent[path[-1]] = value
def stringify_nbt(self, node, scope):
# TODO quoted keys
if type(node) == dict:
return '{%s}' % ','.join('%s:%s' % (k, self.stringify_nbt(v, scope))
for k,v in node.items())
if type(node) == list:
return '[%s]' % ','.join(map(lambda n:self.stringify_nbt(n, scope), node))
if isinstance(node, Resolvable):
return node.resolve(scope)
assert False, type(node)
def resolve(self, scope):
return {'nbt': self.stringify_nbt(self.nbt_spec, scope)}
class TeamName(Resolvable):
def __init__(self, name):
self.name = name
def resolve(self, scope):
return scope.team_name(self.name)
class TeamModify(Command):
def __init__(self, team, attr, value):
assert isinstance(team, TeamName)
self.team = team
assert attr in ['color', 'friendlyFire', 'seeFriendlyInvisibles',
'nametagVisibility', 'deathMessageVisibility',
'collisionRule', 'displayName', 'prefix', 'suffix']
self.attr = attr
self.value = value
def resolve(self, scope):
return 'team modify %s %s %s' % (self.team.resolve(scope), self.attr,
self.value)
class JoinTeam(Command):
def __init__(self, team, members):
assert isinstance(team, TeamName)
assert members is None or isinstance(members, EntityRef)
self.team = team
self.members = members
def resolve(self, scope):
members = (' ' + self.members.resolve(scope)) if self.members else ''
return 'team join %s%s' % (self.team.resolve(scope), members)
class Bossbar(Resolvable):
def __init__(self, name):
self.name = name
def resolve(self, scope):
return scope.bossbar(self.name)
class BossbarSet(Command):
def __init__(self, bar, prop, value):
assert isinstance(bar, Bossbar)
self.bar = bar
self.prop = prop
self.value = value
def resolve(self, scope):
value = (' ' + self.value.resolve(scope)) if self.value else ''
return 'bossbar set %s %s%s' % (self.bar.resolve(scope), self.prop,
value)
class Kill(Command):
def __init__(self, target):
assert isinstance(target, EntityRef)
self.target = target
def resolve(self, scope):
return 'kill %s' % self.target.resolve(scope)
class ReplaceItem(Command):
def __init__(self, ref, slot, item, amount=None):
assert isinstance(ref, BlockOrEntityRef)
self.ref = ref
self.slot = slot
self.item = item
self.amount = amount
def resolve(self, scope):
amount = (' %d' % self.amount) if self.amount is not None else ''
return 'replaceitem %s %s %s%s' % (self.ref.resolve(scope), self.slot,
self.item.resolve(scope), amount)
class GiveItem(Command):
def __init__(self, targets, item, count=1):
assert isinstance(targets, EntityRef)
self.targets = targets
self.item = item
self.count = count
def resolve(self, scope):
return 'give %s %s %d' % (self.targets.resolve(scope),
self.item.resolve(scope), self.count)
class ClearItem(Command):
def __init__(self, targets, item, max_count=-1):
assert isinstance(targets, EntityRef)
self.targets = targets
self.item = item
self.max_count = max_count
def resolve(self, scope):
return 'clear %s %s %d' % (self.targets.resolve(scope),
self.item.resolve(scope), self.max_count)
class EffectGive(Command):
def __init__(self, target, effect, seconds=None, amp=None, hide=None):
assert isinstance(target, EntityRef)
self.target = target
self.effect = effect
self.seconds = seconds if seconds is not None else 30
self.amp = amp if amp is not None else 0
self.hide = hide if hide is not None else False
def resolve(self, scope):
return 'effect give %s %s %d %d %s' % (self.target.resolve(scope),
self.effect, self.seconds, self.amp,
'true' if self.hide else 'false')
class Particle(Command):
def __init__(self, name, pos, delta, speed, count, mode, players):
self.name = name
self.pos = pos
self.delta = delta
self.speed = speed
self.count = count
self.mode = mode
self.players = players
def resolve(self, scope):
players = (' ' + self.players.resolve(scope)) if self.players else ''
return 'particle %s %s %s %f %d %s%s' % (self.name,
self.pos.resolve(scope), self.delta.resolve(scope),
self.speed, self.count, self.mode, players)
class Title(Command):
def __init__(self, target, action, *args):
assert isinstance(target, EntityRef)
self.target = target
self.action = action
self.args = args
def resolve(self, scope):
args = (' ' + SimpleResolve(*self.args).resolve(scope)) \
if self.args else ''
return 'title %s %s%s' % (self.target.resolve(scope), self.action, args)
class Summon(Command):
def __init__(self, entity_name, pos, data=None):
assert pos is None or isinstance(pos, WorldPos)
self.name = entity_name
self.pos = pos
self.data = data
def resolve(self, scope):
pos = (' ' + self.pos.resolve(scope)) if self.pos else \
(' ~ ~ ~' if self.data else '')
data = (' ' + self.data.resolve(scope)) if self.data else ''
return 'summon %s%s%s' % (self.name, pos, data)
class Advancement(Command):
def __init__(self, action, target, range, *args):
assert action in ['grant', 'revoke']
assert isinstance(target, EntityRef)
self.action = action
self.target = target
self.range = range
self.args = args
def resolve(self, scope):
args = (' ' + SimpleResolve(*self.args).resolve(scope)) \
if self.args else ''
return 'advancement %s %s %s%s' % (self.action,
self.target.resolve(scope),
self.range, args)
class AdvancementRef(Resolvable):
def __init__(self, name):
self.name = name
def resolve(self, scope):
return scope.advancement_name(self.name)
| 29.327291 | 86 | 0.588215 | 3,381 | 29,122 | 4.931381 | 0.097013 | 0.046063 | 0.039585 | 0.062676 | 0.325496 | 0.23571 | 0.177652 | 0.148384 | 0.113837 | 0.1046 | 0 | 0.002054 | 0.297816 | 29,122 | 992 | 87 | 29.356855 | 0.813292 | 0.003399 | 0 | 0.282086 | 0 | 0 | 0.03708 | 0.001482 | 0 | 0 | 0 | 0.001008 | 0.096257 | 1 | 0.219251 | false | 0.008021 | 0.002674 | 0.070856 | 0.513369 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
469d18528989ab40a67eb477eeda37c2533ddfd8 | 5,448 | py | Python | RecoEgamma/ElectronIdentification/python/Identification/mvaElectronID_Fall17_noIso_V1_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | RecoEgamma/ElectronIdentification/python/Identification/mvaElectronID_Fall17_noIso_V1_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | RecoEgamma/ElectronIdentification/python/Identification/mvaElectronID_Fall17_noIso_V1_cff.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from RecoEgamma.ElectronIdentification.Identification.mvaElectronID_tools import *
# Documentation of the MVA
# https://twiki.cern.ch/twiki/bin/viewauth/CMS/MultivariateElectronIdentificationRun2
# https://rembserj.web.cern.ch/rembserj/notes/Electron_MVA_ID_2017_documentation
#
# In this file we define the locations of the MVA weights, cuts on the MVA values
# for specific working points, and configure those cuts in VID
#
# The tag is an extra string attached to the names of the products
# such as ValueMaps that needs to distinguish cases when the same MVA estimator
# class is used with different tuning/weights
mvaTag = "Fall17NoIsoV1"
# There are 6 categories in this MVA. They have to be configured in this strict order
# (cuts and weight files order):
# 0 EB1 (eta<0.8) pt 5-10 GeV | pt < ptSplit && |eta| < ebSplit
# 1 EB2 (eta>=0.8) pt 5-10 GeV | pt < ptSplit && |eta| >= ebSplit && |eta| < ebeeSplit
# 2 EE pt 5-10 GeV | pt < ptSplit && |eta| >= ebeeSplit
# 3 EB1 (eta<0.8) pt 10-inf GeV | pt >= ptSplit && |eta| < ebSplit
# 4 EB2 (eta>=0.8) pt 10-inf GeV | pt >= ptSplit && |eta| >= ebSplit && |eta| < ebeeSplit
# 5 EE pt 10-inf GeV | pt >= ptSplit && |eta| >= ebeeSplit
mvaFall17WeightFiles_V1 = cms.vstring(
"RecoEgamma/ElectronIdentification/data/Fall17/EIDmva_EB1_5_2017_puinfo_BDT.weights.xml.gz",
"RecoEgamma/ElectronIdentification/data/Fall17/EIDmva_EB2_5_2017_puinfo_BDT.weights.xml.gz",
"RecoEgamma/ElectronIdentification/data/Fall17/EIDmva_EE_5_2017_puinfo_BDT.weights.xml.gz",
"RecoEgamma/ElectronIdentification/data/Fall17/EIDmva_EB1_10_2017_puinfo_BDT.weights.xml.gz",
"RecoEgamma/ElectronIdentification/data/Fall17/EIDmva_EB2_10_2017_puinfo_BDT.weights.xml.gz",
"RecoEgamma/ElectronIdentification/data/Fall17/EIDmva_EE_10_2017_puinfo_BDT.weights.xml.gz"
)
## The working point for this MVA that is expected to have about 90% signal
# WP tuned to give about 90 and 80% signal efficiecny for electrons from Drell-Yan with pT > 25 GeV
# The working point for the low pt categories is just taken over from the high pt
idName90 = "mvaEleID-Fall17-noIso-V1-wp90"
MVA_WP90 = EleMVA_WP(
idName = idName90, mvaTag = mvaTag,
cutCategory0 = "0.9165112826974601 - exp(-pt / 2.7381703555094217) * 1.03549199648109", # EB1 low pt
cutCategory1 = "0.8655738322220173 - exp(-pt / 2.4027944652597073) * 0.7975615613282494", # EB2 low pt
cutCategory2 = "-3016.035055227131 - exp(-pt / -52140.61856333602) * -3016.3029387236506", # EE low pt
cutCategory3 = "0.9616542816132922 - exp(-pt / 8.757943837889817) * 3.1390200321591206", # EB1
cutCategory4 = "0.9319258011430132 - exp(-pt / 8.846057432565809) * 3.5985063793347787", # EB2
cutCategory5 = "0.8899260780999244 - exp(-pt / 10.124234115859881) * 4.352791250718547", # EE
)
idName80 = "mvaEleID-Fall17-noIso-V1-wp80"
MVA_WP80 = EleMVA_WP(
idName = idName80, mvaTag = mvaTag,
cutCategory0 = "0.9530240956555949 - exp(-pt / 2.7591425841003647) * 0.4669644718545271", # EB1 low pt
cutCategory1 = "0.9336564763961019 - exp(-pt / 2.709276284272272) * 0.33512286599215946", # EB2 low pt
cutCategory2 = "0.9313133688365339 - exp(-pt / 1.5821934800715558) * 3.8889462619659265", # EE low pt
cutCategory3 = "0.9825268564943458 - exp(-pt / 8.702601455860762) * 1.1974861596609097", # EB1
cutCategory4 = "0.9727509457929913 - exp(-pt / 8.179525631018565) * 1.7111755094657688", # EB2
cutCategory5 = "0.9562619539540145 - exp(-pt / 8.109845366281608) * 3.013927699126942", # EE
)
### WP tuned for HZZ analysis with very high efficiency (about 98%)
# The working points were found by requiring the same signal efficiencies in
# each category as for the Spring 16 HZZ ID
# (see RecoEgamma/ElectronIdentification/python/Identification/mvaElectronID_Spring16_HZZ_V1_cff.py)
idNamewpLoose = "mvaEleID-Fall17-noIso-V1-wpLoose"
MVA_WPLoose = EleMVA_WP(
idName = idNamewpLoose, mvaTag = mvaTag,
cutCategory0 = "-0.13285867293779202", # EB1 low pt
cutCategory1 = "-0.31765300958836074", # EB2 low pt
cutCategory2 = "-0.0799205914718861" , # EE low pt
cutCategory3 = "-0.856871961305474" , # EB1
cutCategory4 = "-0.8107642141584835" , # EB2
cutCategory5 = "-0.7179265933023059" # EE
)
#
# Finally, set up VID configuration for all cuts
#
# Create the PSet that will be fed to the MVA value map producer
mvaEleID_Fall17_noIso_V1_producer_config = cms.PSet(
mvaName = cms.string(mvaClassName),
mvaTag = cms.string(mvaTag),
# Category parameters
nCategories = cms.int32(6),
categoryCuts = cms.vstring(*EleMVA_6CategoriesCuts),
# Weight files and variable definitions
weightFileNames = mvaFall17WeightFiles_V1,
variableDefinition = cms.string("RecoEgamma/ElectronIdentification/data/ElectronMVAEstimatorRun2Fall17V1Variables.txt")
)
# Create the VPset's for VID cuts
mvaEleID_Fall17_V1_wpLoose = configureVIDMVAEleID( MVA_WPLoose )
mvaEleID_Fall17_V1_wp90 = configureVIDMVAEleID( MVA_WP90 )
mvaEleID_Fall17_V1_wp80 = configureVIDMVAEleID( MVA_WP80 )
mvaEleID_Fall17_V1_wpLoose.isPOGApproved = cms.untracked.bool(True)
mvaEleID_Fall17_V1_wp90.isPOGApproved = cms.untracked.bool(True)
mvaEleID_Fall17_V1_wp80.isPOGApproved = cms.untracked.bool(True)
| 54.48 | 124 | 0.727423 | 693 | 5,448 | 5.611833 | 0.350649 | 0.015428 | 0.064798 | 0.023142 | 0.253793 | 0.195166 | 0.181281 | 0.157367 | 0.132168 | 0.132168 | 0 | 0.202083 | 0.171623 | 5,448 | 99 | 125 | 55.030303 | 0.65965 | 0.36362 | 0 | 0 | 0 | 0 | 0.498389 | 0.207735 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.036364 | 0 | 0.036364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
46a4f53ed5b4a611b18a262f155eca68d71783fb | 7,831 | py | Python | test/python/test_elementwise_ops.py | avijit-chakroborty/ngraph-bridge | b691d57412a40582ea93c6e564d80c750b7f2e8e | [
"Apache-2.0"
] | 142 | 2019-02-21T00:53:06.000Z | 2022-03-11T07:46:28.000Z | test/python/test_elementwise_ops.py | tensorflow/ngraph | ea6422491ec75504e78a63db029e7f74ec3479a5 | [
"Apache-2.0"
] | 252 | 2019-03-11T19:27:59.000Z | 2021-03-19T10:58:17.000Z | test/python/test_elementwise_ops.py | tensorflow/ngraph | ea6422491ec75504e78a63db029e7f74ec3479a5 | [
"Apache-2.0"
] | 65 | 2019-03-13T15:27:29.000Z | 2021-07-16T07:09:16.000Z | # ==============================================================================
# Copyright 2018-2020 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""nGraph TensorFlow bridge elementwise operations test
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pytest
import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
from common import NgraphTest
class TestElementwiseOperations(NgraphTest):
@pytest.mark.parametrize(("v1", "v2", "expected"),
((1.0, -1.0, [1.0]), (100, 200, ([200],)),
([0.0, 5.0, 10.0], [6.0],
(np.array([[6.0, 6.0, 10.0]]),))))
def test_maximum(self, v1, v2, expected):
val1 = tf.compat.v1.placeholder(tf.float32, shape=(None))
val2 = tf.compat.v1.placeholder(tf.float32, shape=(None))
out = tf.maximum(val1, val2)
sess_fn = lambda sess: sess.run((out,),
feed_dict={
val1: (v1,),
val2: (v2,)
})[0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
@pytest.mark.parametrize(
("v1", "v2", "expected"),
((1.4, 1.0, [False]), (-1.0, -1.0, ([True],)), (-1.0, 1000, [True]),
(200, 200, ([True],)), ([-1.0, 1.0, -4], [0.1, 0.1, -4],
(np.array([[True, False, True]]),)),
([-1.0, 1.0, -4], [-1.0], (np.array([[True, False, True]]),))))
def test_less_equal(self, v1, v2, expected):
val1 = tf.compat.v1.placeholder(tf.float32, shape=(None))
val2 = tf.compat.v1.placeholder(tf.float32, shape=(None))
out = tf.less_equal(val1, val2)
sess_fn = lambda sess: sess.run((out,),
feed_dict={
val1: (v1,),
val2: (v2,)
})[0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
@pytest.mark.parametrize(
("v1", "v2", "expected"),
((1.4, 1.0, [False]), (-1.0, -1.0, ([False],)), (-1.0, 1000, [True]),
(200, 200, ([False],)), ([-1.0, 1.0, -4], [0.1, 0.1, -4],
(np.array([[True, False, False]]),)),
([-1.0, 1.0, -4], [-1.0], (np.array([[False, False, True]]),))))
def test_less(self, v1, v2, expected):
val1 = tf.compat.v1.placeholder(tf.float32, shape=(None))
val2 = tf.compat.v1.placeholder(tf.float32, shape=(None))
out = tf.less(val1, val2)
sess_fn = lambda sess: sess.run((out,),
feed_dict={
val1: (v1,),
val2: (v2,)
})[0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
@pytest.mark.parametrize(
("v1", "v2", "expected"),
((1.4, 1.0, [True]), (-1.0, -1.0, ([True],)), (-1.0, 1000, [False]),
(200, 200, ([True],)), ([-1.0, 1.0, -4], [0.1, 0.1, -4],
(np.array([[False, True, True]]),)),
([-1.0, 1.0, -4], [-1.0], (np.array([[True, True, False]]),))))
def test_greater_equal(self, v1, v2, expected):
val1 = tf.compat.v1.placeholder(tf.float32, shape=(None))
val2 = tf.compat.v1.placeholder(tf.float32, shape=(None))
out = tf.greater_equal(val1, val2)
sess_fn = lambda sess: sess.run((out,),
feed_dict={
val1: (v1,),
val2: (v2,)
})[0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
@pytest.mark.parametrize(
("v1", "v2", "expected"),
((1.4, 1.0, [True]), (-1.0, -1.0, ([False],)), (-1.0, 1000, [False]),
(200, 200, ([False],)), ([-1.0, 1.0, -4], [0.1, 0.1, -4],
(np.array([[False, True, False]]),)),
([-1.0, 1.0, -4], [-1.0], (np.array([[False, True, False]]),))))
def test_greater(self, v1, v2, expected):
val1 = tf.compat.v1.placeholder(tf.float32, shape=(None))
val2 = tf.compat.v1.placeholder(tf.float32, shape=(None))
out = tf.greater(val1, val2)
sess_fn = lambda sess: sess.run((out,),
feed_dict={
val1: (v1,),
val2: (v2,)
})[0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
@pytest.mark.parametrize(("v1", "v2", "expected"),
((True, True, [True]), (True, False, ([False],)),
(1.0, -2.0, ([True],)), (False, 100, ([False],)),
([False, True, False], [True],
(np.array([[False, True, False]]),))))
def test_logical_and(self, v1, v2, expected):
val1 = tf.compat.v1.placeholder(tf.bool, shape=(None))
val2 = tf.compat.v1.placeholder(tf.bool, shape=(None))
out = tf.logical_and(val1, val2)
sess_fn = lambda sess: sess.run((out,),
feed_dict={
val1: (v1,),
val2: (v2,)
})[0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
@pytest.mark.parametrize(("test_input", "expected"), ((False, True),
(True, False)))
def test_logicalnot_1d(self, test_input, expected):
val = tf.compat.v1.placeholder(tf.bool, shape=(1,))
out = tf.logical_not(val)
sess_fn = lambda sess: sess.run((out,), feed_dict={val: (test_input,)})[
0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
def test_logicalnot_2d(self):
test_input = ((True, False, True), (False, True, False))
expected = np.logical_not(test_input)
val = tf.compat.v1.placeholder(tf.bool, shape=(2, 3))
out = tf.logical_not(val)
sess_fn = lambda sess: sess.run((out,), feed_dict={val: test_input})[0]
assert (self.with_ngraph(sess_fn) == self.without_ngraph(sess_fn)).all()
assert (self.with_ngraph(sess_fn) == expected).all()
| 47.75 | 80 | 0.477461 | 916 | 7,831 | 3.967249 | 0.146288 | 0.024216 | 0.079252 | 0.088057 | 0.724546 | 0.695377 | 0.690424 | 0.66896 | 0.623555 | 0.623555 | 0 | 0.06106 | 0.334951 | 7,831 | 163 | 81 | 48.042945 | 0.636713 | 0.099732 | 0 | 0.576 | 0 | 0 | 0.012806 | 0 | 0 | 0 | 0 | 0 | 0.128 | 1 | 0.064 | false | 0 | 0.056 | 0 | 0.128 | 0.008 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
46aa55bc676b909ffd23d501d1007af51f171f16 | 293 | py | Python | mydict.py | zengboming/python | 13018f476554adc3bff831af27c08f7c216d4b09 | [
"Apache-2.0"
] | null | null | null | mydict.py | zengboming/python | 13018f476554adc3bff831af27c08f7c216d4b09 | [
"Apache-2.0"
] | null | null | null | mydict.py | zengboming/python | 13018f476554adc3bff831af27c08f7c216d4b09 | [
"Apache-2.0"
] | null | null | null | #unit
#mydict.py
class Dict(dict):
def __init__(self,**kw):
super(Dict,self).__init__(**kw)
def __getattr__(self,key):
try:
return self[key]
except KeyError:
raise AttributeError(r"'Dict' object han no attribute'%s'" %key)
def __setattr__(self,key,value):
self[key]=value
| 19.533333 | 67 | 0.68942 | 43 | 293 | 4.325581 | 0.604651 | 0.150538 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153584 | 293 | 14 | 68 | 20.928571 | 0.75 | 0.044369 | 0 | 0 | 0 | 0 | 0.122302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
46b1624f4a6a70026386fb13d4c9f4cd8b816721 | 492 | py | Python | server/petsAPI/views.py | StoyanDimStoyanov/ReactDJango | 8c30730fbd3af0064f97444a91e65a9029a1dc0f | [
"MIT"
] | null | null | null | server/petsAPI/views.py | StoyanDimStoyanov/ReactDJango | 8c30730fbd3af0064f97444a91e65a9029a1dc0f | [
"MIT"
] | null | null | null | server/petsAPI/views.py | StoyanDimStoyanov/ReactDJango | 8c30730fbd3af0064f97444a91e65a9029a1dc0f | [
"MIT"
] | null | null | null | from django.shortcuts import render
from rest_framework import generics
# Create your views here.
from petsAPI.models import Pets
from petsAPI.serializers import PetSerializer
def index(req):
return render(req, 'index.html')
class PetsListApiView(generics.ListCreateAPIView):
queryset = Pets.objects.all()
serializer_class = PetSerializer
class PetDetailsApiView(generics.RetrieveUpdateDestroyAPIView):
queryset = Pets.objects.all()
serializer_class = PetSerializer | 24.6 | 63 | 0.792683 | 54 | 492 | 7.166667 | 0.574074 | 0.056848 | 0.098191 | 0.113695 | 0.258398 | 0.258398 | 0.258398 | 0 | 0 | 0 | 0 | 0 | 0.138211 | 492 | 20 | 64 | 24.6 | 0.912736 | 0.046748 | 0 | 0.333333 | 0 | 0 | 0.021368 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0.083333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
46b37f4db1428d7e3e970e352faebde87a24d82f | 7,329 | py | Python | src/bbdata/endpoint/output/objects.py | big-building-data/bbdata-python | 46335c9f8db9ceccbd795c4931db0e3041ba9a50 | [
"MIT"
] | null | null | null | src/bbdata/endpoint/output/objects.py | big-building-data/bbdata-python | 46335c9f8db9ceccbd795c4931db0e3041ba9a50 | [
"MIT"
] | null | null | null | src/bbdata/endpoint/output/objects.py | big-building-data/bbdata-python | 46335c9f8db9ceccbd795c4931db0e3041ba9a50 | [
"MIT"
] | null | null | null | import requests
from bbdata.config import output_api_url
from bbdata.util import handle_response
class Objects:
base_path = "/objects"
auth = None
def __init__(self, auth):
self.auth = auth
def get_all(self, tags=None, search=None, page=None, per_page=None,
writable=False):
"""
Get the list of accessible objects.
GET /objects
https://bbdata.daplab.ch/api/#objects_get
"""
params = {
"tags": tags,
"search": search,
"page": page,
"perPage": per_page,
"writable": writable,
}
url = output_api_url + self.base_path
r = requests.get(url, params, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
def put(self, name, unit_symbol, owner, description=None):
"""
Create a new object.
PUT /objects
https://bbdata.daplab.ch/api/#objects_put
"""
json = {
"name": name,
"description": description,
"unitSymbol": unit_symbol,
'owner': owner
}
url = output_api_url + self.base_path
r = requests.put(url, json=json, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
def get(self, object_id):
"""
Get an object.
GET /objects/{objectIs}
https://bbdata.daplab.ch/api/#objects__objectid__get
"""
url = output_api_url + self.base_path + "/" + str(object_id)
r = requests.get(url, headers=self.auth.headers)
# return ObjectResponse(r.json())
return handle_response(r.status_code, r.json())
def post(self, object_id, data):
"""
Edit the name and/or the description of the object.
Only the properties appearing in the body will be modified.
POST /objects/{objectId}
https://bbdata.daplab.ch/api/#objects__objectid__post
"""
# TODO The data to send isn't define in the API Docs
url = output_api_url + self.base_path + "/" + str(object_id)
r = requests.post(url, data, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
def delete(self, object_id):
"""
Delete the object with the given id
POST /objects/{objectId}
https://bbdata.daplab.ch/api/#objects__objectid__delete
"""
# TODO This method is in the Postman profile but isn't in the docs
url = output_api_url + self.base_path + "/" + str(object_id)
r = requests.delete(url, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
def post_disable(self, object_id):
"""
Disable this object. All associated tokens will be removed.
POST /objects/{objectId}/disable
https://bbdata.daplab.ch/api/#objects__objectid__disable_post
"""
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/disable"
r = requests.post(url, headers=self.auth.headers)
return handle_response(r.status_code, True)
def post_enable(self, object_id):
"""
Enable this object.
POST /objects/{objectId}/enable
https://bbdata.daplab.ch/api/#objects__objectid__enable_post
"""
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/enable"
r = requests.post(url, headers=self.auth.headers)
return handle_response(r.status_code, True)
def get_tokens(self, object_id, description=None):
"""
Get the list of tokens for the object. A token is used to submit new
measures (see input-api).
An optional description can be passed in the
body (max 65 characters).
GET /objects/{objectId}/tokens
https://bbdata.daplab.ch/api/#objects__objectid__tokens_get
"""
# TODO The API docs says it's possible to pass an optional description
# but it looks like it's a mistake for a GET request...
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/tokens"
json = {
"description": description
}
r = requests.get(url, json, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
def put_tokens(self, object_id):
"""
Generate a new secured token.
PUT /objects/{objectId}/tokens
https://bbdata.daplab.ch/api/#objects__objectid__tokens_put
"""
# TODO The optional description should probably be added in this
# method
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/tokens"
r = requests.put(url, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
def post_tokens(self, object_id, description):
"""
Edit the token's description.
POST /objects/{objectId}/tokens
https://bbdata.daplab.ch/api/#objects__objectid__tokens_post
"""
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/tokens"
json = {
"description": description
}
r = requests.post(url, json=json, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
def delete_tokens(self, object_id, token_id):
"""
Revoke a token.
DELETE /objects/{objectId}/tokens
https://bbdata.daplab.ch/api/#objects__objectid__tokens_delete
"""
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/tokens"
params = {
"tokenId": token_id
}
r = requests.delete(url, params=params, headers=self.auth.headers)
return handle_response(r.status_code, True)
def put_tags(self, object_id, tags):
"""
Add tags to the object.
PUT /objects/{objectId}/tags
https://bbdata.daplab.ch/api/#objects__objectid__tags_put
"""
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/tags"
params = {
"tags": tags
}
r = requests.put(url, params=params, headers=self.auth.headers)
return handle_response(r.status_code, True)
def delete_tags(self, object_id, tags):
"""
Remove tags.
DELETE /objects/{objectId}/tags
https://bbdata.daplab.ch/api/#objects__objectid__tags_delete
"""
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/tags"
params = {
"tags": tags
}
r = requests.put(url, params=params, headers=self.auth.headers)
return handle_response(r.status_code, True)
def get_comments(self, object_id):
"""
Get all comments attached to this object. Use the /comments endpoint
for more actions.
GET /objects/{objectId}/comments
https://bbdata.daplab.ch/api/#objects__objectid__comments_get
"""
url = output_api_url + self.base_path + "/" + str(object_id) \
+ "/comments"
r = requests.get(url, headers=self.auth.headers)
return handle_response(r.status_code, r.json())
| 33.619266 | 78 | 0.592714 | 894 | 7,329 | 4.661074 | 0.147651 | 0.046076 | 0.043197 | 0.063835 | 0.62395 | 0.594912 | 0.594912 | 0.542117 | 0.542117 | 0.513799 | 0 | 0.000386 | 0.292673 | 7,329 | 217 | 79 | 33.774194 | 0.803434 | 0.301132 | 0 | 0.5 | 0 | 0 | 0.039302 | 0 | 0 | 0 | 0 | 0.018433 | 0 | 1 | 0.147059 | false | 0 | 0.029412 | 0 | 0.343137 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
46b4aae481a7dcad8401c1fdb98aae95f3b590c6 | 2,207 | py | Python | api/patients/urls.py | Wellheor1/l2 | d980210921c545c68fe9d5522bb693d567995024 | [
"MIT"
] | 10 | 2018-03-14T06:17:06.000Z | 2022-03-10T05:33:34.000Z | api/patients/urls.py | Wellheor1/l2 | d980210921c545c68fe9d5522bb693d567995024 | [
"MIT"
] | 512 | 2018-09-10T07:37:34.000Z | 2022-03-30T02:23:43.000Z | api/patients/urls.py | D00dleman/l2 | 0870144537ee340cd8db053a608d731e186f02fb | [
"MIT"
] | 24 | 2018-07-31T05:52:12.000Z | 2022-02-08T00:39:41.000Z | from django.urls import path
from . import views
urlpatterns = [
path('search-card', views.patients_search_card),
path('search-individual', views.patients_search_individual),
path('search-l2-card', views.patients_search_l2_card),
path('create-l2-individual-from-card', views.create_l2_individual_from_card),
path('card/<int:card_id>', views.patients_get_card_data),
path('card/save', views.patients_card_save),
path('card/archive', views.patients_card_archive),
path('card/unarchive', views.patients_card_unarchive),
path('individuals/search', views.individual_search),
path('individuals/sex', views.get_sex_by_param),
path('individuals/edit-doc', views.edit_doc),
path('individuals/edit-agent', views.edit_agent),
path('individuals/update-cdu', views.update_cdu),
path('individuals/update-wia', views.update_wia),
path('individuals/sync-rmis', views.sync_rmis),
path('individuals/sync-tfoms', views.sync_tfoms),
path('individuals/load-anamnesis', views.load_anamnesis),
path('individuals/load-dreg', views.load_dreg),
path('individuals/load-screening', views.load_screening),
path('individuals/load-vaccine', views.load_vaccine),
path('individuals/load-ambulatory-data', views.load_ambulatory_data),
path('individuals/load-benefit', views.load_benefit),
path('individuals/load-dreg-detail', views.load_dreg_detail),
path('individuals/load-vaccine-detail', views.load_vaccine_detail),
path('individuals/load-ambulatorydata-detail', views.load_ambulatory_data_detail),
path('individuals/load-ambulatory-history', views.load_ambulatory_history),
path('individuals/load-benefit-detail', views.load_benefit_detail),
path('individuals/save-dreg', views.save_dreg),
path('individuals/save-plan-dreg', views.update_dispensary_reg_plans),
path('individuals/save-vaccine', views.save_vaccine),
path('individuals/save-ambulatory-data', views.save_ambulatory_data),
path('individuals/save-benefit', views.save_benefit),
path('individuals/save-anamnesis', views.save_anamnesis),
path('is-card', views.is_l2_card),
path('save-screening-plan', views.update_screening_reg_plan),
]
| 53.829268 | 86 | 0.752152 | 284 | 2,207 | 5.623239 | 0.169014 | 0.234815 | 0.13087 | 0.046963 | 0.032561 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00252 | 0.101042 | 2,207 | 40 | 87 | 55.175 | 0.802419 | 0 | 0 | 0 | 0 | 0 | 0.354327 | 0.275487 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.051282 | 0 | 0.051282 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3b45e5164e572fbde2110d62cb448013353f1cd | 1,593 | py | Python | gandyndns.py | nim65s/scripts | 2c61bd77bfca6ae6437654e43ad2bc95d611360a | [
"BSD-2-Clause"
] | 1 | 2020-12-17T09:41:42.000Z | 2020-12-17T09:41:42.000Z | gandyndns.py | nim65s/scripts | 2c61bd77bfca6ae6437654e43ad2bc95d611360a | [
"BSD-2-Clause"
] | null | null | null | gandyndns.py | nim65s/scripts | 2c61bd77bfca6ae6437654e43ad2bc95d611360a | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
'''update gandi DNS domain entry, with LiveDNS v5
Cf. https://doc.livedns.gandi.net/#work-with-domains
'''
import argparse
import ipaddress
import json
import os
from subprocess import check_output
import requests
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('-v', '--verbose', action='store_true')
parser.add_argument('domain')
parser.add_argument('name')
parser.add_argument('--ip', help="defaults to ifconfig.me's return")
parser.add_argument('--api_key', help="defaults to GANDI_API_KEY env var, or the return of 'pass api/gandi'")
args = parser.parse_args()
if args.ip is None:
args.ip = requests.get('http://ifconfig.me', headers={'User-Agent': 'curl/7.61.1'}).content.decode().strip()
ip = ipaddress.ip_address(args.ip)
if args.api_key is None:
args.api_key = os.environ.get('GANDI_API_KEY', check_output(['pass', 'api/gandi'], text=True).strip())
key = {'X-Api-Key': args.api_key}
r = requests.get(f'https://dns.api.gandi.net/api/v5/domains/{args.domain}/records/{args.name}', headers=key)
r.raise_for_status()
if r.json()[0]['rrset_values'][0] == args.ip:
if args.verbose:
print('ok')
else:
type_ = 'AAAA' if isinstance(ip, ipaddress.IPv6Address) else 'A'
url = f'https://dns.api.gandi.net/api/v5/domains/{args.domain}/records/{args.name}/{type_}'
data = {'rrset_values': [args.ip]}
headers = {'Content-Type': 'application/json', **key}
r = requests.put(url, data=json.dumps(data), headers=headers)
if args.verbose:
print(r.json())
else:
r.raise_for_status()
| 32.510204 | 112 | 0.696171 | 245 | 1,593 | 4.404082 | 0.395918 | 0.038925 | 0.078777 | 0.022243 | 0.105653 | 0.105653 | 0.105653 | 0.105653 | 0.105653 | 0.105653 | 0 | 0.007168 | 0.124294 | 1,593 | 48 | 113 | 33.1875 | 0.766308 | 0.07533 | 0 | 0.176471 | 0 | 0.058824 | 0.295362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.058824 | 0.176471 | 0 | 0.176471 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d3b76c1c0fc989bb41ad8f58fabce2395587d211 | 1,615 | py | Python | src/masonite/oauth/drivers/FacebookDriver.py | girardinsamuel/masonite-socialite | 04110601b299d8505ec453b7743124cb88047d9d | [
"MIT"
] | 1 | 2021-05-07T16:37:03.000Z | 2021-05-07T16:37:03.000Z | src/masonite/oauth/drivers/FacebookDriver.py | girardinsamuel/masonite-socialite | 04110601b299d8505ec453b7743124cb88047d9d | [
"MIT"
] | 11 | 2021-05-17T06:45:48.000Z | 2021-10-03T15:16:23.000Z | src/masonite/oauth/drivers/FacebookDriver.py | girardinsamuel/masonite-socialite | 04110601b299d8505ec453b7743124cb88047d9d | [
"MIT"
] | null | null | null | from .BaseDriver import BaseDriver
from ..OAuthUser import OAuthUser
class FacebookDriver(BaseDriver):
def get_default_scopes(self):
return ["email"]
def get_auth_url(self):
return "https://www.facebook.com/dialog/oauth"
def get_token_url(self):
return "https://graph.facebook.com/oauth/access_token"
def get_user_url(self):
return "https://graph.facebook.com/me?"
def get_request_options(self, token):
return {
"headers": {"Authorization": f"Bearer {token}", "Accept": "application/json"},
"query": {"prettyPrint": "false"},
}
def user(self):
user_data, token = super().user()
user = (
OAuthUser()
.set_token(token)
.build(
{
"id": user_data["sub"],
"nickname": user_data["nickname"],
"name": user_data["name"],
"email": user_data["email"],
"avatar": user_data["picture"],
}
)
)
return user
def user_from_token(self, token):
user_data = super().user_from_token(token)
user = (
OAuthUser()
.set_token(token)
.build(
{
"id": user_data["sub"],
"nickname": user_data["nickname"],
"name": user_data["name"],
"email": user_data["email"],
"avatar": user_data["picture"],
}
)
)
return user
| 28.333333 | 90 | 0.479257 | 149 | 1,615 | 5 | 0.322148 | 0.128859 | 0.052349 | 0.072483 | 0.448322 | 0.448322 | 0.448322 | 0.357047 | 0.357047 | 0.357047 | 0 | 0 | 0.389474 | 1,615 | 56 | 91 | 28.839286 | 0.755578 | 0 | 0 | 0.416667 | 0 | 0 | 0.18452 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.145833 | false | 0 | 0.041667 | 0.104167 | 0.354167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
d3ba25fae7aacb5e43b639c41eadbd3d14fb7a48 | 303 | py | Python | ms_deisotope/qc/__init__.py | mstim/ms_deisotope | 29f4f466e92e66b65a2d21eca714aa627caa21db | [
"Apache-2.0"
] | 18 | 2017-09-01T12:26:12.000Z | 2022-02-23T02:31:29.000Z | ms_deisotope/qc/__init__.py | mstim/ms_deisotope | 29f4f466e92e66b65a2d21eca714aa627caa21db | [
"Apache-2.0"
] | 19 | 2017-03-12T20:40:36.000Z | 2022-03-31T22:50:47.000Z | ms_deisotope/qc/__init__.py | mstim/ms_deisotope | 29f4f466e92e66b65a2d21eca714aa627caa21db | [
"Apache-2.0"
] | 14 | 2016-05-06T02:25:30.000Z | 2022-03-31T14:40:06.000Z | """A collection of methods for determining whether a given spectrum is
of high quality (likely to produce a high quality interpretation)
"""
from .heuristic import xrea
from .isolation import CoIsolation, PrecursorPurityEstimator
__all__ = [
"xrea",
"CoIsolation", "PrecursorPurityEstimator"
]
| 27.545455 | 70 | 0.772277 | 34 | 303 | 6.764706 | 0.705882 | 0.095652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155116 | 303 | 10 | 71 | 30.3 | 0.898438 | 0.438944 | 0 | 0 | 0 | 0 | 0.239264 | 0.147239 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d3bbc84b4a938b83b84adeff2d313509849c11f6 | 3,855 | py | Python | rpi_animations/message.py | Anski-D/rpi_animations_old | b019a301ba777d76e3cedc6b86359570e2c2f18b | [
"MIT"
] | null | null | null | rpi_animations/message.py | Anski-D/rpi_animations_old | b019a301ba777d76e3cedc6b86359570e2c2f18b | [
"MIT"
] | null | null | null | rpi_animations/message.py | Anski-D/rpi_animations_old | b019a301ba777d76e3cedc6b86359570e2c2f18b | [
"MIT"
] | null | null | null | from .item import Item
class Message(Item):
"""
Message feature object in the rpi_animations package.
"""
def __init__(self, group, screen_animator) -> None:
"""
Initialise Message object with sprite group and screen object. Run initial setup methods.
Args:
group (Group): Pygame sprite group to which the object will be added.
screen_animator (ScreenAnimator): Main package object controlling the animation.
"""
super().__init__(group, screen_animator)
# Store x position as float
self._x = float(self._rect.x)
# Set the flag that the message hasn't fully emerged
self._has_fully_emerged = False
def _setup_item(self) -> None:
"""
Run methods to setup the object.
Returns:
None
"""
self._set_text()
# Run parent method
super()._setup_item()
def _set_text(self) -> None:
"""
Set font, message text, and outline of text.
Returns:
None
"""
# Set font
self._font = self._settings.font
# Set the message text
self._text = self._settings.text
# Set the outline text
self._outline_text = self._font.render(
self._text,
self._settings.settings['text_aa'],
self._settings.outline_colour
)
def _set_item_content(self) -> None:
"""
Render the message text.
Returns:
None
"""
self.content = self._font.render(
self._text,
self._settings.settings['text_aa'],
self._settings.text_colour
)
def _place_item(self) -> None:
"""
Set the initial object position on the screen.
Returns:
None
"""
self._rect.midleft = self._screen_rect.midright
def _draw_outline(self) -> None:
"""
Draw the message text outline.
Returns:
None
"""
outline_width = self._settings.settings['outline_width']
self._screen.blit(self._outline_text, (self._rect.x - outline_width, self._rect.y - outline_width))
self._screen.blit(self._outline_text, (self._rect.x - outline_width, self._rect.y + outline_width))
self._screen.blit(self._outline_text, (self._rect.x + outline_width, self._rect.y - outline_width))
self._screen.blit(self._outline_text, (self._rect.x + outline_width, self._rect.y + outline_width))
def blit(self) -> None:
"""
Add the object to the pygame screen.
Returns:
None
"""
# Draw outline text
self._draw_outline()
# Draw the message
self._set_item_content()
# Run parent method
super().blit()
def update(self) -> None:
"""
Move the object position to the left during a frame update.
Returns:
None
"""
self._x -= self._settings.settings['text_speed'] / self._settings.settings['fps']
self._rect.x = self._x
def is_on_screen(self) -> bool:
"""
Determine whether the object is still on the screen.
Returns:
bool: True if still on screen, False otherwise.
"""
if self._rect.right <= self._screen_rect.left:
return False
return True
def has_just_emerged(self) -> bool:
"""
Determine whether the right side of the message is now visible on the screen.
Returns:
bool: True if right edge is now on screen, False otherwise.
"""
if not self._has_fully_emerged and self._rect.right <= self._screen_rect.right:
self._has_fully_emerged = True
return True
return False
| 27.147887 | 107 | 0.575357 | 450 | 3,855 | 4.682222 | 0.22 | 0.049359 | 0.068344 | 0.045088 | 0.298054 | 0.249644 | 0.224015 | 0.197437 | 0.197437 | 0.197437 | 0 | 0 | 0.332815 | 3,855 | 141 | 108 | 27.340426 | 0.819207 | 0.307912 | 0 | 0.170213 | 0 | 0 | 0.017809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.212766 | false | 0 | 0.021277 | 0 | 0.340426 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3bea2de7d4525c6881fb3abdb31815d971e7131 | 506 | py | Python | tests/conftest.py | Beanxx/alonememo | aa90bcca6a5dcaa41305b162ac5d6dbe8d0d2562 | [
"MIT"
] | null | null | null | tests/conftest.py | Beanxx/alonememo | aa90bcca6a5dcaa41305b162ac5d6dbe8d0d2562 | [
"MIT"
] | null | null | null | tests/conftest.py | Beanxx/alonememo | aa90bcca6a5dcaa41305b162ac5d6dbe8d0d2562 | [
"MIT"
] | null | null | null | import pytest
from pymongo import MongoClient
import app as flask_app
test_database_name = 'spartatest'
client = MongoClient('localhost', 27017)
db = client.get_database(test_database_name)
@pytest.fixture
def app():
test_app = flask_app.create_app(test_database_name)
# 제네레이터 문법(yield 구문까지만 실행하고 대기,
# 이후 다시 호출할 때 yield 구문 다음이 진행됨)
# app이 종료되는 것이 아니라 stop됨.
yield test_app
# 여기서부터는 모든 테스트가 완료되고 나서 시행됨
client.drop_database(test_database_name)
print('테스트 DB 제거 완료')
| 20.24 | 55 | 0.727273 | 78 | 506 | 4.525641 | 0.628205 | 0.135977 | 0.181303 | 0.107649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.199605 | 506 | 24 | 56 | 21.083333 | 0.859259 | 0.217391 | 0 | 0 | 0 | 0 | 0.079487 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.333333 | 0.083333 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3c2f371f8e9bd53dfa26410d72fcf0c4b952e00 | 1,004 | py | Python | settings.py | embrace-inpe/cycle-slip-correction | c465dd4d45ea7df63a18749e26ba4bf0aa27eb59 | [
"MIT"
] | 6 | 2019-05-20T21:23:41.000Z | 2021-06-23T15:00:30.000Z | settings.py | embrace-inpe/cycle-slip-correction | c465dd4d45ea7df63a18749e26ba4bf0aa27eb59 | [
"MIT"
] | null | null | null | settings.py | embrace-inpe/cycle-slip-correction | c465dd4d45ea7df63a18749e26ba4bf0aa27eb59 | [
"MIT"
] | 5 | 2018-12-27T16:46:45.000Z | 2020-09-14T13:44:00.000Z | """
Commom settings to all applications
"""
A = 40.3
TECU = 1.0e16
C = 299792458
F1 = 1.57542e9
F2 = 1.22760e9
factor_1 = (F1 - F2) / (F1 + F2) / C
factor_2 = (F1 * F2) / (F2 - F1) / C
DIFF_TEC_MAX = 0.05
LIMIT_STD = 7.5
plot_it = True
REQUIRED_VERSION = 3.01
CONSTELLATIONS = ['G', 'R']
COLUMNS_IN_RINEX = {'3.03': {'G': {'L1': 'L1C', 'L2': 'L2W', 'C1': 'C1C', 'P1': 'C1W', 'P2': 'C2W'},
'R': {'L1': 'L1C', 'L2': 'L2C', 'C1': 'C1C', 'P1': 'C1P', 'P2': 'C2P'}
},
'3.02': {'G': {'L1': 'L1', 'L2': 'L2', 'C1': 'C1C', 'P1': 'C1W', 'P2': 'C2W'},
'R': {'L1': 'L1', 'L2': 'L2', 'C1': 'C1C', 'P1': 'C1P', 'P2': 'C2P'}
},
'3.01': {'G': {'L1': 'L1', 'L2': 'L2', 'C1': 'C1C', 'P1': 'C1W', 'P2': 'C2W'},
'R': {'L1': 'L1', 'L2': 'L2', 'C1': 'C1C', 'P1': 'C1P', 'P2': 'C2P'}
}
}
| 33.466667 | 100 | 0.351594 | 127 | 1,004 | 2.708661 | 0.433071 | 0.087209 | 0.122093 | 0.093023 | 0.380814 | 0.380814 | 0.380814 | 0.331395 | 0.27907 | 0.27907 | 0 | 0.18859 | 0.371514 | 1,004 | 29 | 101 | 34.62069 | 0.356577 | 0.034861 | 0 | 0.090909 | 0 | 0 | 0.168574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3c3d276986b71cc9d8aae788f2dcd9c3f2eb96a | 1,009 | py | Python | tests/api/test_libcoap_api.py | ggravlingen/ikeatradfri | 9eef5317ab770de874c407449489604b2fdf35f1 | [
"MIT"
] | 726 | 2017-04-12T22:55:39.000Z | 2020-09-02T20:47:13.000Z | tests/api/test_libcoap_api.py | ggravlingen/ikeatradfri | 9eef5317ab770de874c407449489604b2fdf35f1 | [
"MIT"
] | 248 | 2017-04-12T21:45:10.000Z | 2020-09-03T08:48:37.000Z | tests/api/test_libcoap_api.py | ggravlingen/ikeatradfri | 9eef5317ab770de874c407449489604b2fdf35f1 | [
"MIT"
] | 140 | 2017-04-12T20:02:57.000Z | 2020-09-02T08:54:23.000Z | """Test API utilities."""
import json
from pytradfri.api.libcoap_api import APIFactory
from pytradfri.gateway import Gateway
def test_constructor_timeout_passed_to_subprocess(monkeypatch):
"""Test that original timeout is passed to subprocess."""
capture = {}
def capture_args(*args, **kwargs):
capture.update(kwargs)
return json.dumps([])
monkeypatch.setattr("subprocess.check_output", capture_args)
api = APIFactory("anything", timeout=20, psk="abc")
api.request(Gateway().get_devices())
assert capture["timeout"] == 20
def test_custom_timeout_passed_to_subprocess(monkeypatch):
"""Test that custom timeout is passed to subprocess."""
capture = {}
def capture_args(*args, **kwargs):
capture.update(kwargs)
return json.dumps([])
monkeypatch.setattr("subprocess.check_output", capture_args)
api = APIFactory("anything", psk="abc")
api.request(Gateway().get_devices(), timeout=1)
assert capture["timeout"] == 1
| 28.027778 | 64 | 0.698712 | 119 | 1,009 | 5.764706 | 0.327731 | 0.046647 | 0.104956 | 0.072886 | 0.699708 | 0.699708 | 0.699708 | 0.475219 | 0.475219 | 0.475219 | 0 | 0.007186 | 0.172448 | 1,009 | 35 | 65 | 28.828571 | 0.814371 | 0.119921 | 0 | 0.47619 | 0 | 0 | 0.094037 | 0.052752 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.190476 | false | 0.095238 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d3c5d75262328f54482b5a9f8b47cfdc49c36760 | 445 | py | Python | setup.py | korymath/JANN | 98468a2e90a6b55ccb15e905ee10a1d1130cf5d8 | [
"MIT"
] | 39 | 2018-09-25T21:40:38.000Z | 2022-01-19T23:26:51.000Z | setup.py | korymath/JANN | 98468a2e90a6b55ccb15e905ee10a1d1130cf5d8 | [
"MIT"
] | 22 | 2018-09-25T21:36:46.000Z | 2021-09-07T16:03:41.000Z | setup.py | korymath/JANN | 98468a2e90a6b55ccb15e905ee10a1d1130cf5d8 | [
"MIT"
] | 9 | 2018-09-26T00:38:35.000Z | 2020-02-27T05:59:03.000Z | from setuptools import setup
from setuptools import find_packages
setup(
name="Jann",
version="4.0.0",
description="Jann is a Nearest Neighbour retrieval-based chatbot.",
author="Kory Mathewson",
author_email="korymath@gmail.com",
license="MIT",
url="https://github.com/korymath/jann",
packages=find_packages(),
setup_requires=[
"pytest-runner"
],
tests_require=[
"pytest"
],
)
| 20.227273 | 71 | 0.647191 | 51 | 445 | 5.54902 | 0.705882 | 0.09894 | 0.141343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008646 | 0.220225 | 445 | 21 | 72 | 21.190476 | 0.806916 | 0 | 0 | 0.111111 | 0 | 0 | 0.331081 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3cc0271bb0d934fe7034974b1385e41735a694e | 447 | py | Python | strings/#387/strings.py | sharmarkei/DSA-Practice | c98e9f5ae1824d86f02d1002d908dc24c8be8812 | [
"MIT"
] | null | null | null | strings/#387/strings.py | sharmarkei/DSA-Practice | c98e9f5ae1824d86f02d1002d908dc24c8be8812 | [
"MIT"
] | null | null | null | strings/#387/strings.py | sharmarkei/DSA-Practice | c98e9f5ae1824d86f02d1002d908dc24c8be8812 | [
"MIT"
] | null | null | null | class Solution(object):
def firstUniqChar(self, s):
"""
:type s: str
:rtype: int
"""
dict_1 = {}
for i in s:
if i not in dict_1:
dict_1[i] = 1
else:
dict_1[i] += 1
print(dict_1)
for idx, val in enumerate(s):
if dict_1[val] == 1:
return idx
return -1
| 20.318182 | 37 | 0.364653 | 51 | 447 | 3.078431 | 0.470588 | 0.191083 | 0.101911 | 0.089172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0.541387 | 447 | 22 | 38 | 20.318182 | 0.717073 | 0.053691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.307692 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3d01a6d5b6d4e91e1847f49e77097d90f67ce9c | 906 | py | Python | pypy/module/cpyext/test/test_iterator.py | wdv4758h/mu-client-pypy | d2fcc01f0b4fe3ffa232762124e3e6d38ed3a0cf | [
"Apache-2.0",
"OpenSSL"
] | 34 | 2015-07-09T04:53:27.000Z | 2021-07-19T05:22:27.000Z | pypy/module/cpyext/test/test_iterator.py | wdv4758h/mu-client-pypy | d2fcc01f0b4fe3ffa232762124e3e6d38ed3a0cf | [
"Apache-2.0",
"OpenSSL"
] | 6 | 2015-05-30T17:20:45.000Z | 2017-06-12T14:29:23.000Z | pypy/module/cpyext/test/test_iterator.py | wdv4758h/mu-client-pypy | d2fcc01f0b4fe3ffa232762124e3e6d38ed3a0cf | [
"Apache-2.0",
"OpenSSL"
] | 11 | 2015-09-07T14:26:08.000Z | 2020-04-10T07:20:41.000Z | from pypy.module.cpyext.test.test_api import BaseApiTest
class TestIterator(BaseApiTest):
def test_check_iter(self, space, api):
assert api.PyIter_Check(space.iter(space.wrap("a")))
assert api.PyIter_Check(space.iter(space.newlist([])))
assert not api.PyIter_Check(space.w_type)
assert not api.PyIter_Check(space.wrap(2))
def test_getIter(self, space, api):
w_iter = api.PyObject_GetIter(space.wrap([1, 2, 3]))
assert space.unwrap(api.PyIter_Next(w_iter)) == 1
assert space.unwrap(api.PyIter_Next(w_iter)) == 2
assert space.unwrap(api.PyIter_Next(w_iter)) == 3
assert api.PyIter_Next(w_iter) is None
assert not api.PyErr_Occurred()
def test_iternext_error(self,space, api):
assert api.PyIter_Next(space.w_None) is None
assert api.PyErr_Occurred() is space.w_TypeError
api.PyErr_Clear()
| 39.391304 | 62 | 0.684327 | 135 | 906 | 4.392593 | 0.281481 | 0.136594 | 0.109612 | 0.128162 | 0.482293 | 0.451939 | 0.291737 | 0.177066 | 0 | 0 | 0 | 0.009642 | 0.198676 | 906 | 22 | 63 | 41.181818 | 0.807163 | 0 | 0 | 0 | 0 | 0 | 0.001104 | 0 | 0 | 0 | 0 | 0 | 0.611111 | 1 | 0.166667 | false | 0 | 0.055556 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3d9152a0002e3e05bd42184c7b5ca8570672123 | 1,046 | py | Python | setup.py | digicert/digicert_express | 292fb4e7f8a39e53c384a79c50a9488c51258f97 | [
"MIT"
] | 2 | 2017-03-03T20:37:29.000Z | 2018-06-01T22:22:15.000Z | setup.py | digicert/digicert_express | 292fb4e7f8a39e53c384a79c50a9488c51258f97 | [
"MIT"
] | null | null | null | setup.py | digicert/digicert_express | 292fb4e7f8a39e53c384a79c50a9488c51258f97 | [
"MIT"
] | 2 | 2018-01-26T07:11:42.000Z | 2019-03-06T23:30:39.000Z | from setuptools import setup, find_packages
def readme():
with open('README.rst') as f:
return f.read()
setup(
name='digicert-express',
version='1.1dev2',
description='Express Install for DigiCert, Inc.',
long_description=readme(),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Information Technology',
'License :: OSI Approved :: MIT License',
'Topic :: Security',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
],
url='https://github.com/digicert/digicert_express',
author='DigiCert, Inc.',
author_email='support@digicert.com',
license='MIT',
zip_safe=False,
packages=find_packages(exclude=['tests.*', '*.tests.*', '*.tests', 'tests', 'scripts']),
include_package_data=True,
install_requires=[
'python-augeas',
'requests>=2.8.1',
'ndg-httpsclient',
'pyasn1',
'pyOpenSSL' # prefer OS install but we can try here, too
],
)
| 29.885714 | 92 | 0.605163 | 113 | 1,046 | 5.522124 | 0.707965 | 0.048077 | 0.080128 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015075 | 0.239006 | 1,046 | 34 | 93 | 30.764706 | 0.768844 | 0.040153 | 0 | 0.0625 | 0 | 0 | 0.443114 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | true | 0 | 0.03125 | 0 | 0.09375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3da88558c778364e49a959d5f0d8f942db1cc43 | 3,758 | py | Python | config.py | somritabanerjee/speedplusbaseline | 5913c611d8c182ad8070abcf5f1baffc554dfd90 | [
"MIT"
] | null | null | null | config.py | somritabanerjee/speedplusbaseline | 5913c611d8c182ad8070abcf5f1baffc554dfd90 | [
"MIT"
] | null | null | null | config.py | somritabanerjee/speedplusbaseline | 5913c611d8c182ad8070abcf5f1baffc554dfd90 | [
"MIT"
] | null | null | null | import argparse
PROJROOTDIR = {'mac': '/Users/taehapark/SLAB/speedplusbaseline',
'linux': '/home/somrita/Documents/Satellite_Pose_Estimation/speedplusbaseline'}
DATAROOTDIR = {'mac': '/Users/taehapark/SLAB/speedplus/data/datasets',
'linux': '/home/somrita/Documents/Satellite_Pose_Estimation/dataset'}
parser = argparse.ArgumentParser('Configurations for SPEED+ Baseline Study')
# ------------------------------------------------------------------------------------------
# Basic directories and names
parser.add_argument('--seed', type=int, default=2021)
parser.add_argument('--projroot', type=str, default=PROJROOTDIR['linux'])
parser.add_argument('--dataroot', type=str, default=DATAROOTDIR['linux'])
parser.add_argument('--dataname', type=str, default='speedplus')
parser.add_argument('--savedir', type=str, default='checkpoints/synthetic/krn')
parser.add_argument('--resultfn', type=str, default='')
parser.add_argument('--logdir', type=str, default='log/synthetic/krn')
parser.add_argument('--pretrained', type=str, default='')
# ------------------------------------------------------------------------------------------
# Model config.
parser.add_argument('--model_name', type=str, default='krn')
parser.add_argument('--input_shape', nargs='+', type=int, default=(224, 224))
parser.add_argument('--num_keypoints', type=int, default=11) # KRN-specific
parser.add_argument('--num_classes', type=int, default=5000) # SPN-specific
parser.add_argument('--num_neighbors', type=int, default=5) # SPN-specific
parser.add_argument('--keypts_3d_model', type=str, default='src/utils/tangoPoints.mat')
parser.add_argument('--attitude_class', type=str, default='src/utils/attitudeClasses.mat')
# ------------------------------------------------------------------------------------------
# Training config.
parser.add_argument('--start_over', dest='auto_resume', action='store_false', default=True)
parser.add_argument('--randomize_texture', dest='randomize_texture', action='store_true', default=False)
parser.add_argument('--perform_dann', dest='dann', action='store_true', default=False)
parser.add_argument('--texture_alpha', type=float, default=0.5)
parser.add_argument('--texture_ratio', type=float, default=0.5)
parser.add_argument('--use_fp16', dest='fp16', action='store_true', default=False)
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--max_epochs', type=int, default=75)
parser.add_argument('--num_workers', type=int, default=8)
parser.add_argument('--test_epoch', type=int, default=-1)
parser.add_argument('--optimizer', type=str, default='rmsprop')
parser.add_argument('--lr', type=float, default=0.001)
parser.add_argument('--momentum', type=float, default=0.9)
parser.add_argument('--weight_decay', type=float, default=5e-5)
parser.add_argument('--lr_decay_alpha', type=float, default=0.96)
parser.add_argument('--lr_decay_step', type=int, default=1)
# ------------------------------------------------------------------------------------------
# Dataset-related inputs
parser.add_argument('--train_domain', type=str, default='synthetic')
parser.add_argument('--test_domain', type=str, default='lightbox')
parser.add_argument('--train_csv', type=str, default='train.csv')
parser.add_argument('--test_csv', type=str, default='lightbox.csv')
# ------------------------------------------------------------------------------------------
# Other miscellaneous settings
parser.add_argument('--gpu_id', type=int, default=0)
parser.add_argument('--no_cuda', dest='use_cuda', action='store_false', default=True)
# End
cfg = parser.parse_args() | 58.71875 | 104 | 0.631453 | 428 | 3,758 | 5.359813 | 0.313084 | 0.145161 | 0.274194 | 0.037053 | 0.268527 | 0.129904 | 0.129904 | 0.088056 | 0 | 0 | 0 | 0.013235 | 0.095263 | 3,758 | 64 | 105 | 58.71875 | 0.661471 | 0.161788 | 0 | 0 | 0 | 0 | 0.308992 | 0.091518 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022727 | 0 | 0.022727 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3ebad071ed8577b67556835d306ad97a7a130ad | 217 | py | Python | algoritmos/ajuste-curvas/caso-linear/Teste.py | mauriciomoccelin/metodos-numericos | 67bdb305d4db8a59943a17128ba2c06fefcc4a36 | [
"MIT"
] | 3 | 2019-07-03T18:05:44.000Z | 2020-02-04T16:37:21.000Z | algoritmos/ajuste-curvas/caso-linear/Teste.py | mauriciomoccelin/metodos-numericos | 67bdb305d4db8a59943a17128ba2c06fefcc4a36 | [
"MIT"
] | null | null | null | algoritmos/ajuste-curvas/caso-linear/Teste.py | mauriciomoccelin/metodos-numericos | 67bdb305d4db8a59943a17128ba2c06fefcc4a36 | [
"MIT"
] | null | null | null | from RegressaoLinear import RegressaoLinear
planoCartesiano = {
0.5: 4.4,
2.8: 1.8,
4.2: 1.0,
6.7: 0.4,
8.3: 0.2
}
regressaoLinear = RegressaoLinear(planoCartesiano)
print(regressaoLinear.gerar_equacao())
| 16.692308 | 50 | 0.705069 | 32 | 217 | 4.75 | 0.5 | 0.394737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10929 | 0.156682 | 217 | 12 | 51 | 18.083333 | 0.721311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.1 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d3f026b7191d98da19a4514bcacdc0c4c65fbbab | 433 | py | Python | UDEMY-Learn Python Programming Masterclass/Section 3-Stepping into the World of Python/exercise4.py | Sanjay9921/Python | 05ac161dd46f9b4731a5c14ff5ef52adb705e8e6 | [
"MIT"
] | null | null | null | UDEMY-Learn Python Programming Masterclass/Section 3-Stepping into the World of Python/exercise4.py | Sanjay9921/Python | 05ac161dd46f9b4731a5c14ff5ef52adb705e8e6 | [
"MIT"
] | null | null | null | UDEMY-Learn Python Programming Masterclass/Section 3-Stepping into the World of Python/exercise4.py | Sanjay9921/Python | 05ac161dd46f9b4731a5c14ff5ef52adb705e8e6 | [
"MIT"
] | null | null | null | #Integer division
#You have a shop selling buns for $2.40 each. A customer comes in with $15, and would like to buy as many buns as possible.
#Complete the code to calculate how many buns the customer can afford.
#Note: Your customer won't be happy if you try to sell them part of a bun.
#Print only the result, any other text in the output will cause the checker to fail.
bun_price = 2.40
money = 15
print( money // bun_price ) | 36.083333 | 124 | 0.745958 | 83 | 433 | 3.86747 | 0.698795 | 0.018692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028818 | 0.198614 | 433 | 12 | 125 | 36.083333 | 0.896254 | 0.840647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
310c8eff631db50cd8a05c87d1793446b7ad450c | 4,065 | py | Python | examples/fixed_play.py | wwxFromTju/malib | 7cd2a4af55cf1f56da8854e26ea7a4f3782ceea2 | [
"MIT"
] | 6 | 2021-05-19T10:25:36.000Z | 2021-12-27T03:30:33.000Z | examples/fixed_play.py | wwxFromTju/malib | 7cd2a4af55cf1f56da8854e26ea7a4f3782ceea2 | [
"MIT"
] | 1 | 2021-05-29T04:51:37.000Z | 2021-05-30T06:18:10.000Z | examples/fixed_play.py | ying-wen/malib_deprecated | 875338b81c4d87064ad31201f461ef742db05f25 | [
"MIT"
] | 1 | 2021-05-31T16:16:12.000Z | 2021-05-31T16:16:12.000Z | # Created by yingwen at 2019-03-16
from multiprocessing import Process
from malib.agents.agent_factory import *
from malib.environments import DifferentialGame
from malib.logger.utils import set_logger
from malib.samplers.sampler import MASampler
from malib.trainers import MATrainer
from malib.utils.random import set_seed
def get_agent_by_type(type_name, i, env, hidden_layer_sizes, max_replay_buffer_size):
if type_name == "SAC":
return get_sac_agent(
env,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
elif type_name == "ROMMEO":
return get_rommeo_agent(
env,
agent_id=i,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
elif type_name == "ROMMEO-UNI":
return get_rommeo_agent(
env,
agent_id=i,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
uniform=True,
)
elif type_name == "DDPG-OM":
return get_ddpgom_agent(
env,
agent_id=i,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
elif type_name == "DDPG-TOM":
return get_ddpgtom_agent(
env,
agent_id=i,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
elif type_name == "DDPG":
return get_ddpg_agent(
env,
agent_id=i,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
elif type_name == "MADDPG":
return get_maddpg_agent(
env,
agent_id=i,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
elif type_name == "MFAC":
return get_maddpg_agent(
env,
agent_id=i,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
def train_fixed(seed, agent_setting, game_name="ma_softq"):
set_seed(seed)
suffix = f"fixed_play1/{game_name}/{agent_setting}/{seed}"
set_logger(suffix)
batch_size = 512
training_steps = 2000
exploration_steps = 100
max_replay_buffer_size = 1e5
hidden_layer_sizes = (128, 128)
max_path_length = 1
agent_num = 2
env = DifferentialGame(game_name, agent_num)
agents = []
agent_types = agent_setting.split("_")
assert len(agent_types) == agent_num
for i, agent_type in enumerate(agent_types):
agents.append(
get_agent_by_type(
agent_type,
i,
env,
hidden_layer_sizes=hidden_layer_sizes,
max_replay_buffer_size=max_replay_buffer_size,
)
)
sampler = MASampler(
agent_num, batch_size=batch_size, max_path_length=max_path_length
)
sampler.initialize(env, agents)
trainer = MATrainer(
env=env,
agents=agents,
sampler=sampler,
steps=training_steps,
exploration_steps=exploration_steps,
training_interval=1,
extra_experiences=["annealing", "recent_experiences"],
batch_size=batch_size,
)
trainer.run()
def main():
settings = [
"ROMMEO_ROMMEO",
]
game = "ma_softq"
for setting in settings:
processes = []
for e in range(1):
seed = 1 + int(23122134 / (e + 1))
def train_func():
train_fixed(seed, setting, game)
# # # Awkward hacky process runs, because Tensorflow does not like
p = Process(target=train_func, args=tuple())
p.start()
processes.append(p)
for p in processes:
p.join()
if __name__ == "__main__":
main()
| 28.229167 | 85 | 0.613284 | 484 | 4,065 | 4.75 | 0.241736 | 0.095694 | 0.139191 | 0.165289 | 0.408873 | 0.406699 | 0.406699 | 0.391475 | 0.391475 | 0.391475 | 0 | 0.014648 | 0.311439 | 4,065 | 143 | 86 | 28.426573 | 0.806717 | 0.023862 | 0 | 0.311475 | 0 | 0 | 0.040121 | 0.011607 | 0 | 0 | 0 | 0 | 0.008197 | 1 | 0.032787 | false | 0 | 0.057377 | 0 | 0.155738 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
310e8a0bf4712762b03a5c5b70b449b48e5c9b02 | 776 | py | Python | hypha/apply/projects/templatetags/payment_request_tools.py | maxpearl/hypha | e181ebadfb744aab34617bb766e746368d6f2de0 | [
"BSD-3-Clause"
] | 16 | 2020-01-24T11:52:46.000Z | 2021-02-02T22:21:04.000Z | hypha/apply/projects/templatetags/payment_request_tools.py | maxpearl/hypha | e181ebadfb744aab34617bb766e746368d6f2de0 | [
"BSD-3-Clause"
] | 538 | 2020-01-24T08:27:13.000Z | 2021-04-05T07:15:01.000Z | hypha/apply/projects/templatetags/payment_request_tools.py | maxpearl/hypha | e181ebadfb744aab34617bb766e746368d6f2de0 | [
"BSD-3-Clause"
] | 17 | 2020-02-07T14:55:54.000Z | 2021-04-04T19:32:38.000Z | import decimal
from django import template
register = template.Library()
@register.simple_tag
def can_change_status(payment_request, user):
return payment_request.can_user_change_status(user)
@register.simple_tag
def can_delete(payment_request, user):
return payment_request.can_user_delete(user)
@register.simple_tag
def can_edit(payment_request, user):
return payment_request.can_user_edit(user)
@register.simple_tag
def percentage(value, total):
if not total:
return decimal.Decimal(0)
unrounded_total = (value / total) * 100
# round using Decimal since we're dealing with currency
rounded_total = unrounded_total.quantize(
decimal.Decimal('0.0'),
rounding=decimal.ROUND_DOWN,
)
return rounded_total
| 20.972973 | 59 | 0.748711 | 103 | 776 | 5.38835 | 0.378641 | 0.151351 | 0.122523 | 0.144144 | 0.425225 | 0.340541 | 0.243243 | 0.243243 | 0 | 0 | 0 | 0.009331 | 0.171392 | 776 | 36 | 60 | 21.555556 | 0.85381 | 0.068299 | 0 | 0.181818 | 0 | 0 | 0.004161 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0.136364 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
311249ddd416775b05d1978d206331804a252949 | 3,016 | py | Python | arguments.py | nudles/a2c | 6225845ab450b5ea03b6a066455b0446d3f92ed0 | [
"MIT"
] | null | null | null | arguments.py | nudles/a2c | 6225845ab450b5ea03b6a066455b0446d3f92ed0 | [
"MIT"
] | 4 | 2021-03-19T03:19:18.000Z | 2022-01-13T01:35:04.000Z | arguments.py | nudles/a2c | 6225845ab450b5ea03b6a066455b0446d3f92ed0 | [
"MIT"
] | null | null | null | import argparse
import torch
def get_args():
parser = argparse.ArgumentParser(description='RL')
parser.add_argument('--algo', default='a2c',
help='algorithm to use: a2c | ppo ')
parser.add_argument('--lr', type=float, default=7e-5,
help='learning rate (default: 7e-4)')
parser.add_argument('--eps', type=float, default=1e-5,
help='RMSprop optimizer epsilon (default: 1e-5)')
parser.add_argument('--alpha', type=float, default=0.99,
help='RMSprop optimizer apha (default: 0.99)')
parser.add_argument('--gamma', type=float, default=0.99,
help='discount factor for rewards (default: 0.99)')
parser.add_argument('--max-grad-norm', type=float, default=0.5,
help='max norm off gradients (default: 0.5)')
parser.add_argument('--seed', type=int, default=1,
help='random seed (default: 1)')
parser.add_argument('--num-processes', type=int, default=1,
help='how many training CPU processes to use (default: 16)')
parser.add_argument('--num-steps', type=int, default=32,
help='number of forward steps in A2C (default: 5)')
parser.add_argument('--clip-param', type=float, default=0.2,
help='clip parameter (default: 0.2)')
parser.add_argument('--log-interval', type=int, default=50,
help='log interval, one log per n updates (default: 10)')
parser.add_argument('--num-frames', type=int, default=80000,
help='number of frames to train (default: 10e6)')
parser.add_argument('--cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--obs_size', type=int, default=200,
help='observation vector size')
parser.add_argument('--cycle_len', type=int, default=500,
help='observation vector size')
parser.add_argument('--debug', action='store_true', default=False,
help='whether to record the logfile')
parser.add_argument('--num_models', type=int, default=3,
help='number of the model to use')
parser.add_argument('--beta', type=float, default=1,
help='balance the accuracy and latency when calculate the reward')
parser.add_argument('--tau', type=float, default=2,
help='max waiting time for enqueue')
parser.add_argument('--max_latency', type=float, default=16,
help='accept latency for each request')
parser.add_argument('--policy', choices=['async', 'sync'], default='async', help='policy')
args = parser.parse_args()
print("cuda: %s" % str(args.cuda))
if args.cuda:
assert torch.cuda.is_available(), 'CUDA is not available in this machine!'
return args
if __name__ == '__main__':
get_args() | 52 | 94 | 0.590186 | 369 | 3,016 | 4.718157 | 0.365854 | 0.108558 | 0.205055 | 0.039058 | 0.163125 | 0.141298 | 0.048248 | 0 | 0 | 0 | 0 | 0.028131 | 0.269231 | 3,016 | 58 | 95 | 52 | 0.761797 | 0 | 0 | 0.038462 | 0 | 0 | 0.326483 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 1 | 0.019231 | false | 0 | 0.038462 | 0 | 0.076923 | 0.019231 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3117a458a92a3f74cb40891238fd7657a360b0d8 | 207 | py | Python | tests/test_backup.py | KonstantinPankratov/Backupy | bfbbc97242bbf3c16da5454b5ff8741bfafa74c0 | [
"MIT"
] | 1 | 2020-02-12T12:58:28.000Z | 2020-02-12T12:58:28.000Z | tests/test_backup.py | KonstantinPankratov/Backupy | bfbbc97242bbf3c16da5454b5ff8741bfafa74c0 | [
"MIT"
] | null | null | null | tests/test_backup.py | KonstantinPankratov/Backupy | bfbbc97242bbf3c16da5454b5ff8741bfafa74c0 | [
"MIT"
] | null | null | null | import os
from Backupy import Backupy
def test_backup():
backup = Backupy()
backup.add_directory('./')
backup.start()
assert os.path.exists(backup.filename)
os.remove(backup.filename)
| 17.25 | 42 | 0.690821 | 26 | 207 | 5.423077 | 0.576923 | 0.198582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188406 | 207 | 11 | 43 | 18.818182 | 0.839286 | 0 | 0 | 0 | 0 | 0 | 0.009662 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
312a5215e0e355ad2b4d5e01dca1809280fd23f6 | 647 | py | Python | peframe/modules/apialert.py | ki1556ki/MJUOpenSource | 4087db825bbc7c460f8275428703e5c7066a84ae | [
"MIT"
] | null | null | null | peframe/modules/apialert.py | ki1556ki/MJUOpenSource | 4087db825bbc7c460f8275428703e5c7066a84ae | [
"MIT"
] | null | null | null | peframe/modules/apialert.py | ki1556ki/MJUOpenSource | 4087db825bbc7c460f8275428703e5c7066a84ae | [
"MIT"
] | 1 | 2020-07-14T03:39:06.000Z | 2020-07-14T03:39:06.000Z | # -*- coding: utf-8 -*-
# json 형식 사용을 위한 임폴트
import json
# get함수, 각각의 반복문을 통해 apialert_found안에 문자열 삽입후 리스트형식으로 정렬하여 리턴값 반환.
def get(pe, strings_match):
alerts = strings_match['apialert']
apialert_found = []
# pe에 DIRECTORY_ENTRY_IMPORT라는 변수가 있는지 확인하여 있으면 참 없으면 거짓.
if hasattr(pe, 'DIRECTORY_ENTRY_IMPORT'):
for lib in pe.DIRECTORY_ENTRY_IMPORT:
for imp in lib.imports:
for alert in alerts:
if alert: # remove 'null'
# imp.name의 문자열안에 alert의 문자열이 있을경우 apialert_found안의 맨뒤에 imp.name을 넣음
if str(imp.name).startswith(alert):
apialert_found.append(imp.name)
return sorted(set(apialert_found))
| 32.35 | 75 | 0.693972 | 98 | 647 | 4.44898 | 0.663265 | 0.08945 | 0.073395 | 0.100917 | 0.114679 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001965 | 0.213292 | 647 | 19 | 76 | 34.052632 | 0.854617 | 0.374034 | 0 | 0 | 0 | 0 | 0.079156 | 0.058047 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
312bfac4cf2875d133c13b3a00e0ae85f3c76c44 | 2,084 | py | Python | tests/conftest.py | Ninjagod1251/ape | 9b40ef15f25362ddb83cb6d571d60cab041fce4a | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | Ninjagod1251/ape | 9b40ef15f25362ddb83cb6d571d60cab041fce4a | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | Ninjagod1251/ape | 9b40ef15f25362ddb83cb6d571d60cab041fce4a | [
"Apache-2.0"
] | null | null | null | import shutil
from pathlib import Path
from tempfile import mkdtemp
import pytest
from click.testing import CliRunner
import ape
# NOTE: Ensure that we don't use local paths for these
ape.config.DATA_FOLDER = Path(mkdtemp()).resolve()
ape.config.PROJECT_FOLDER = Path(mkdtemp()).resolve()
@pytest.fixture(scope="session")
def config():
yield ape.config
@pytest.fixture(scope="session")
def data_folder(config):
yield config.DATA_FOLDER
@pytest.fixture(scope="session")
def plugin_manager():
yield ape.networks.plugin_manager
@pytest.fixture(scope="session")
def accounts():
yield ape.accounts
@pytest.fixture(scope="session")
def compilers():
yield ape.compilers
@pytest.fixture(scope="session")
def networks():
yield ape.networks
@pytest.fixture(scope="session")
def chain():
yield ape.chain
@pytest.fixture(scope="session")
def project_folder(config):
yield config.PROJECT_FOLDER
@pytest.fixture(scope="session")
def project(config):
yield ape.Project(config.PROJECT_FOLDER)
@pytest.fixture
def keyparams():
# NOTE: password is 'a'
return {
"address": "7e5f4552091a69125d5dfcb7b8c2659029395bdf",
"crypto": {
"cipher": "aes-128-ctr",
"cipherparams": {"iv": "7bc492fb5dca4fe80fd47645b2aad0ff"},
"ciphertext": "43beb65018a35c31494f642ec535315897634b021d7ec5bb8e0e2172387e2812",
"kdf": "scrypt",
"kdfparams": {
"dklen": 32,
"n": 262144,
"r": 1,
"p": 8,
"salt": "4b127cb5ddbc0b3bd0cc0d2ef9a89bec",
},
"mac": "6a1d520975a031e11fc16cff610f5ae7476bcae4f2f598bc59ccffeae33b1caa",
},
"id": "ee424db9-da20-405d-bd75-e609d3e2b4ad",
"version": 3,
}
@pytest.fixture
def temp_accounts_path(config):
path = Path(config.DATA_FOLDER) / "accounts"
path.mkdir(exist_ok=True, parents=True)
yield path
if path.exists():
shutil.rmtree(path)
@pytest.fixture
def runner(project):
yield CliRunner()
| 21.265306 | 93 | 0.65739 | 217 | 2,084 | 6.253456 | 0.410138 | 0.114959 | 0.119381 | 0.165807 | 0.238025 | 0.081061 | 0 | 0 | 0 | 0 | 0 | 0.101778 | 0.21737 | 2,084 | 97 | 94 | 21.484536 | 0.730227 | 0.035509 | 0 | 0.181818 | 0 | 0 | 0.216741 | 0.133533 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0.015152 | 0.287879 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31319d47ec8ad06ca44bd80af1576e1016d0086b | 837 | py | Python | leetcode/0015_3Sum/result.py | theck17/notes | f32f0f4b8f821b1ed38d173ef0913efddd094b91 | [
"MIT"
] | null | null | null | leetcode/0015_3Sum/result.py | theck17/notes | f32f0f4b8f821b1ed38d173ef0913efddd094b91 | [
"MIT"
] | null | null | null | leetcode/0015_3Sum/result.py | theck17/notes | f32f0f4b8f821b1ed38d173ef0913efddd094b91 | [
"MIT"
] | null | null | null | # !/usr/bin/env python3
# Author: C.K
# Email: theck17@163.com
# DateTime:2021-03-15 00:07:14
# Description:
class Solution:
def threeSum(self, nums: List[int]) -> List[List[int]]:
result = set()
for i in range(0, len(nums) - 1):
# Reduce the problem to two sum(0)
two_sum = -nums[i]
cache = set()
for num in nums[i + 1:]:
remaining = two_sum - num
if remaining in cache:
#sorting to create unique tuples
triplet = tuple(sorted([nums[i], remaining, num]))
# using tuple in a set will eliminate duplicates combinations
result.add(triplet)
else:
cache.add(num)
return result
if __name__ == "__main__":
pass
| 28.862069 | 81 | 0.51135 | 101 | 837 | 4.138614 | 0.643564 | 0.043062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046602 | 0.384707 | 837 | 28 | 82 | 29.892857 | 0.765049 | 0.265233 | 0 | 0 | 0 | 0 | 0.013201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.0625 | 0 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
313582b593f74c9cfe2f0d1c30d9930aec3b40a3 | 12,957 | py | Python | src/robustness.py | mathigatti/sota-music-tagging-models | b4331b07fe45902af96830f2821926ab86e17d42 | [
"MIT"
] | null | null | null | src/robustness.py | mathigatti/sota-music-tagging-models | b4331b07fe45902af96830f2821926ab86e17d42 | [
"MIT"
] | null | null | null | src/robustness.py | mathigatti/sota-music-tagging-models | b4331b07fe45902af96830f2821926ab86e17d42 | [
"MIT"
] | null | null | null | # coding: utf-8
'''
Deformation codes are borrowed from MUDA
McFee et al., A software framework for musical data augmentation, 2015
https://github.com/bmcfee/muda
'''
import os
import time
import subprocess
import tempfile
import numpy as np
import pandas as pd
import datetime
import tqdm
import csv
import fire
import argparse
import pickle
from sklearn import metrics
import pandas as pd
import librosa
import soundfile as psf
import torch
import torch.nn as nn
from torch.autograd import Variable
from solver import skip_files
from sklearn.preprocessing import LabelBinarizer
import model as Model
TAGS = ['genre---downtempo', 'genre---ambient', 'genre---rock', 'instrument---synthesizer', 'genre---atmospheric', 'genre---indie', 'instrument---electricpiano', 'genre---newage', 'instrument---strings', 'instrument---drums', 'instrument---drummachine', 'genre---techno', 'instrument---guitar', 'genre---alternative', 'genre---easylistening', 'genre---instrumentalpop', 'genre---chillout', 'genre---metal', 'mood/theme---happy', 'genre---lounge', 'genre---reggae', 'genre---popfolk', 'genre---orchestral', 'instrument---acousticguitar', 'genre---poprock', 'instrument---piano', 'genre---trance', 'genre---dance', 'instrument---electricguitar', 'genre---soundtrack', 'genre---house', 'genre---hiphop', 'genre---classical', 'mood/theme---energetic', 'genre---electronic', 'genre---world', 'genre---experimental', 'instrument---violin', 'genre---folk', 'mood/theme---emotional', 'instrument---voice', 'instrument---keyboard', 'genre---pop', 'instrument---bass', 'instrument---computer', 'mood/theme---film', 'genre---triphop', 'genre---jazz', 'genre---funk', 'mood/theme---relaxing']
def read_file(tsv_file):
tracks = {}
with open(tsv_file) as fp:
reader = csv.reader(fp, delimiter='\t')
next(reader, None) # skip header
for row in reader:
track_id = row[0]
tracks[track_id] = {
'path': row[3].replace('.mp3', '.npy'),
'tags': row[5:],
}
return tracks
class Predict(object):
def __init__(self, config):
self.model_type = config.model_type
self.model_load_path = config.model_load_path
self.dataset = config.dataset
self.data_path = config.data_path
self.batch_size = config.batch_size
self.is_cuda = torch.cuda.is_available()
self.build_model()
self.get_dataset()
self.mod = config.mod
self.rate = config.rate
self.PRESETS = {
"radio": ["0.01,1", "-90,-90,-70,-70,-60,-20,0,0", "-5"],
"film standard": ["0.1,0.3", "-90,-90,-70,-64,-43,-37,-31,-31,-21,-21,0,-20", "0", "0", "0.1"],
"film light": ["0.1,0.3", "-90,-90,-70,-64,-53,-47,-41,-41,-21,-21,0,-20", "0", "0", "0.1"],
"music standard": ["0.1,0.3", "-90,-90,-70,-58,-55,-43,-31,-31,-21,-21,0,-20", "0", "0", "0.1"],
"music light": ["0.1,0.3", "-90,-90,-70,-58,-65,-53,-41,-41,-21,-21,0,-11", "0", "0", "0.1"],
"speech": ["0.1,0.3", "-90,-90,-70,-55,-50,-35,-31,-31,-21,-21,0,-20", "0", "0", "0.1"]
}
self.preset_dict = {1: "radio",
2: "film standard",
3: "film light",
4: "music standard",
5: "music light",
6: "speech"}
def get_model(self):
if self.model_type == 'fcn':
self.input_length = 29 * 16000
return Model.FCN()
elif self.model_type == 'musicnn':
self.input_length = 3 * 16000
return Model.Musicnn(dataset=self.dataset)
elif self.model_type == 'crnn':
self.input_length = 29 * 16000
return Model.CRNN()
elif self.model_type == 'sample':
self.input_length = 59049
return Model.SampleCNN()
elif self.model_type == 'se':
self.input_length = 59049
return Model.SampleCNNSE()
elif self.model_type == 'short':
self.input_length = 59049
return Model.ShortChunkCNN()
elif self.model_type == 'short_res':
self.input_length = 59049
return Model.ShortChunkCNN_Res()
elif self.model_type == 'attention':
self.input_length = 15 * 16000
return Model.CNNSA()
elif self.model_type == 'hcnn':
self.input_length = 5 * 16000
return Model.HarmonicCNN()
else:
print('model_type has to be one of [fcn, musicnn, crnn, sample, se, short, short_res, attention]')
def build_model(self):
self.model = self.get_model()
# cuda
if self.is_cuda:
self.model.cuda()
# load model
self.load(self.model_load_path)
def get_dataset(self):
if self.dataset == 'mtat':
self.test_list = np.load('./../split/mtat/test.npy')
self.binary = np.load('./../split/mtat/binary.npy')
if self.dataset == 'msd':
test_file = os.path.join('./../split/msd','filtered_list_test.cP')
test_list = pickle.load(open(test_file,'rb'), encoding='bytes')
self.test_list = [value for value in test_list if value.decode() not in skip_files]
id2tag_file = os.path.join('./../split/msd', 'msd_id_to_tag_vector.cP')
self.id2tag = pickle.load(open(id2tag_file,'rb'), encoding='bytes')
if self.dataset == 'jamendo':
test_file = os.path.join('./../split/mtg-jamendo', 'autotagging_top50tags-test.tsv')
self.file_dict= read_file(test_file)
self.test_list= list(self.file_dict.keys())
self.mlb = LabelBinarizer().fit(TAGS)
if self.dataset == 'jamendo-mood':
test_file = os.path.join('./../split/mtg-jamendo-mood', 'autotagging_moodtheme-test.tsv')
self.file_dict= read_file(test_file)
self.test_list= list(self.file_dict.keys())
self.mlb = LabelBinarizer().fit(TAGS)
def load(self, filename):
S = torch.load(filename)
self.model.load_state_dict(S)
def to_var(self, x):
if torch.cuda.is_available():
x = x.cuda()
return Variable(x)
def get_tensor(self, fn):
# load audio
if self.dataset == 'mtat':
npy_path = os.path.join(self.data_path, 'mtat', 'npy', fn.split('/')[1][:-3]) + 'npy'
elif self.dataset == 'msd':
msid = fn.decode()
filename = '{}/{}/{}/{}.npy'.format(msid[2], msid[3], msid[4], msid)
npy_path = os.path.join(self.data_path, filename)
elif self.dataset == 'jamendo':
filename = self.file_dict[fn]['path']
npy_path = os.path.join(self.data_path, filename)
elif self.dataset == 'jamendo-mood':
filename = self.file_dict[fn]['path']
npy_path = os.path.join(self.data_path, filename)
raw = np.load(npy_path, mmap_mode='r')
raw = self.modify(raw, self.rate, self.mod)
# split chunk
length = len(raw)
hop = (length - self.input_length) // self.batch_size
x = torch.zeros(self.batch_size, self.input_length)
for i in range(self.batch_size):
x[i] = torch.Tensor(raw[i*hop:i*hop+self.input_length]).unsqueeze(0)
return x
def modify(self, x, mod_rate, mod_type):
if mod_type == 'time_stretch':
return self.time_stretch(x, mod_rate)
elif mod_type == 'pitch_shift':
return self.pitch_shift(x, mod_rate)
elif mod_type == 'dynamic_range':
return self.dynamic_range_compression(x, mod_rate)
elif mod_type == 'white_noise':
return self.white_noise(x, mod_rate)
else:
print('choose from [time_stretch, pitch_shift, dynamic_range, white_noise]')
def time_stretch(self, x, rate):
'''
[2 ** (-.5), 2 ** (.5)]
'''
return librosa.effects.time_stretch(x, rate)
def pitch_shift(self, x, rate):
'''
[-1, 1]
'''
return librosa.effects.pitch_shift(x, 16000, rate)
def dynamic_range_compression(self, x, rate):
'''
[4, 6]
Music standard & Speech
'''
return self.sox(x, 16000, "compand", *self.PRESETS[self.preset_dict[rate]])
@staticmethod
def sox(x, fs, *args):
assert fs > 0
fdesc, infile = tempfile.mkstemp(suffix=".wav")
os.close(fdesc)
fdesc, outfile = tempfile.mkstemp(suffix=".wav")
os.close(fdesc)
psf.write(infile, x, fs)
try:
arguments = ["sox", infile, outfile, "-q"]
arguments.extend(args)
subprocess.check_call(arguments)
x_out, fs = psf.read(outfile)
x_out = x_out.T
if x.ndim == 1:
x_out = librosa.to_mono(x_out)
finally:
os.unlink(infile)
os.unlink(outfile)
return x_out
def white_noise(self, x, rate):
'''
[0.1, 0.4]
'''
n_frames = len(x)
noise_white = np.random.RandomState().randn(n_frames)
noise_fft = np.fft.rfft(noise_white)
values = np.linspace(1, n_frames * 0.5 + 1, n_frames // 2 + 1)
colored_filter = np.linspace(1, n_frames / 2 + 1, n_frames // 2 + 1) ** 0
noise_filtered = noise_fft * colored_filter
noise = librosa.util.normalize(np.fft.irfft(noise_filtered)) * (x.max())
if len(noise) < len(x):
x = x[:len(noise)]
return (1 - rate) * x + (noise * rate)
def get_auc(self, est_array, gt_array):
roc_aucs = metrics.roc_auc_score(gt_array, est_array, average='macro')
pr_aucs = metrics.average_precision_score(gt_array, est_array, average='macro')
return roc_aucs, pr_aucs
def test(self):
roc_auc, pr_auc, loss = self.get_test_score()
print('loss: %.4f' % loss)
print('roc_auc: %.4f' % roc_auc)
print('pr_auc: %.4f' % pr_auc)
def get_test_score(self):
self.model = self.model.eval()
est_array = []
gt_array = []
losses = []
reconst_loss = nn.BCELoss()
for line in tqdm.tqdm(self.test_list):
if self.dataset == 'mtat':
ix, fn = line.split('\t')
elif self.dataset == 'msd':
fn = line
if fn.decode() in skip_files:
continue
elif self.dataset == 'jamendo':
fn = line
elif self.dataset == 'jamendo-mood':
fn = line
# load and split
x = self.get_tensor(fn)
# ground truth
if self.dataset == 'mtat':
ground_truth = self.binary[int(ix)]
elif self.dataset == 'msd':
ground_truth = self.id2tag[fn].flatten()
elif self.dataset == 'jamendo':
ground_truth = np.sum(self.mlb.transform(self.file_dict[fn]['tags']), axis=0)
elif self.dataset == 'jamendo-mood':
ground_truth = np.sum(self.mlb.transform(self.file_dict[fn]['tags']), axis=0)
# forward
x = self.to_var(x)
y = torch.tensor([ground_truth.astype('float32') for i in range(self.batch_size)]).cuda()
out = self.model(x)
loss = reconst_loss(out, y)
losses.append(float(loss.data))
out = out.detach().cpu()
# estimate
estimated = np.array(out).mean(axis=0)
est_array.append(estimated)
gt_array.append(ground_truth)
est_array, gt_array = np.array(est_array), np.array(gt_array)
loss = np.mean(losses)
roc_auc, pr_auc = self.get_auc(est_array, gt_array)
return roc_auc, pr_auc, loss
if __name__ == '__main__':
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--num_workers', type=int, default=0)
parser.add_argument('--dataset', type=str, default='mtat', choices=['mtat', 'msd', 'jamendo','jamendo-mood'])
parser.add_argument('--model_type', type=str, default='fcn',
choices=['fcn', 'musicnn', 'crnn', 'sample', 'se', 'short', 'short_res', 'attention', 'hcnn'])
parser.add_argument('--batch_size', type=int, default=16)
parser.add_argument('--model_load_path', type=str, default='.')
parser.add_argument('--data_path', type=str, default='./data')
parser.add_argument('--mod', type=str, default='time_stretch')
parser.add_argument('--rate', type=float, default=0)
config = parser.parse_args()
p = Predict(config)
p.test()
| 39.027108 | 1,080 | 0.56317 | 1,635 | 12,957 | 4.316208 | 0.223853 | 0.022956 | 0.025507 | 0.019272 | 0.225167 | 0.190591 | 0.166926 | 0.117614 | 0.093666 | 0.07822 | 0 | 0.032525 | 0.281006 | 12,957 | 331 | 1,081 | 39.145015 | 0.724989 | 0.025237 | 0 | 0.161417 | 0 | 0.023622 | 0.17918 | 0.058687 | 0 | 0 | 0 | 0 | 0.003937 | 1 | 0.066929 | false | 0 | 0.086614 | 0 | 0.248032 | 0.019685 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3143e4df394889222436d2c1bdb781765f3da6bd | 223 | py | Python | example_bot/bot.py | JakeCover/Flare-DiscordPy | 24cc2541a6ef548583e46d58ae18abe72da5f37f | [
"MIT"
] | 1 | 2021-04-02T20:16:03.000Z | 2021-04-02T20:16:03.000Z | example_bot/bot.py | JakeCover/Flare-DiscordPy | 24cc2541a6ef548583e46d58ae18abe72da5f37f | [
"MIT"
] | null | null | null | example_bot/bot.py | JakeCover/Flare-DiscordPy | 24cc2541a6ef548583e46d58ae18abe72da5f37f | [
"MIT"
] | null | null | null | import os
from discord.ext.commands import Bot
from Flare import Flare
bot = Bot("~~")
bot.add_cog(Flare(bot))
@bot.command("ping")
async def ping_pong(ctx):
ctx.send("pong")
bot.run(os.environ.get("BOT_TOKEN"))
| 13.117647 | 36 | 0.695067 | 37 | 223 | 4.108108 | 0.567568 | 0.118421 | 0.144737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139013 | 223 | 16 | 37 | 13.9375 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0.085202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
314a3567674f4832f50804842163798ba6755e31 | 1,322 | py | Python | 3rdParty/boost/1.71.0/libs/python/test/iterator.py | rajeev02101987/arangodb | 817e6c04cb82777d266f3b444494140676da98e2 | [
"Apache-2.0"
] | 12,278 | 2015-01-29T17:11:33.000Z | 2022-03-31T21:12:00.000Z | 3rdParty/boost/1.71.0/libs/python/test/iterator.py | rajeev02101987/arangodb | 817e6c04cb82777d266f3b444494140676da98e2 | [
"Apache-2.0"
] | 9,469 | 2015-01-30T05:33:07.000Z | 2022-03-31T16:17:21.000Z | 3rdParty/boost/1.71.0/libs/python/test/iterator.py | rajeev02101987/arangodb | 817e6c04cb82777d266f3b444494140676da98e2 | [
"Apache-2.0"
] | 892 | 2015-01-29T16:26:19.000Z | 2022-03-20T07:44:30.000Z | # Copyright David Abrahams 2004. Distributed under the Boost
# Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
from __future__ import print_function
'''
>>> from iterator_ext import *
>>> from input_iterator import *
>>> x = list_int()
>>> x.push_back(1)
>>> x.back()
1
>>> x.push_back(3)
>>> x.push_back(5)
>>> for y in x:
... print(y)
1
3
5
>>> z = range(x)
>>> for y in z:
... print(y)
1
3
5
Range2 wraps a transform_iterator which doubles the elements it
traverses. This proves we can wrap input iterators
>>> z2 = range2(x)
>>> for y in z2:
... print(y)
2
6
10
>>> l2 = two_lists()
>>> for y in l2.primes:
... print(y)
2
3
5
7
11
13
>>> for y in l2.evens:
... print(y)
2
4
6
8
10
12
>>> ll = list_list()
>>> ll.push_back(x)
>>> x.push_back(7)
>>> ll.push_back(x)
>>> for a in ll: #doctest: +NORMALIZE_WHITESPACE
... for b in a:
... print(b, end='')
... print('')
...
1 3 5
1 3 5 7
'''
def run(args = None):
import sys
import doctest
if args is not None:
sys.argv = args
return doctest.testmod(sys.modules.get(__name__))
if __name__ == '__main__':
print("running...")
import sys
status = run()[0]
if (status == 0): print("Done.")
sys.exit(status)
| 16.734177 | 71 | 0.599849 | 220 | 1,322 | 3.45 | 0.445455 | 0.063241 | 0.039526 | 0.031621 | 0.023715 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055992 | 0.229955 | 1,322 | 78 | 72 | 16.948718 | 0.689587 | 0.133888 | 0 | 0.153846 | 0 | 0 | 0.065156 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.461538 | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
315f8e1d96aaa1b3755c0089f370b7d2dae3e33b | 646 | py | Python | setup.py | goofmint/qualityforward-py | 299c11cb4769fb8c42bfd2d553a5c1c1f042d2de | [
"MIT"
] | null | null | null | setup.py | goofmint/qualityforward-py | 299c11cb4769fb8c42bfd2d553a5c1c1f042d2de | [
"MIT"
] | null | null | null | setup.py | goofmint/qualityforward-py | 299c11cb4769fb8c42bfd2d553a5c1c1f042d2de | [
"MIT"
] | null | null | null | import setuptools
setuptools.setup(
name="qualityforward",
version="1.1",
author="Atsushi Nakatsugawa",
author_email="atsushi@moongift.jp",
description="Python library for QualityForward API",
long_description="This is python library for QualityForward API. QualityForward is cloud based test management service.",
long_description_content_type="text/markdown",
url="https://cloud.veriserve.co.jp/",
packages=setuptools.find_packages(),
classifiers=[
"Programming Language :: Python :: 3.7",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
]
)
| 34 | 125 | 0.695046 | 70 | 646 | 6.328571 | 0.7 | 0.058691 | 0.072235 | 0.13544 | 0.148984 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007619 | 0.187307 | 646 | 18 | 126 | 35.888889 | 0.83619 | 0 | 0 | 0 | 0 | 0 | 0.534056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31658cd3302ac85eb923cde2f7d6f6b205979a8e | 675 | py | Python | examples/Api/channels.py | asheshambasta/csound-expression | 290567231c1d658e07ba882b1d1c726c96af67ce | [
"BSD-3-Clause"
] | null | null | null | examples/Api/channels.py | asheshambasta/csound-expression | 290567231c1d658e07ba882b1d1c726c96af67ce | [
"BSD-3-Clause"
] | null | null | null | examples/Api/channels.py | asheshambasta/csound-expression | 290567231c1d658e07ba882b1d1c726c96af67ce | [
"BSD-3-Clause"
] | null | null | null | import csnd6
class Control:
def __init__(self, volume, frequency):
engine = csnd6.Csound()
engine.SetOption("-odac")
engine.Compile("osc.csd")
thread = csnd6.CsoundPerformanceThread(engine)
thread.Play()
self.engine = engine
self.thread = thread
self.set_volume(volume)
self.set_frequency(frequency)
def set_volume(self, volume):
self.engine.SetChannel("volume", volume)
def set_frequency(self, frequency):
self.engine.SetChannel("frequency", frequency)
def close(self):
self.thread.Stop()
self.thread.Join()
| 25 | 55 | 0.58963 | 68 | 675 | 5.735294 | 0.352941 | 0.076923 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006383 | 0.303704 | 675 | 27 | 56 | 25 | 0.823404 | 0 | 0 | 0 | 0 | 0 | 0.039941 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.052632 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
316f3a2d1cd58368907db8ec0798ba08d6c7d1c7 | 527 | py | Python | util/mccLog.py | ccchooko/webControlClient | f12cf76d364c5270166c99a508d08999c7ed920c | [
"Apache-2.0"
] | null | null | null | util/mccLog.py | ccchooko/webControlClient | f12cf76d364c5270166c99a508d08999c7ed920c | [
"Apache-2.0"
] | null | null | null | util/mccLog.py | ccchooko/webControlClient | f12cf76d364c5270166c99a508d08999c7ed920c | [
"Apache-2.0"
] | null | null | null | #-*-coding:utf8-*-
import logging
from datetime import datetime
class mccLog(object):
def __init__(self):
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(levelname)s %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
filename= datetime.now().strftime("%Y%m%d%H%M%S") + '.log',
filemode='a')
def mccWriteLog(self, logContent):
logging.info(logContent)
def mccError(self, errorContent):
logging.error(errorContent)
| 29.277778 | 75 | 0.580645 | 60 | 527 | 5.033333 | 0.616667 | 0.013245 | 0.019868 | 0.02649 | 0.039735 | 0.039735 | 0 | 0 | 0 | 0 | 0 | 0.002564 | 0.259962 | 527 | 17 | 76 | 31 | 0.771795 | 0.032258 | 0 | 0 | 0 | 0 | 0.139489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31717266252a0d2d31a5a6a58dab2e9b98e48596 | 821 | py | Python | tcptofpc.py | catenacyber/fuzzpcap | 100db6cefd77238f623b5127a3efed01c7333cad | [
"MIT"
] | 6 | 2021-04-09T03:13:39.000Z | 2022-01-26T14:49:31.000Z | tcptofpc.py | catenacyber/fuzzpcap | 100db6cefd77238f623b5127a3efed01c7333cad | [
"MIT"
] | null | null | null | tcptofpc.py | catenacyber/fuzzpcap | 100db6cefd77238f623b5127a3efed01c7333cad | [
"MIT"
] | null | null | null | #tshark -r input.pcap -qz "follow,tcp,raw,0"
import struct
import sys
import binascii
import subprocess
result = subprocess.Popen( ["tshark", "-r", sys.argv[1], "-qz", "follow,tcp,raw,0"],
stdout=subprocess.PIPE)
sys.stdout.buffer.write(b"FPC\x80")
for i in range(4):
result.stdout.readline()
dp=result.stdout.readline().split(b":")[2]
sp=result.stdout.readline().split(b":")[2]
sys.stdout.buffer.write(struct.pack('>H', int(sp)))
sys.stdout.buffer.write(struct.pack('>H', int(dp)))
for l in result.stdout.readlines():
s2c = 0
if l[0] == 9:
l = l[1:]
s2c = 1
try:
r = binascii.unhexlify(l[:-1])
except:
continue
sys.stdout.buffer.write(struct.pack('>B', int(s2c)))
sys.stdout.buffer.write(r)
sys.stdout.buffer.write(b"FPC0")
| 27.366667 | 84 | 0.615104 | 125 | 821 | 4.04 | 0.384 | 0.106931 | 0.178218 | 0.237624 | 0.443564 | 0.30099 | 0.134653 | 0.134653 | 0 | 0 | 0 | 0.027108 | 0.19123 | 821 | 29 | 85 | 28.310345 | 0.733434 | 0.052375 | 0 | 0 | 0 | 0 | 0.059202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.16 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3177cd1e5b203f5ec64cb02bb1434a9783422e6d | 312 | py | Python | app.py | rhedgeco/test_plaid_webapp | e821e354796da9f35689eb386df9366407e2907b | [
"MIT"
] | null | null | null | app.py | rhedgeco/test_plaid_webapp | e821e354796da9f35689eb386df9366407e2907b | [
"MIT"
] | null | null | null | app.py | rhedgeco/test_plaid_webapp | e821e354796da9f35689eb386df9366407e2907b | [
"MIT"
] | null | null | null | from plaid import Client
from backend.link_token import LinkToken
from general_falcon_webserver import WebApp
client = Client(client_id='5e2e3527dd6924001167e8e8', secret='0b89f518880456b6f60020f481b3d7', environment='sandbox')
app = WebApp()
app.add_route('link', LinkToken(client))
app.launch_webserver()
| 24 | 117 | 0.817308 | 36 | 312 | 6.916667 | 0.611111 | 0.096386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144366 | 0.089744 | 312 | 12 | 118 | 26 | 0.732394 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 0.173077 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
31793cdf4264c68e2f922f5ab5df2e6e45071db9 | 539 | py | Python | kickeststats/exceptions.py | antimaLinux/kickscarper | c2607847f1c7ad8bc30014ab4e62f0976ace5f0f | [
"MIT"
] | null | null | null | kickeststats/exceptions.py | antimaLinux/kickscarper | c2607847f1c7ad8bc30014ab4e62f0976ace5f0f | [
"MIT"
] | 1 | 2020-10-14T06:44:12.000Z | 2020-10-14T06:44:12.000Z | kickeststats/exceptions.py | antimaLinux/kickscraper | c2607847f1c7ad8bc30014ab4e62f0976ace5f0f | [
"MIT"
] | null | null | null | """Exception utilities."""
class ParsingException(Exception):
pass
class EnvVariableNotSet(Exception):
def __init__(self, varname: str) -> None:
super(EnvVariableNotSet, self).__init__(f"Env variable [{varname}] not set.")
class InvalidLineUp(Exception):
pass
class UnsupportedLineUp(Exception):
def __init__(self, line_up_name: str) -> None:
super(UnsupportedLineUp, self).__init__(
f"Line-up [{line_up_name}] is not supported."
)
class InvalidTeamLineup(Exception):
pass
| 20.730769 | 85 | 0.684601 | 57 | 539 | 6.122807 | 0.45614 | 0.111748 | 0.103152 | 0.114613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198516 | 539 | 25 | 86 | 21.56 | 0.80787 | 0.037106 | 0 | 0.214286 | 0 | 0 | 0.146199 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.214286 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
317a91916b77974fe06fa6db1fb05dbc3398d9cf | 1,262 | py | Python | setup.py | holoyan/python-data-validation | e928c4131072c53cb8ace1fbaa83216f06ab6bfe | [
"MIT"
] | 3 | 2021-03-16T05:47:46.000Z | 2021-03-23T17:43:55.000Z | setup.py | holoyan/python-data-validation | e928c4131072c53cb8ace1fbaa83216f06ab6bfe | [
"MIT"
] | null | null | null | setup.py | holoyan/python-data-validation | e928c4131072c53cb8ace1fbaa83216f06ab6bfe | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
# read the contents of your README file
from os import path
this_directory = path.abspath(path.dirname(__file__))
with open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='pyva',
packages=find_packages(),
version='0.4.1',
license='MIT',
description='Simple and flexible python data validation library',
long_description=long_description,
long_description_content_type='text/markdown',
author='Artak',
author_email='artaksafaryanc@gmail.com',
url='https://github.com/holoyan/python-data-validation',
keywords=['data', 'validation', 'validator', 'data validator'],
install_requires=[ # I get to this in a second
'python-dateutil',
],
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Topic :: Software Development :: Build Tools',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
],
)
| 35.055556 | 73 | 0.652139 | 145 | 1,262 | 5.565517 | 0.613793 | 0.11772 | 0.154895 | 0.16109 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014056 | 0.210777 | 1,262 | 35 | 74 | 36.057143 | 0.796185 | 0.049921 | 0 | 0.0625 | 0 | 0 | 0.456522 | 0.020067 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
318a952b81c7d9540e9926622426293ecbdc84ee | 1,572 | py | Python | src/Application/PythonScriptModule/pymodules_old/apitest/rotate.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | null | null | null | src/Application/PythonScriptModule/pymodules_old/apitest/rotate.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | null | null | null | src/Application/PythonScriptModule/pymodules_old/apitest/rotate.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | 1 | 2021-09-04T12:37:34.000Z | 2021-09-04T12:37:34.000Z | import circuits
from PythonQt.QtGui import QQuaternion as Quat
from PythonQt.QtGui import QVector3D as Vec
import naali
COMPNAME = "rotation"
class RotationHandler(circuits.BaseComponent):
def __init__(self, entity=None, comp=None, changetype=None):
circuits.BaseComponent.__init__(self)
self.entity = entity
self.comp = comp
if self.comp is not None: #normal run, check for nonEC run now
# Todo: OnChanged() is deprecated
comp.connect("OnChanged()", self.onChanged)
self.rot = Quat.fromAxisAndAngle(Vec(0, 1, 0), 1)
def onChanged(self):
y = self.comp.GetAttribute('y')
self.rot = Quat.fromAxisAndAngle(Vec(0, y, 0), 1)
#print self.rot, y
@circuits.handler("update")
def update(self, frametime):
if self.entity is not None:
p = self.entity.placeable
ort = p.Orientation
ort *= self.rot
p.Orientation = ort
# else: #testing without EC, as a autoloaded module
# entid = 2088826547
# try:
# self.entity = naali.getEntity(entid)
# except:
# pass #not there (yet)
# else:
# self.entity.createComponent("EC_DynamicComponent")
# oldent = r.getEntity(ent.id)
# self.comp = oldent.dynamic
@circuits.handler("on_logout")
def on_logout(self, evid):
self.entity = None #XXX figure out proper unregistering, preferrably in componenthandler.py / EC_Script biz
| 34.933333 | 115 | 0.600509 | 181 | 1,572 | 5.149171 | 0.475138 | 0.075107 | 0.036481 | 0.049356 | 0.066524 | 0.066524 | 0 | 0 | 0 | 0 | 0 | 0.016408 | 0.302163 | 1,572 | 44 | 116 | 35.727273 | 0.833181 | 0.304707 | 0 | 0 | 0 | 0 | 0.032498 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.346154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
318fbfd55bdcd7ac71d0dc2747eb31643026f551 | 3,021 | py | Python | bin/analysis/ipa/constraints/split.py | ncbray/pystream | 70bba5646d6512adb6803564c22268d3424c66d8 | [
"Apache-2.0"
] | 6 | 2015-09-19T18:22:33.000Z | 2020-11-29T15:21:17.000Z | bin/analysis/ipa/constraints/split.py | ncbray/pystream | 70bba5646d6512adb6803564c22268d3424c66d8 | [
"Apache-2.0"
] | 1 | 2015-08-04T08:03:46.000Z | 2015-08-04T08:03:46.000Z | bin/analysis/ipa/constraints/split.py | ncbray/pystream | 70bba5646d6512adb6803564c22268d3424c66d8 | [
"Apache-2.0"
] | 1 | 2019-12-09T08:27:09.000Z | 2019-12-09T08:27:09.000Z | # Copyright 2011 Nicholas Bray
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from language.python import ast
from . base import Constraint
from .. calling import cpa
class Splitter(Constraint):
def __init__(self, src):
assert src.isNode(), src
self.src = src
self.dst = []
self.callbacks = []
def addSplitCallback(self, callback):
self.callbacks.append(callback)
if self.objects: callback()
def attach(self):
self.src.addNext(self)
def localName(self):
return 'split_temp'
def makeTarget(self, context):
lcl = context.local(ast.Local(self.localName()))
lcl.addPrev(self)
self.dst.append(lcl)
return lcl
def makeConsistent(self, context):
# Make constraint consistent
if self.src.values:
self.changed(context, self.src, self.src.values)
if self.src.critical.values:
self.criticalChanged(context, self.src, self.src.critical.values)
def criticalChanged(self, context, node, diff):
for dst in self.dst:
dst.critical.updateValues(context, dst, diff)
def doNotify(self):
for callback in self.callbacks:
callback()
def isSplit(self):
return True
class TypeSplitConstraint(Splitter):
def __init__(self, src):
Splitter.__init__(self, src)
self.objects = {}
self.megamorphic = False
def localName(self):
return 'type_split_temp'
def types(self):
return self.objects.keys()
def makeMegamorphic(self):
assert not self.megamorphic
self.megamorphic = True
self.objects.clear()
self.objects[cpa.anyType] = self.src
self.doNotify()
def changed(self, context, node, diff):
if self.megamorphic: return
changed = False
for obj in diff:
cpaType = obj.cpaType()
if cpaType not in self.objects:
if len(self.objects) >= 4:
self.makeMegamorphic()
break
else:
temp = self.makeTarget(context)
self.objects[cpaType] = temp
changed = True
else:
temp = self.objects[cpaType]
temp.updateSingleValue(obj)
else:
if changed: self.doNotify()
# TODO prevent over splitting? All objects with the same qualifier should be grouped?
class ExactSplitConstraint(Splitter):
def __init__(self, src):
Splitter.__init__(self, src)
self.objects = {}
def localName(self):
return 'exact_split_temp'
def changed(self, context, node, diff):
changed = False
for obj in diff:
if obj not in self.objects:
temp = self.makeTarget(context)
self.objects[obj] = temp
changed = True
else:
temp = self.objects[obj]
temp.updateSingleValue(obj)
if changed: self.doNotify()
| 23.97619 | 86 | 0.716319 | 411 | 3,021 | 5.20438 | 0.335766 | 0.045816 | 0.025713 | 0.019635 | 0.183263 | 0.163628 | 0.080411 | 0.048621 | 0.048621 | 0.048621 | 0 | 0.003638 | 0.181066 | 3,021 | 125 | 87 | 24.168 | 0.860954 | 0.219133 | 0 | 0.333333 | 0 | 0 | 0.017499 | 0 | 0 | 0 | 0 | 0.008 | 0.02381 | 1 | 0.202381 | false | 0 | 0.035714 | 0.059524 | 0.345238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3191404cbd9e515326e447e2206e0f73b067c5bc | 5,866 | py | Python | test/worker/net.py | ameserole/Naumachia | dc13c33c5fcf053c74dfce8351a696d28857fd9d | [
"MIT"
] | null | null | null | test/worker/net.py | ameserole/Naumachia | dc13c33c5fcf053c74dfce8351a696d28857fd9d | [
"MIT"
] | null | null | null | test/worker/net.py | ameserole/Naumachia | dc13c33c5fcf053c74dfce8351a696d28857fd9d | [
"MIT"
] | null | null | null | import fcntl
import os
import socket
import struct
import warnings
import subprocess
import logging
import base64
logger = logging.getLogger(__name__)
# Dummy socket used for fcntl functions
_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
class AddrMeta(type):
@property
def maxvalue(cls):
return (0x1 << (cls.bytelen * 8)) - 1
class Addr(metaclass=AddrMeta):
bytelen = 0
def __init__(self, addr):
self._str = None
self._int = None
self._bytes = None
if isinstance(addr, type(self)):
self._str = addr._str
self._bytes = addr._bytes
self._int = addr._int
elif isinstance(addr, str):
self._str = addr
elif isinstance(addr, int):
self._int = addr
elif isinstance(addr, bytes):
if len(addr) == self.bytelen:
self._bytes = addr
else:
self._str = addr.decode('utf-8')
else:
raise ValueError('Cannot create {!s} from {!s}'.format(type(self), type(addr)))
# Operations
def __and__(self, other):
return type(self)(int(self) & int(other))
def __or__(self, other):
return type(self)(int(self) | int(other))
def __xor__(self, other):
return type(self)(int(self) ^ int(other))
def __invert__(self):
return type(self)(int(self) ^ self.maxvalue)
# Conversions
def __str__(self):
if self._str is None:
self._str = self.bytes_to_str(bytes(self))
return self._str
def __int__(self):
return int.from_bytes(bytes(self), byteorder='big')
def __bytes__(self):
if self._bytes is None:
if self._str is not None:
self._bytes = self.str_to_bytes(self._str)
elif self._int is not None:
self._bytes = self._int.to_bytes(self.bytelen, byteorder='big')
return self._bytes
def __repr__(self):
return '<{0}.{1} {2!s}>'.format(__name__, type(self).__name__, self)
class Ip(Addr):
bytelen = 4
@staticmethod
def bytes_to_str(b):
return socket.inet_ntoa(b)
@staticmethod
def str_to_bytes(s):
return socket.inet_aton(s)
def slash(self):
x, i = int(self), 0
while x & 0x1 == 0:
x >>= 1
i += 1
return 32 - i
class Mac(Addr):
bytelen = 6
@staticmethod
def bytes_to_str(b):
return ':'.join('%02x' % byte for byte in b)
@staticmethod
def str_to_bytes(s):
return bytes.fromhex(s.replace(':', ''))
def _ifctl(ifname, code):
if isinstance(ifname, str):
ifname = ifname.encode('utf-8')
return fcntl.ioctl(
_socket.fileno(),
code,
struct.pack('256s', ifname[:15])
)
def ifaddr(ifname):
return Ip(_ifctl(ifname, 0x8915)[20:24]) # SIOCGIFADDR
def ifmask(ifname):
return Ip(_ifctl(ifname, 0x891b)[20:24]) # SIOCGIFNETMASK
def ifhwaddr(ifname):
return Mac(_ifctl(ifname, 0x8927)[18:24]) # SIOCGIFHWADDR
def cidr(ip, mask):
return "{!s}/{:d}".format(ip, mask.slash())
def parsecidr(ipnet):
ipstr, maskstr = ipnet.split('/')
ip = Ip(ipstr)
mask = Ip(0xffffffff ^ ((0x00000001 << (32-int(maskstr)))-1))
return ip, mask
def ifcidr(ifname):
return cidr(ifaddr(ifname), ifmask(ifname))
class OpenVpnError(Exception):
def __init__(self, instance, msg):
self.instance = instance
super().__init__(msg)
class OpenVpn:
exe = 'openvpn'
initmsg = b'Initialization Sequence Completed'
def __init__(self, **kwargs):
if 'daemonize' in kwargs:
warnings.warn("This class will not be able to close a daemonized tunnel", warnings.Warning)
self.options = kwargs
self.initialized = False
self._process = None
def args(self):
result = []
for name, value in self.options.items():
result.append('--{!s}'.format(name))
# None is special to indicate the option have no value
if value is not None:
result.append(str(value))
return result
def check(self):
if self._process is not None:
self._process.poll()
code = self._process.returncode
if code is not None and code != 0:
raise OpenVpnError(self, "`openvpn {:s}` exited with error code: {:d}".format(" ".join(self.args()), code))
def running(self):
return self._process is not None and self._process.poll() is None
@staticmethod
def maketun():
os.makedirs('/dev/net', exist_ok=True)
subprocess.run(['mknod', '/dev/net/tun', 'c', '10', '200'], check=True)
def connect(self):
if not os.path.exists('/dev/net/tun'):
self.maketun()
if not self.running():
self.initialized = False
self._process = subprocess.Popen(
[self.exe] + self.args(),
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
self.check()
def disconnect(self):
if self.running():
self._process.terminate()
os.waitpid(self._process.pid, 0)
def waitforinit(self):
if not self.initialized:
for line in self._process.stdout:
logger.debug("openvpn: %s", line.decode('utf-8').strip())
if self.initmsg in line:
self.initialized = True
break
else:
self.check()
raise OpenVpnError(self, "OpenVPN exited with code 0, but did not display init msg")
def __enter__(self):
self.connect()
return self
def __exit__(self, *args, **kwargs):
self.disconnect()
| 27.283721 | 123 | 0.574668 | 717 | 5,866 | 4.524407 | 0.27894 | 0.025894 | 0.016646 | 0.020962 | 0.144883 | 0.091554 | 0.07799 | 0.058261 | 0.037916 | 0.037916 | 0 | 0.018836 | 0.303103 | 5,866 | 214 | 124 | 27.411215 | 0.774706 | 0.026253 | 0 | 0.096386 | 0 | 0 | 0.061185 | 0 | 0 | 0 | 0.007714 | 0 | 0 | 1 | 0.198795 | false | 0 | 0.048193 | 0.10241 | 0.457831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
319239aac557dc3d968ccc908a828a9cd5002f12 | 2,161 | py | Python | kunrei.py | kosugi/alfred.romanizer | d2a3b4a9883f15101893e385f14e6dca115c1d7d | [
"BSD-2-Clause"
] | null | null | null | kunrei.py | kosugi/alfred.romanizer | d2a3b4a9883f15101893e385f14e6dca115c1d7d | [
"BSD-2-Clause"
] | null | null | null | kunrei.py | kosugi/alfred.romanizer | d2a3b4a9883f15101893e385f14e6dca115c1d7d | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
basic_table = dict(map(lambda s: s.split(u'\t'), u'''
あ a
い i
う u
え e
お o
か ka
き ki
く ku
け ke
こ ko
さ sa
し si
す su
せ se
そ so
た ta
ち ti
つ tu
て te
と to
な na
に ni
ぬ nu
ね ne
の no
は ha
ひ hi
ふ hu
へ he
ほ ho
ま ma
み mi
む mu
め me
も mo
や ya
ゆ yu
よ yo
ら ra
り ri
る ru
れ re
ろ ro
わ wa
を wo
ぁ a
ぃ i
ぅ u
ぇ e
ぉ o
が ga
ぎ gi
ぐ gu
げ ge
ご go
ざ za
じ zi
ず zu
ぜ ze
ぞ zo
だ da
ぢ di
づ du
で de
ど do
ば ba
び bi
ぶ bu
べ be
ぼ bo
ぱ pa
ぴ pi
ぷ pu
ぺ pe
ぽ po
きゃ kya
きゅ kyu
きょ kyo
しゃ sya
しゅ syu
しょ syo
ちゃ tya
ちゅ tyu
ちょ tyo
にゃ nya
にゅ nyu
にょ nyo
ひゃ hya
ひゅ hyu
ひょ hyo
みゃ mya
みゅ myu
みょ myo
りゃ rya
りゅ ryu
りょ ryo
ぎゃ gya
ぎゅ gyu
ぎょ gyo
じゃ zya
じゅ zyu
じょ zyo
でゃ dya
でゅ dyu
でょ dyo
びゃ bya
びゅ byu
びょ byo
ぴゃ pya
ぴゅ pyu
ぴょ pyo
クヮ kwa
グヮ gwa
ア a
イ i
ウ u
エ e
オ o
カ ka
キ ki
ク ku
ケ ke
コ ko
サ sa
シ si
ス su
セ se
ソ so
タ ta
チ ti
ツ tu
テ te
ト to
ナ na
ニ ni
ヌ nu
ネ ne
ノ no
ハ ha
ヒ hi
フ hu
ヘ he
ホ ho
マ ma
ミ mi
ム mu
メ me
モ mo
ヤ ya
ユ yu
ヨ yo
ラ ra
リ ri
ル ru
レ re
ロ ro
ワ wa
ヲ wo
ァ a
ィ i
ゥ u
ェ e
ォ o
ガ ga
ギ gi
グ gu
ゲ ge
ゴ go
ザ za
ジ zi
ズ zu
ゼ ze
ゾ zo
ダ da
ヂ di
ヅ du
デ de
ド do
バ ba
ビ bi
ブ bu
ベ be
ボ bo
パ pa
ピ pi
プ pu
ペ pe
ポ po
キャ kya
キュ kyu
キョ kyo
シャ sya
シュ syu
ショ syo
チャ tya
チュ tyu
チョ tyo
ニャ nya
ニュ nyu
ニョ nyo
ヒャ hya
ヒュ hyu
ヒョ hyo
ミャ mya
ミュ myu
ミョ myo
リャ rya
リュ ryu
リョ ryo
ギャ gya
ギュ gyu
ギョ gyo
ジャ zya
ジュ zyu
ジョ zyo
デャ dya
デュ dyu
デョ dyo
ビャ bya
ビュ byu
ビョ byo
ピャ pya
ピュ pyu
ピョ pyo
くゎ kwa
ぐゎ gwa
'''.strip(u'\n').split(u'\n')))
long_sound_table = dict(u'aâ iî uû eê oô'.split())
long_sounds = u'aa ii uu ee oo ou'.split()
def normalize(s):
roman = u''
l = len(s)
n = 0
while n < l:
c1 = s[n]
c2 = s[n:n+2]
c3 = s[n+1:n+2]
if roman and c1 == u'ー':
c1 = u''
if roman[-1] in u'aiueo':
roman = roman[:-1] + long_sound_table[roman[-1]]
elif c2 in long_sounds:
c1 = long_sound_table[c1]
n += 1
elif c1 in u'んン':
c1 = u'n'
if c3 and c3 in u'aiueoy':
c1 += u"'"
elif c1 in u'っッ':
if c3 in u'bcdfghjklmnpqrstvwxyz':
c1 = c3
else:
c1 = u''
roman += c1
n += 1
return roman
| 8.248092 | 64 | 0.553447 | 592 | 2,161 | 2.005068 | 0.650338 | 0.012637 | 0.035383 | 0.015164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021707 | 0.381768 | 2,161 | 261 | 65 | 8.279693 | 0.866766 | 0.009718 | 0 | 0.015564 | 0 | 0 | 0.626286 | 0.009822 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003891 | false | 0 | 0 | 0 | 0.007782 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3197d22a066fe34f613aab3ff51fd1a605e176ab | 2,895 | py | Python | 18.part2.py | elp2/advent_of_code_2018 | 0d359422dd04b0849481796005e97d05c30e9eb4 | [
"Apache-2.0"
] | 1 | 2021-12-02T15:19:36.000Z | 2021-12-02T15:19:36.000Z | 18.part2.py | elp2/advent_of_code_2018 | 0d359422dd04b0849481796005e97d05c30e9eb4 | [
"Apache-2.0"
] | null | null | null | 18.part2.py | elp2/advent_of_code_2018 | 0d359422dd04b0849481796005e97d05c30e9eb4 | [
"Apache-2.0"
] | null | null | null | from collections import defaultdict
def return_default():
return 0
REAL=open("18.txt").readlines()
SAMPLE=open("18.sample").readlines()
OPEN="."
TREE="|"
LUMBERYARD="#"
import copy
def safe_grid_get(grid, x, y, missing=None):
if x < 0 or y < 0:
return missing
if y >= len(grid):
return missing
if x >= len(grid[y]):
return missing
return grid[y][x]
def parse_lines(lines):
return list(map(lambda l: list(l.strip()), lines))
def next_sq(grid, x, y):
around = defaultdict(return_default)
for dy in [-1, 0, 1]:
for dx in [-1, 0, 1]:
if dx == 0 and dy == 0:
continue
a = safe_grid_get(grid, x + dx, y + dy)
if a is not None:
around[a] += 1
here = grid[y][x]
if here == OPEN:
if around[TREE] >= 3:
return TREE
else:
return OPEN
elif here == TREE:
if around[LUMBERYARD] >= 3:
return LUMBERYARD
else:
return TREE
else:
assert here == LUMBERYARD
if around[LUMBERYARD] >= 1 and around[TREE] >= 1:
return LUMBERYARD
else:
return OPEN
def resource_value(board):
lands = defaultdict(return_default)
for y in range(len(board)):
for x in range(len(board[0])):
lands[board[y][x]] += 1
return lands[TREE] * lands[LUMBERYARD]
def solve(lines, minutes):
cache = {}
old_board = parse_lines(lines)
for minute in range(minutes):
board = copy.deepcopy(old_board)
for y in range(len(board)):
for x in range(len(board[0])):
board[y][x] = next_sq(old_board, x, y)
old_board = board
key = "\n".join(map(lambda r: "".join(r), board))
# print(key)
if key in cache:
print(minute, cache[key])
else:
cache[key] = (minute, resource_value(board))
return resource_value(board)
sample = solve(SAMPLE, 10)
assert sample == 1147
print("*** SAMPLE PASSED ***")
# print(solve(REAL, 10000))
loop = """598 570 191420
599 571 189168
600 572 185082
601 573 185227
602 574 185320
603 575 185790
604 576 186120
605 577 189956
606 578 190068
607 579 191080
608 580 190405 # too low
609 581 193795
610 582 190950
611 583 193569
612 584 194350
613 585 196308
614 586 195364
615 587 197911
616 588 199755
617 589 201144
618 590 201607
619 591 203580
620 592 201260
621 593 201950
622 594 200675 # TOO HIGH
623 595 202208
624 596 200151
625 597 198948
626 570 191420
627 571 189168
628 572 185082
629 573 185227
630 574 185320
631 575 185790
632 576 186120
633 577 189956
634 578 190068
635 579 191080
636 580 190405
637 581 193795"""
num = 1000000000
nmod = 28
for num in range(570, 638):
print(num, (num - 570) % nmod + 570)
num = 1000000000 - 1
print(num, (num - 570) % nmod + 570 + nmod) | 21.444444 | 57 | 0.601382 | 433 | 2,895 | 3.979215 | 0.408776 | 0.024376 | 0.023215 | 0.034823 | 0.088218 | 0.069646 | 0.04527 | 0.04527 | 0.04527 | 0.04527 | 0 | 0.270864 | 0.292228 | 2,895 | 135 | 58 | 21.444444 | 0.570034 | 0.012435 | 0 | 0.156522 | 0 | 0 | 0.231362 | 0 | 0 | 0 | 0 | 0 | 0.017391 | 1 | 0.052174 | false | 0.008696 | 0.017391 | 0.017391 | 0.191304 | 0.034783 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
319925dc3819c9097723899fe8aef60117e396cb | 817 | py | Python | src/validate_model.py | mareklinka/esk-form-scanner-model | 30af9e1c5d652b3310222bc55f92e964bc524f2e | [
"MIT"
] | null | null | null | src/validate_model.py | mareklinka/esk-form-scanner-model | 30af9e1c5d652b3310222bc55f92e964bc524f2e | [
"MIT"
] | null | null | null | src/validate_model.py | mareklinka/esk-form-scanner-model | 30af9e1c5d652b3310222bc55f92e964bc524f2e | [
"MIT"
] | null | null | null |
import data_providers as gen
import model_storage as storage
import numpy as np
import data_visualizer
import time
def evaluate(model_name):
"""
Evaluates the model stored in the specified file.
Parameters
----------
model_name : string
The name of the file to read the model from
"""
model = storage.load_model(model_name)
model.summary()
start = time.clock()
score = model.evaluate_generator(gen.finite_generator("data\\validation"), steps=30)
end = time.clock()
print("Time per image: {} ".format((end-start)/300))
print (model.metrics_names)
print (score)
predictions = model.predict_generator(gen.finite_generator("data\\validation"), steps=30)
data_visualizer.draw_bounding_boxes("data\\validation", predictions, "data\\results") | 24.757576 | 93 | 0.69645 | 105 | 817 | 5.27619 | 0.47619 | 0.048736 | 0.064982 | 0.097473 | 0.173285 | 0.173285 | 0.173285 | 0.173285 | 0 | 0 | 0 | 0.010558 | 0.188494 | 817 | 33 | 94 | 24.757576 | 0.825038 | 0.172583 | 0 | 0 | 0 | 0 | 0.124224 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.375 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
319ec31c5bec95f71fc86ec8dcab8ee33a9ec4c6 | 412 | py | Python | CeV - Gustavo Guanabara/exerc033.py | us19861229c/Meu-aprendizado-Python | 575c0714ac5377ff3122f4cb57952969e07ba89b | [
"Unlicense"
] | 1 | 2021-12-11T19:53:41.000Z | 2021-12-11T19:53:41.000Z | CeV - Gustavo Guanabara/exerc033.py | us19861229c/Meu-aprendizado-Python | 575c0714ac5377ff3122f4cb57952969e07ba89b | [
"Unlicense"
] | null | null | null | CeV - Gustavo Guanabara/exerc033.py | us19861229c/Meu-aprendizado-Python | 575c0714ac5377ff3122f4cb57952969e07ba89b | [
"Unlicense"
] | null | null | null | #033: ler tres numeros e dizer qual o maior e qual o menor:
print("Digite 3 numeros:")
maiorn = 0
n = int(input("Numero 1: "))
if n > maiorn:
maiorn = n
menorn = n
n = int(input("Numero 2: "))
if n > maiorn:
maiorn = n
if n < menorn:
menorn = n
n = int(input("Numero 3: "))
if n > maiorn:
maiorn = n
if n < menorn:
menorn = n
print(f"o maior numero foi {maiorn} e o menor foi {menorn}")
| 20.6 | 60 | 0.601942 | 73 | 412 | 3.39726 | 0.342466 | 0.060484 | 0.108871 | 0.181452 | 0.471774 | 0.407258 | 0.258065 | 0.258065 | 0.258065 | 0.258065 | 0 | 0.02623 | 0.259709 | 412 | 19 | 61 | 21.684211 | 0.786885 | 0.140777 | 0 | 0.647059 | 0 | 0 | 0.274788 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31ade7fa4d1318ceab82ad2826fc1a70514e9372 | 951 | py | Python | AxesFrame.py | Toyuri453/RSSP-Python-demo | 0adf92ad765b5a9334d7e2830611b98c8c4eb26d | [
"MIT"
] | 1 | 2021-05-22T18:06:49.000Z | 2021-05-22T18:06:49.000Z | AxesFrame.py | Toyuri453/RSSP-Python-demo | 0adf92ad765b5a9334d7e2830611b98c8c4eb26d | [
"MIT"
] | null | null | null | AxesFrame.py | Toyuri453/RSSP-Python-demo | 0adf92ad765b5a9334d7e2830611b98c8c4eb26d | [
"MIT"
] | null | null | null | import Terminal
class Axes():
def __init__(self, weak_terminal : 'Terminal.CartesianPoint'):
# self._initiator_x = weak_terminal._x
# self._initiator_y = weak_terminal._y
self._initiator = Terminal.CartesianPoint(0.0, 0.0, "UWB", "initiator")
self._weak_terminal = weak_terminal
self._terminal_set = {self._initiator._terminal_name : self._initiator, self._weak_terminal._terminal_name : self._weak_terminal}
self._terminal_measuring_point_set = {'Set' : {}} #Fill Later
print(self._terminal_set)
def add_terminal(self, terminal : 'Terminal.CartesianPoint'):
print("[DATA] Add Terminal {0} ".format(terminal))
self._terminal_set[terminal._terminal_name] = terminal
def show_terminal_names(self):
for key in self._terminal_set:
print("[DATA] Terminal Name: {0}, Color: {1}".format(key, self._terminal_set[key]._terminal_color)) | 50.052632 | 138 | 0.681388 | 114 | 951 | 5.263158 | 0.27193 | 0.14 | 0.125 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009223 | 0.201893 | 951 | 19 | 139 | 50.052632 | 0.781291 | 0.087277 | 0 | 0 | 0 | 0 | 0.144038 | 0.054309 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.357143 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31be5bcba5067c3d0f88dba211c9dc9337d0bf13 | 2,560 | py | Python | src/Cogs/InfoCog.py | kodyVS/Discord-Bot-Development | 389bf69871adbe289f162ddbeeaf681023ca1f02 | [
"MIT"
] | 5 | 2020-05-27T20:03:45.000Z | 2020-06-24T11:27:26.000Z | src/Cogs/InfoCog.py | kodyVS/Discord-Bot-Development | 389bf69871adbe289f162ddbeeaf681023ca1f02 | [
"MIT"
] | 11 | 2020-05-28T10:56:26.000Z | 2020-07-02T13:38:02.000Z | src/Cogs/InfoCog.py | kodyVS/Discord-Bot-Development | 389bf69871adbe289f162ddbeeaf681023ca1f02 | [
"MIT"
] | 3 | 2020-05-28T20:31:02.000Z | 2020-06-17T23:51:51.000Z | from discord.ext import commands
import discord
import requests
from bs4 import BeautifulSoup
# work in progress! more languages welcome!
class InfoCog(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command(name = 'docs', brief = 'programming language documentation', description = 'documentation for languages, access by calling `.docs <language> <query>`', aliases = ['documentation', 'info'])
async def docs(self, ctx, language: str, query):
# access docs based on language
if language == 'python' or language == 'python3':
full_link = 'https://docs.python.org/3/genindex-all.html'
page = requests.get(full_link).content
soup = BeautifulSoup(page, 'html.parser')
link_descriptions = []
for link in soup.findAll('a'):
if query in link.contents[0]:
link_descriptions.append(f"[{link.contents[0]}](https://docs.python.org/3/{link['href']})")
link_descriptions = list(dict.fromkeys(link_descriptions))
link_descriptions = link_descriptions[:10]
### TODO: multi-lingual docs support (devdocs.io?)
### TODO: faster searching (current 4-5 secs)
### TODO: filter results -> currently only pick top ten, and there are some odd results as well
embed = discord.Embed(title="Python 3 Docs", color = 0x00ff00)
embed.add_field(name=f'{len(link_descriptions)} results found for `{query}` :', value='\n'.join(
link_descriptions), inline=False)
embed.set_thumbnail(url=
'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/240px-Python-logo-notext.svg.png')
await ctx.send(embed=embed)
@commands.command(name='github', brief = 'view top 10 daily github repos', description = 'see the names and descriptions of the top x github repos today with `.github x` (default 10)', aliases=['gh'])
async def github(self, ctx, amount: int = 10):
'''Gets the GitHub first < amount > repositories without embeds'''
page = requests.get(
'https://github-trending-api.now.sh/repositories?q=sort=stars&order=desc&since=daily')
response = [
f"{entry['description']}: {'<' + entry['url'] + '>'}\n" for entry in page.json()[:amount]]
embed = discord.Embed(
title=f"**GitHub's top {str(amount)} today**", description='\n'.join(response), color=0x00ff00)
await ctx.send(embed=embed)
| 49.230769 | 210 | 0.632031 | 316 | 2,560 | 5.06962 | 0.506329 | 0.0799 | 0.02372 | 0.022472 | 0.051186 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01576 | 0.231641 | 2,560 | 51 | 211 | 50.196078 | 0.798678 | 0.098438 | 0 | 0.058824 | 0 | 0.117647 | 0.333184 | 0.021076 | 0 | 0 | 0.007175 | 0.019608 | 0 | 1 | 0.029412 | false | 0 | 0.117647 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31c6e6ace01eea05877a86d1f6316d5a911da292 | 588 | py | Python | test/show-cifar10.py | tom01h/deep-learning-from-scratch | acb3c31976cd736b4abd21c3e8ab81c3bf0eb9bb | [
"MIT"
] | 3 | 2018-10-11T16:19:18.000Z | 2022-01-16T07:48:06.000Z | test/show-cifar10.py | tom01h/deep-learning-from-scratch | acb3c31976cd736b4abd21c3e8ab81c3bf0eb9bb | [
"MIT"
] | null | null | null | test/show-cifar10.py | tom01h/deep-learning-from-scratch | acb3c31976cd736b4abd21c3e8ab81c3bf0eb9bb | [
"MIT"
] | null | null | null | # coding: utf-8
import sys, os
sys.path.append(os.pardir) # 親ディレクトリのファイルをインポートするための設定
import numpy as np
from dataset.cifar10 import load_cifar10
from PIL import Image
np.set_printoptions(threshold=100)
(x_train, t_train), (x_test, t_test) = load_cifar10(flatten=False)
sample_image = x_test[0:100].reshape((10, 10, 3, 32, 32)).transpose((0, 3, 1, 4, 2)).reshape((320, 320, 3)) # 先頭100個をタイル状に並べ替える
Image.fromarray(np.uint8(sample_image*255)).save('sample.png')
print(t_test[0:100].reshape(10,10))
#pil_img = Image.fromarray(np.uint8(sample_image*255))
#pil_img.show()
| 34.588235 | 128 | 0.727891 | 96 | 588 | 4.3125 | 0.510417 | 0.07971 | 0.038647 | 0.072464 | 0.26087 | 0.26087 | 0.169082 | 0 | 0 | 0 | 0 | 0.104046 | 0.117347 | 588 | 16 | 129 | 36.75 | 0.693642 | 0.210884 | 0 | 0 | 0 | 0 | 0.022624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
31c7910d7253d24e22e70937e36be79e678386eb | 10,533 | py | Python | PWWS/fool.py | ForeverZyh/ASCC | 2d76d679889953501c469221a37d486e7ee42ded | [
"MIT"
] | 21 | 2021-03-22T07:14:29.000Z | 2022-03-24T02:05:25.000Z | PWWS/fool.py | ForeverZyh/ASCC | 2d76d679889953501c469221a37d486e7ee42ded | [
"MIT"
] | 2 | 2021-04-07T11:31:01.000Z | 2022-01-10T03:41:10.000Z | PWWS/fool.py | ForeverZyh/ASCC | 2d76d679889953501c469221a37d486e7ee42ded | [
"MIT"
] | 4 | 2021-05-05T18:44:13.000Z | 2021-07-29T03:09:50.000Z | # coding: utf-8
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import sys
import argparse
import os
import numpy as np
from read_files import split_imdb_files, split_yahoo_files, split_agnews_files
from word_level_process import word_process, get_tokenizer
from char_level_process import char_process
from neural_networks import word_cnn, char_cnn, bd_lstm, lstm
from adversarial_tools import ForwardGradWrapper, adversarial_paraphrase
import tensorflow as tf
from keras import backend as K
import time
from unbuffered import Unbuffered
sys.stdout = Unbuffered(sys.stdout)
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
# os.environ["CUDA_VISIBLE_DEVICES"] = "1"
parser = argparse.ArgumentParser(
description='Craft adversarial examples for a text classifier.')
parser.add_argument('--clean_samples_cap',
help='Amount of clean(test) samples to fool',
type=int, default=1000)
parser.add_argument('-m', '--model',
help='The model of text classifier',
choices=['word_cnn', 'char_cnn', 'word_lstm', 'word_bdlstm'],
default='word_cnn')
parser.add_argument('-d', '--dataset',
help='Data set',
choices=['imdb', 'agnews', 'yahoo'],
default='imdb')
parser.add_argument('-l', '--level',
help='The level of process dataset',
choices=['word', 'char'],
default='word')
def write_origin_input_texts(origin_input_texts_path, test_texts, test_samples_cap=None):
if test_samples_cap is None:
test_samples_cap = len(test_texts)
with open(origin_input_texts_path, 'a') as f:
for i in range(test_samples_cap):
f.write(test_texts[i] + '\n')
def fool_text_classifier():
clean_samples_cap = args.clean_samples_cap # 1000
print('clean_samples_cap:', clean_samples_cap)
# get tokenizer
dataset = args.dataset
tokenizer = get_tokenizer(opt)
# Read data set
x_test = y_test = None
test_texts = None
if dataset == 'imdb':
train_texts, train_labels, dev_texts, dev_labels, test_texts, test_labels = split_imdb_files(opt)
if args.level == 'word':
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif args.level == 'char':
x_train, y_train, x_test, y_test = char_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'agnews':
train_texts, train_labels, test_texts, test_labels = split_agnews_files()
if args.level == 'word':
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif args.level == 'char':
x_train, y_train, x_test, y_test = char_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'yahoo':
train_texts, train_labels, test_texts, test_labels = split_yahoo_files()
if args.level == 'word':
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif args.level == 'char':
x_train, y_train, x_test, y_test = char_process(train_texts, train_labels, test_texts, test_labels, dataset)
# Write clean examples into a txt file
clean_texts_path = r'./fool_result/{}/clean_{}.txt'.format(dataset, str(clean_samples_cap))
if not os.path.isfile(clean_texts_path):
write_origin_input_texts(clean_texts_path, test_texts)
# Select the model and load the trained weights
assert args.model[:4] == args.level
model = None
if args.model == "word_cnn":
model = word_cnn(dataset)
elif args.model == "word_bdlstm":
model = bd_lstm(dataset)
elif args.model == "char_cnn":
model = char_cnn(dataset)
elif args.model == "word_lstm":
model = lstm(dataset)
model_path = r'./runs/{}/{}.dat'.format(dataset, args.model)
model.load_weights(model_path)
print('model path:', model_path)
# evaluate classification accuracy of model on clean samples
scores_origin = model.evaluate(x_test[:clean_samples_cap], y_test[:clean_samples_cap])
print('clean samples origin test_loss: %f, accuracy: %f' % (scores_origin[0], scores_origin[1]))
all_scores_origin = model.evaluate(x_test, y_test)
print('all origin test_loss: %f, accuracy: %f' % (all_scores_origin[0], all_scores_origin[1]))
grad_guide = ForwardGradWrapper(model)
classes_prediction = grad_guide.predict_classes(x_test[: clean_samples_cap])
print('Crafting adversarial examples...')
successful_perturbations = 0
failed_perturbations = 0
sub_rate_list = []
NE_rate_list = []
start_cpu = time.clock()
adv_text_path = r'./fool_result/{}/{}/adv_{}.txt'.format(dataset, args.model, str(clean_samples_cap))
change_tuple_path = r'./fool_result/{}/{}/change_tuple_{}.txt'.format(dataset, args.model, str(clean_samples_cap))
file_1 = open(adv_text_path, "a")
file_2 = open(change_tuple_path, "a")
for index, text in enumerate(test_texts[: clean_samples_cap]):
sub_rate = 0
NE_rate = 0
if np.argmax(y_test[index]) == classes_prediction[index]:
# If the ground_true label is the same as the predicted label
adv_doc, adv_y, sub_rate, NE_rate, change_tuple_list = adversarial_paraphrase(input_text=text,
true_y=np.argmax(y_test[index]),
grad_guide=grad_guide,
tokenizer=tokenizer,
dataset=dataset,
level=args.level)
if adv_y != np.argmax(y_test[index]):
successful_perturbations += 1
print('{}. Successful example crafted.'.format(index))
else:
failed_perturbations += 1
print('{}. Failure.'.format(index))
text = adv_doc
sub_rate_list.append(sub_rate)
NE_rate_list.append(NE_rate)
file_2.write(str(index) + str(change_tuple_list) + '\n')
file_1.write(text + " sub_rate: " + str(sub_rate) + "; NE_rate: " + str(NE_rate) + "\n")
end_cpu = time.clock()
print('CPU second:', end_cpu - start_cpu)
mean_sub_rate = sum(sub_rate_list) / len(sub_rate_list)
mean_NE_rate = sum(NE_rate_list) / len(NE_rate_list)
print('mean substitution rate:', mean_sub_rate)
print('mean NE rate:', mean_NE_rate)
file_1.close()
file_2.close()
def fool_text_classifier_pytorch(model, dataset='imdb'):
clean_samples_cap = 100
print('clean_samples_cap:', clean_samples_cap)
# get tokenizer
tokenizer = get_tokenizer(opt)
# Read data set
x_test = y_test = None
test_texts = None
if dataset == 'imdb':
train_texts, train_labels, dev_texts, dev_labels, test_texts, test_labels = split_imdb_files(opt)
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'agnews':
train_texts, train_labels, test_texts, test_labels = split_agnews_files()
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'yahoo':
train_texts, train_labels, test_texts, test_labels = split_yahoo_files()
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
grad_guide = ForwardGradWrapper_pytorch(model)
classes_prediction = grad_guide.predict_classes(x_test[: clean_samples_cap])
print('Crafting adversarial examples...')
successful_perturbations = 0
failed_perturbations = 0
sub_rate_list = []
NE_rate_list = []
start_cpu = time.clock()
adv_text_path = r'./fool_result/{}/adv_{}.txt'.format(dataset, str(clean_samples_cap))
change_tuple_path = r'./fool_result/{}/change_tuple_{}.txt'.format(dataset, str(clean_samples_cap))
file_1 = open(adv_text_path, "a")
file_2 = open(change_tuple_path, "a")
for index, text in enumerate(test_texts[: clean_samples_cap]):
sub_rate = 0
NE_rate = 0
if np.argmax(y_test[index]) == classes_prediction[index]:
# If the ground_true label is the same as the predicted label
adv_doc, adv_y, sub_rate, NE_rate, change_tuple_list = adversarial_paraphrase(input_text=text,
true_y=np.argmax(y_test[index]),
grad_guide=grad_guide,
tokenizer=tokenizer,
dataset=dataset,
level='word')
if adv_y != np.argmax(y_test[index]):
successful_perturbations += 1
print('{}. Successful example crafted.'.format(index))
else:
failed_perturbations += 1
print('{}. Failure.'.format(index))
text = adv_doc
sub_rate_list.append(sub_rate)
NE_rate_list.append(NE_rate)
file_2.write(str(index) + str(change_tuple_list) + '\n')
file_1.write(text + " sub_rate: " + str(sub_rate) + "; NE_rate: " + str(NE_rate) + "\n")
end_cpu = time.clock()
print('CPU second:', end_cpu - start_cpu)
mean_sub_rate = sum(sub_rate_list) / len(sub_rate_list)
mean_NE_rate = sum(NE_rate_list) / len(NE_rate_list)
print('mean substitution rate:', mean_sub_rate)
print('mean NE rate:', mean_NE_rate)
file_1.close()
file_2.close()
if __name__ == '__main__':
args = parser.parse_args()
fool_text_classifier()
| 46.606195 | 122 | 0.619102 | 1,334 | 10,533 | 4.541979 | 0.142429 | 0.023766 | 0.047037 | 0.051989 | 0.66199 | 0.655554 | 0.628817 | 0.623205 | 0.623205 | 0.60472 | 0 | 0.005531 | 0.279123 | 10,533 | 225 | 123 | 46.813333 | 0.79244 | 0.035792 | 0 | 0.579787 | 0 | 0 | 0.097801 | 0.015873 | 0 | 0 | 0 | 0 | 0.005319 | 1 | 0.015957 | false | 0 | 0.085106 | 0 | 0.101064 | 0.095745 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31d79e6d0a59cc3302d9155c1c4c15215d0a9e1b | 1,387 | py | Python | pygromos/tests/test_submission/test_hpc_queuing_submission_scheduling.py | pultar/PyGromosTools | 3c104c560c2e654972a036e2060b120ade96f655 | [
"MIT"
] | 13 | 2021-03-17T09:29:37.000Z | 2022-01-14T20:42:16.000Z | pygromos/tests/test_submission/test_hpc_queuing_submission_scheduling.py | pultar/PyGromosTools | 3c104c560c2e654972a036e2060b120ade96f655 | [
"MIT"
] | 185 | 2021-03-03T14:24:55.000Z | 2022-03-31T18:39:29.000Z | pygromos/tests/test_submission/test_hpc_queuing_submission_scheduling.py | pultar/PyGromosTools | 3c104c560c2e654972a036e2060b120ade96f655 | [
"MIT"
] | 13 | 2021-03-03T14:18:06.000Z | 2022-02-17T09:48:55.000Z | import unittest, tempfile
from pygromos.simulations.hpc_queuing.job_scheduling.schedulers import simulation_scheduler
from pygromos.data.simulation_parameters_templates import template_md
from pygromos.data.topology_templates import blank_topo_template
from pygromos.simulations.hpc_queuing.submission_systems import DUMMY
from pygromos.files.gromos_system.gromos_system import Gromos_System
from pygromos.tests.in_testfiles import in_test_file_path
from pygromos.tests.test_files import out_test_root_dir
class test_MD_scheduler(unittest.TestCase):
submissionSystem = DUMMY
def setUp(self) -> None:
self.tmp_test_dir = tempfile.mkdtemp(dir=out_test_root_dir, prefix="scheduling_Dummy_")
def test_do(self):
in_cnf = in_test_file_path+"/cnf/in_cnf1.cnf"
out_dir_path = self.tmp_test_dir
in_simSystem = Gromos_System(system_name="test_do", work_folder=out_dir_path,
in_top_path=blank_topo_template, in_cnf_path=in_cnf, in_imd_path=template_md,
in_gromosXX_bin_dir=None, in_gromosPP_bin_dir=None)
submission_system = self.submissionSystem()
simulation_scheduler.do(in_simSystem=in_simSystem, out_dir_path=out_dir_path,
submission_system=submission_system,
simulation_run_num=2, verbose= True)
| 46.233333 | 114 | 0.746215 | 184 | 1,387 | 5.211957 | 0.342391 | 0.087591 | 0.04171 | 0.054223 | 0.068822 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001786 | 0.192502 | 1,387 | 29 | 115 | 47.827586 | 0.854464 | 0 | 0 | 0 | 0 | 0 | 0.028839 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
31dbeeeb585ae91b3ec528faf0591108ed8cc73b | 848 | py | Python | hear_me_django_app/accounts/management/commands/initial_users.py | kamil1marczak/hear_me_django_app | 2a567c15acddbf6bf183c6c637a3785c2a9c9c5c | [
"MIT"
] | null | null | null | hear_me_django_app/accounts/management/commands/initial_users.py | kamil1marczak/hear_me_django_app | 2a567c15acddbf6bf183c6c637a3785c2a9c9c5c | [
"MIT"
] | null | null | null | hear_me_django_app/accounts/management/commands/initial_users.py | kamil1marczak/hear_me_django_app | 2a567c15acddbf6bf183c6c637a3785c2a9c9c5c | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.contrib.auth.hashers import make_password
from django.core.management.base import BaseCommand
from ._private import populate_user
User = get_user_model()
class Command(BaseCommand):
help = 'admin deployment'
def add_arguments(self, parser):
parser.add_argument('total', type=int, help='Indicates the number of users to be created')
def handle(self, *args, **kwargs):
total = kwargs['total']
populate_user(number=total)
obj, created = User.objects.get_or_create(name="root", password=make_password('Kamil100!'), is_superuser=True)
message = "Successfully populated database with initial users"
if created:
message += f" Superuser {obj.name} ha been created"
self.stdout.write(self.style.SUCCESS(message))
| 36.869565 | 118 | 0.714623 | 110 | 848 | 5.381818 | 0.6 | 0.050676 | 0.057432 | 0.070946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004335 | 0.183962 | 848 | 22 | 119 | 38.545455 | 0.851156 | 0 | 0 | 0 | 0 | 0 | 0.199292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.117647 | 0.235294 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
31ea1b716a1b8a3e2fc957132ac8497e9ccd0dcb | 10,826 | py | Python | 2015/day7/2015-day7-part2.py | matt-the-ogre/advent-of-code | 7188089d4db4a99fa09ef8366137fe28d1c28205 | [
"MIT"
] | 1 | 2021-12-03T18:17:54.000Z | 2021-12-03T18:17:54.000Z | 2015/day7/2015-day7-part2.py | matt-the-ogre/advent-of-code | 7188089d4db4a99fa09ef8366137fe28d1c28205 | [
"MIT"
] | null | null | null | 2015/day7/2015-day7-part2.py | matt-the-ogre/advent-of-code | 7188089d4db4a99fa09ef8366137fe28d1c28205 | [
"MIT"
] | null | null | null | # Advent of Code - 2015 - Day 7
# --- Day 7: Some Assembly Required ---
# This year, Santa brought little Bobby Tables a set of wires and bitwise logic gates! Unfortunately, little Bobby is a little under the recommended age range, and he needs help assembling the circuit.
# Each wire has an identifier (some lowercase letters) and can carry a 16-bit signal (a number from 0 to 65535). A signal is provided to each wire by a gate, another wire, or some specific value. Each wire can only get a signal from one source, but can provide its signal to multiple destinations. A gate provides no signal until all of its inputs have a signal.
# The included instructions booklet describes how to connect the parts together: x AND y -> z means to connect wires x and y to an AND gate, and then connect its output to wire z.
# For example:
# 123 -> x means that the signal 123 is provided to wire x.
# x AND y -> z means that the bitwise AND of wire x and wire y is provided to wire z.
# p LSHIFT 2 -> q means that the value from wire p is left-shifted by 2 and then provided to wire q.
# NOT e -> f means that the bitwise complement of the value from wire e is provided to wire f.
# Other possible gates include OR (bitwise OR) and RSHIFT (right-shift). If, for some reason, you'd like to emulate the circuit instead, almost all programming languages (for example, C, JavaScript, or Python) provide operators for these gates.
# For example, here is a simple circuit:
# 123 -> x
# 456 -> y
# x AND y -> d
# x OR y -> e
# x LSHIFT 2 -> f
# y RSHIFT 2 -> g
# NOT x -> h
# NOT y -> i
# After it is run, these are the signals on the wires:
# d: 72
# e: 507
# f: 492
# g: 114
# h: 65412
# i: 65079
# x: 123
# y: 456
# In little Bobby's kit's instructions booklet (provided as your puzzle input), what signal is ultimately provided to wire a?
import time, math
def createCircuitDict():
global circuitStrings
global circuitDict
# this function takes the string as input (circuitStrings) and converts them (parses them) into a dictionary (circuitDict)
for circuitLine in circuitStrings:
# the string "->" is the delimeter (sp?) between the left side (input) and the wire name (dictionary key)
leftSide = circuitLine[0 : circuitLine.find("->") - 1]
# if debug:
# print("leftSide:", leftSide)
rightSide = circuitLine[circuitLine.find("->") + 3 : ]
# if debug:
# print("rightSide:", rightSide)
# we set the outputValue to nan (not a number) as a way of checking if we have successfully evaluated the wires inputs or not: default = nan, not evaluated
outputValue = math.nan
# check for numeric input string -- this is easy, just make it the output
if leftSide.isnumeric():
leftSide = int(leftSide)
outputValue = leftSide # simple -- the input to this wire is also it's output
# check for duplicate wire names (dictionary keys) in the input string
if circuitDict.get(rightSide) != None:
print("Weird... dictionary key ", rightSide, "already exists. This shouldn't happen.")
circuitDict[rightSide] = {"input" : leftSide, "output" : outputValue}
def evaluateInput(circuit, operator):
global circuitDict
# if debug:
# print(circuit, operator)
# check left argument for circuit name or number
inputWire1 = circuitDict[circuit]["input"][: circuitDict[circuit]["input"].find(operator) - 1]
inputWire2 = circuitDict[circuit]["input"][circuitDict[circuit]["input"].find(operator) + len(operator) + 1 : ]
# if debug:
# print(circuit, "=", inputWire1, operator, inputWire2)
# look up the output of the inputWire
if inputWire1.isnumeric():
input1 = int(inputWire1)
else:
input1 = circuitDict[inputWire1]["output"]
if inputWire2.isnumeric():
input2 = int(inputWire2)
else:
input2 = circuitDict[inputWire2]["output"]
if math.isnan(input1):
# print("input wire 1 isn't calculated yet")
pass
elif math.isnan(input2):
# print("input wire 2 isn't calculated yet")
pass
else:
# do the bitwise complement on the input number and assign it to the output of this wire
if operator == "AND":
circuitDict[circuit]["output"] = input1 & input2
elif operator == "OR":
circuitDict[circuit]["output"] = input1 | input2
elif operator == "LSHIFT":
circuitDict[circuit]["output"] = input1 << input2
elif operator == "RSHIFT":
circuitDict[circuit]["output"] = input1 >> input2
else:
print("Unknown operator", operator)
# check for rollunder 0
# this occurs because we are using a signed integer for what should be an unsigned 16-bit integer
# TODO figure out if Python has an unsigned 16-bit integer type
if circuitDict[circuit]["output"] < 0:
# if debug:
# print("result under zero, fix it")
circuitDict[circuit]["output"] = 65535 + circuitDict[circuit]["output"]
def doConnection():
global circuitDict
unfinishedCount = len(circuitDict)
lowCount = unfinishedCount
while unfinishedCount:
unfinishedCount = len(circuitDict)
if debug:
print("lowCount", lowCount)
for circuit in circuitDict:
# if the output is not a number, evaluate the input
if math.isnan(circuitDict[circuit]["output"]):
# parse the left side
# we can have NOT, AND, OR, LSHIFT, and RSHIFT as possible commands
if "NOT" in circuitDict[circuit]["input"]:
# operation is logical NOT, invert the input line to be the output
inputWire1 = circuitDict[circuit]["input"][circuitDict[circuit]["input"].find("NOT")+4 : ]
# if debug:
# print(circuit, "= NOT", inputWire1)
# look up the output of the inputWire
if inputWire1.isnumeric():
input1 = int(inputWire1)
else:
input1 = circuitDict[inputWire1]["output"]
if math.isnan(input1):
# print("input wire isn't calculated yet")
pass
else:
# do the bitwise complement on the input number and assign it to the output of this wire
circuitDict[circuit]["output"] = ~input1
# check for rollunder 0
if circuitDict[circuit]["output"] < 0:
# if debug:
# print("result under zero, fix it")
circuitDict[circuit]["output"] = 65536 + circuitDict[circuit]["output"]
elif "AND" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "AND")
elif "OR" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "OR")
elif "LSHIFT" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "LSHIFT")
elif "RSHIFT" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "RSHIFT")
else:
# simplest case -- one input only!
# copy the input wire
# this could be improved by doing it only if the inputWire is resolved
inputWire1 = circuitDict[circuit]["input"]
if debug:
print("simplest case circuit", circuit, " inputWire", inputWire1)
circuitDict[circuit]["output"] = circuitDict[inputWire1]["output"]
else:
# this circuit is done, move on
# if debug:
# print("circuit",circuit,"is done with output ", circuitDict[circuit]["output"], "Break.")
pass
if math.isnan(circuitDict[circuit]["output"]) is False:
# this output is calculated, decrement the unfinished counter
unfinishedCount -= 1
if unfinishedCount < lowCount:
lowCount = unfinishedCount
# if debug:
# print("unfinishedCount", unfinishedCount)
startTime = time.perf_counter() # time in seconds (float)
debug = False
timing = True
unitTesting = False
# maybe a dictionary again?
# circuitStrings = {"a" : {"input" : 1, "output" : NaN}}
# parse the input text file to set up the circuitStrings inputs, then just roll through the dictionary to calculate the outputs
# how will I be sure that the output has been calculated to be the input for the next circuitStrings?
# can I assume the input file is "in order"? Probably not.
# does this mean some sort of recursion algorithm?
# maybe if I populate the outputs with 'NaN' (or Python equivalent) then check that it's not that before using it's output
# I can make it recurse through the inputs, calculating any that have fully realized inputs?
circuitStrings = []
circuitDict = {}
# unit tests, kind of
if unitTesting:
print("Unit Testing")
circuitStrings = ["123 -> x","456 -> y", "x AND y -> d", "x OR y -> e", "x LSHIFT 2 -> f", "y RSHIFT 2 -> g", "NOT x -> h", "NOT y -> i"]
else:
# read the input text file into a variable called presents
with open("2015/day7/input-part2.txt","r") as inputString:
circuitStrings = inputString.readlines()
# remove newlines
for i in range(0, len(circuitStrings)):
circuitStrings[i] = circuitStrings[i].rstrip()
# parse the input to create the dictionary
createCircuitDict()
doConnection()
# show the circuits
if debug:
for circuit in circuitDict:
print(circuit,":",circuitDict[circuit])
if unitTesting:
testPass = False
testPassOutput = {"d": {"output" : 72}, "e": {"output" : 507}, "f": {"output" : 492}, "g": {"output" : 114}, "h": {"output" : 65412}, "i": {"output" : 65079}, "x": {"output" : 123}, "y": {"output" : 456}}
for wire in testPassOutput:
testPassWire = testPassOutput[wire]["output"]
circuitWire = circuitDict[wire]["output"]
if debug:
print("wire", wire, "test:", testPassWire, "calc:", circuitWire)
testPass = testPassWire == circuitWire
if testPass is False:
break
print("testPass:", testPass)
else:
print(circuitDict["a"]["output"])
# this answer for my input is 46065 (part 1), 14134 (part 2)
endTime = time.perf_counter() # time in seconds (float)
if timing:
print("Execution took ", endTime - startTime, " seconds.")
| 42.289063 | 362 | 0.608627 | 1,321 | 10,826 | 4.986374 | 0.258138 | 0.076514 | 0.054653 | 0.022772 | 0.227873 | 0.2092 | 0.171246 | 0.139365 | 0.101108 | 0.101108 | 0 | 0.022882 | 0.293553 | 10,826 | 255 | 363 | 42.454902 | 0.838389 | 0.440052 | 0 | 0.325 | 0 | 0 | 0.101692 | 0.004188 | 0 | 0 | 0 | 0.003922 | 0 | 1 | 0.025 | false | 0.1 | 0.008333 | 0 | 0.033333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
31ee781effe2a319a7f8d1c8b7b12faf33878337 | 1,846 | py | Python | tests/dgds_functions_test.py | openearth/hydro-engine-service | 8e7eea489ee241dad2d6d8152d1c30af8a09a8d1 | [
"MIT"
] | 4 | 2019-02-15T13:53:01.000Z | 2021-12-13T09:53:02.000Z | tests/dgds_functions_test.py | openearth/hydro-engine-service | 8e7eea489ee241dad2d6d8152d1c30af8a09a8d1 | [
"MIT"
] | 12 | 2018-12-19T08:30:29.000Z | 2021-04-21T12:59:59.000Z | tests/dgds_functions_test.py | openearth/hydro-engine-service | 8e7eea489ee241dad2d6d8152d1c30af8a09a8d1 | [
"MIT"
] | 4 | 2018-10-17T23:48:21.000Z | 2020-08-05T18:36:14.000Z | import logging
import pytest
from . import auth
from hydroengine_service import dgds_functions
logger = logging.getLogger(__name__)
class TestDGDSFunctions:
@pytest.mark.parametrize('source, start_date, end_date, limit',
[
('projects/dgds-gee/bathymetry/gebco/2019', None, None, 10),
('projects/dgds-gee/glossis/currents', None, None, None),
('projects/dgds-gee/glossis/waterlevel', '2020-11-01', '2020-12-01', None),
('projects/dgds-gee/glossis/wind', '2020-11-01', '2020-11-10', 10),
('projects/dgds-gee/glossis/waveheight', None, None, None),
('projects/dgds-gee/gloffis/weather', None, None, 5),
('projects/dgds-gee/gloffis/hydro', None, None, 5),
('projects/dgds-gee/metocean/waves/percentiles', None, None, 5),
('projects/dgds-gee/chasm/waves', None, None, None),
('projects/dgds-gee/chasm/wind', None, None, None),
('projects/dgds-gee/crucial/evaporation_deficit', None, None, None),
('projects/dgds-gee/crucial/groundwater_declining_trend', None, None, None),
('projects/dgds-gee/msfd/chlorophyll', None, None, None)
])
def test_get_image_collection_info(self, source, start_date, end_date, limit):
image_date_list = dgds_functions.get_image_collection_info(source, start_date, end_date, limit)
assert len(image_date_list) >= 1
assert "imageId" in image_date_list[0]
assert "date" in image_date_list[0]
| 51.277778 | 109 | 0.538462 | 193 | 1,846 | 4.989637 | 0.34715 | 0.149533 | 0.202492 | 0.13811 | 0.458982 | 0.341641 | 0.070613 | 0 | 0 | 0 | 0 | 0.037954 | 0.343445 | 1,846 | 35 | 110 | 52.742857 | 0.756601 | 0 | 0 | 0 | 0 | 0 | 0.302275 | 0.255688 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.037037 | false | 0 | 0.148148 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ec42ebdeb8c357fae82c9abfd68ebde784ec5ba | 1,280 | py | Python | TeamClassificationUtils.py | Neerajj9/Computer-Vision-based-Offside-Detection-in-soccer | 744bfc636463f24c4f78f25684864c2ce4abb43f | [
"MIT"
] | 8 | 2020-10-17T14:54:53.000Z | 2022-02-09T11:03:01.000Z | TeamClassificationUtils.py | Neerajj9/Computer-Vision-based-Offside-Detection-in-soccer | 744bfc636463f24c4f78f25684864c2ce4abb43f | [
"MIT"
] | 4 | 2021-01-03T16:02:29.000Z | 2021-11-23T03:26:01.000Z | TeamClassificationUtils.py | Neerajj9/Computer-Vision-based-Offside-Detection-in-soccer | 744bfc636463f24c4f78f25684864c2ce4abb43f | [
"MIT"
] | 2 | 2021-04-10T07:05:55.000Z | 2021-09-19T23:22:18.000Z | import numpy as np
# TODO : add code for referee
def get_team_classifications(teamColor1, teamColor2, refColor, keeper1Color, keeper2Color, pose_estimations):
for pose in pose_estimations:
if len(pose[1]) < 2:
pose.append('color not found')
continue
colorDiffs = {}
colorList = np.array(pose[1][0]) + np.array(pose[1][1])
colorList = np.divide(colorList, 2)
colorList = colorList.tolist()
diffTeam1 = list(abs(np.array(teamColor1) - np.array(colorList)))
colorDiffs['team1'] = diffTeam1
diffTeam2 = list(abs(np.array(teamColor2) - np.array(colorList)))
colorDiffs['team2'] = diffTeam2
diffRef = list(abs(np.array(refColor) - np.array(colorList)))
colorDiffs['ref'] = diffRef
diffKeep1 = list(abs(np.array(refColor) - np.array(colorList)))
colorDiffs['keep1'] = diffKeep1
diffKeep2 = list(abs(np.array(refColor) - np.array(colorList)))
colorDiffs['keep2'] = diffKeep2
for key in colorDiffs.keys():
colorDiffs[key] = sum(colorDiffs[key]) / len(colorDiffs[key])
colorDiffs = {k: v for k, v in sorted(colorDiffs.items(), key=lambda item: item[1])}
for key in colorDiffs.keys():
pose.append(key)
break
return pose_estimations | 33.684211 | 109 | 0.651563 | 156 | 1,280 | 5.314103 | 0.378205 | 0.101327 | 0.054282 | 0.084439 | 0.226779 | 0.173703 | 0.173703 | 0.173703 | 0.173703 | 0 | 0 | 0.025819 | 0.213281 | 1,280 | 38 | 110 | 33.684211 | 0.797418 | 0.021094 | 0 | 0.074074 | 0 | 0 | 0.030351 | 0 | 0 | 0 | 0 | 0.026316 | 0 | 1 | 0.037037 | false | 0 | 0.037037 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9eca8cb06280c8af6786e7a410286dc58b44dac0 | 5,734 | py | Python | src/gt4sd/algorithms/generation/polymer_blocks/core.py | hhhsu0825/gt4sd-core | 4a1fe9da58d2f33bba2fba64604427e037ad7a46 | [
"MIT"
] | null | null | null | src/gt4sd/algorithms/generation/polymer_blocks/core.py | hhhsu0825/gt4sd-core | 4a1fe9da58d2f33bba2fba64604427e037ad7a46 | [
"MIT"
] | null | null | null | src/gt4sd/algorithms/generation/polymer_blocks/core.py | hhhsu0825/gt4sd-core | 4a1fe9da58d2f33bba2fba64604427e037ad7a46 | [
"MIT"
] | null | null | null | """PaccMann vanilla generator trained on polymer building blocks (catalysts/monomers)."""
import logging
import os
from dataclasses import field
from typing import ClassVar, Dict, Optional, TypeVar
from ....domains.materials import SmallMolecule, validate_molecules
from ....exceptions import InvalidItem
from ....training_pipelines.core import TrainingPipelineArguments
from ....training_pipelines.paccmann.core import PaccMannSavingArguments
from ...core import AlgorithmConfiguration, GeneratorAlgorithm, Untargeted
from ...registry import ApplicationsRegistry
from .implementation import Generator
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
T = type(None)
S = TypeVar("S", bound=SmallMolecule)
class PolymerBlocks(GeneratorAlgorithm[S, T]):
def __init__(
self, configuration: AlgorithmConfiguration, target: Optional[T] = None
):
"""Polymer blocks generation.
Args:
configuration: domain and application
specification, defining types and validations.
target: unused since it is not a conditional generator.
Example:
An example for generating small molecules (SMILES) that resembles
monomers/catalysts for polymer synthesis::
configuration = PolymerBlocksGenerator()
polymer_blocks = PolymerBlocks(configuration=configuration)
items = list(polymer_blocks.sample(10))
print(items)
"""
configuration = self.validate_configuration(configuration)
# TODO there might also be a validation/check on the target input
super().__init__(
configuration=configuration,
target=None, # type:ignore
)
def get_generator(
self,
configuration: AlgorithmConfiguration[S, T],
target: Optional[T],
) -> Untargeted:
"""Get the function to sample batches via the Generator.
Args:
configuration: helps to set up the application.
target: context or condition for the generation. Unused in the algorithm.
Returns:
callable generating a batch of items.
"""
logger.info("ensure artifacts for the application are present.")
self.local_artifacts = configuration.ensure_artifacts()
implementation: Generator = configuration.get_conditional_generator( # type: ignore
self.local_artifacts
)
return implementation.sample
def validate_configuration(
self, configuration: AlgorithmConfiguration
) -> AlgorithmConfiguration:
# TODO raise InvalidAlgorithmConfiguration
assert isinstance(configuration, AlgorithmConfiguration)
return configuration
@ApplicationsRegistry.register_algorithm_application(PolymerBlocks)
class PolymerBlocksGenerator(AlgorithmConfiguration[SmallMolecule, None]):
"""Configuration to generate subunits of polymers."""
algorithm_type: ClassVar[str] = "generation"
domain: ClassVar[str] = "materials"
algorithm_version: str = "v0"
batch_size: int = field(
default=32,
metadata=dict(description="Batch size used for the generative model sampling."),
)
generated_length: int = field(
default=100,
metadata=dict(
description="Maximum length in tokens of the generated molcules (relates to the SMILES length)."
),
)
def get_target_description(self) -> Optional[Dict[str, str]]:
"""Get description of the target for generation.
Returns:
target description, returns None in case no target is used.
"""
return None
def get_conditional_generator(self, resources_path: str) -> Generator:
return Generator(
resources_path=resources_path,
generated_length=self.generated_length,
batch_size=self.batch_size,
)
def validate_item(self, item: str) -> SmallMolecule:
(
molecules,
_,
) = validate_molecules([item])
if molecules[0] is None:
raise InvalidItem(
title="InvalidSMILES",
detail=f'rdkit.Chem.MolFromSmiles returned None for "{item}"',
)
return SmallMolecule(item)
@classmethod
def get_filepath_mappings_for_training_pipeline_arguments(
cls, training_pipeline_arguments: TrainingPipelineArguments
) -> Dict[str, str]:
"""Ger filepath mappings for the given training pipeline arguments.
Args:
training_pipeline_arguments: training pipeline arguments.
Returns:
a mapping between artifacts' files and training pipeline's output files.
"""
if isinstance(training_pipeline_arguments, PaccMannSavingArguments):
return {
"smiles_language.pkl": os.path.join(
training_pipeline_arguments.model_path,
f"{training_pipeline_arguments.training_name}.lang",
),
"params.json": os.path.join(
training_pipeline_arguments.model_path,
training_pipeline_arguments.training_name,
"model_params.json",
),
"weights.pt": os.path.join(
training_pipeline_arguments.model_path,
training_pipeline_arguments.training_name,
"weights",
"best_rec.pt",
),
}
else:
return super().get_filepath_mappings_for_training_pipeline_arguments(
training_pipeline_arguments
)
| 35.614907 | 108 | 0.646495 | 536 | 5,734 | 6.755597 | 0.350746 | 0.06628 | 0.096658 | 0.045568 | 0.113781 | 0.103563 | 0.08285 | 0.05689 | 0.044739 | 0.044739 | 0 | 0.002181 | 0.280258 | 5,734 | 160 | 109 | 35.8375 | 0.875212 | 0.235612 | 0 | 0.09 | 0 | 0 | 0.094271 | 0.017404 | 0 | 0 | 0 | 0.0125 | 0.01 | 1 | 0.07 | false | 0 | 0.11 | 0.01 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9edfa90d3388411fff4970296751427f8a1b76b6 | 257 | py | Python | 2_UNIXCommands/Exercise11.py | takeyoshinitta/NLP-100-Exercise | e77fb385fbbf50c8a8bdc47442db1421739ea5b6 | [
"MIT"
] | 3 | 2022-01-04T19:02:22.000Z | 2022-02-21T08:52:18.000Z | 2_UNIXCommands/Exercise11.py | takeyoshinitta/NLP-100-Exercise | e77fb385fbbf50c8a8bdc47442db1421739ea5b6 | [
"MIT"
] | null | null | null | 2_UNIXCommands/Exercise11.py | takeyoshinitta/NLP-100-Exercise | e77fb385fbbf50c8a8bdc47442db1421739ea5b6 | [
"MIT"
] | null | null | null | # 11. Replace tabs into spaces
# Replace every occurrence of a tab character into a space. Confirm the result by using sed, tr, or expand command.
with open('popular-names.txt') as f:
for line in f:
print(line.strip().replace("\t", " "))
| 36.714286 | 116 | 0.66537 | 41 | 257 | 4.170732 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.22179 | 257 | 6 | 117 | 42.833333 | 0.845 | 0.552529 | 0 | 0 | 0 | 0 | 0.188679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ee57d6363120b9d54a9902e2243f9122d20af71 | 4,810 | py | Python | src/core/serializers.py | pradipta/back-end | 05895b051afc4c8e0cb17db708063d80102e9de5 | [
"MIT"
] | 17 | 2019-05-11T22:15:34.000Z | 2022-03-26T22:45:33.000Z | src/core/serializers.py | pradipta/back-end | 05895b051afc4c8e0cb17db708063d80102e9de5 | [
"MIT"
] | 390 | 2019-05-23T10:48:57.000Z | 2021-12-17T21:01:43.000Z | src/core/serializers.py | pradipta/back-end | 05895b051afc4c8e0cb17db708063d80102e9de5 | [
"MIT"
] | 40 | 2019-05-21T14:41:57.000Z | 2021-01-30T13:39:38.000Z | from django.contrib.auth import get_user_model
from rest_auth.registration.serializers import (
RegisterSerializer as BaseRegisterSerializer,
)
from rest_auth.registration.serializers import (
SocialLoginSerializer as BaseSocialLoginSerializer,
)
from rest_auth.serializers import LoginSerializer as BaseLoginSerializer
from rest_auth.serializers import (
PasswordResetConfirmSerializer as BasePasswordResetConfirmSerializer,
)
from rest_auth.serializers import UserDetailsSerializer as BaseUserDetailsSerializer
from rest_framework import serializers
from rest_framework.exceptions import ValidationError
from core.models import Profile
# noinspection PyAbstractClass
class LoginSerializer(BaseLoginSerializer):
"""
Extends the default LoginSerializer in order to return
custom error messages
"""
def validate(self, attrs):
try:
return super().validate(attrs)
except serializers.ValidationError as ex:
ex.detail = "The email or password you entered is incorrect!"
raise ex
# noinspection PyAbstractClass
class PasswordResetConfirmSerializer(BasePasswordResetConfirmSerializer):
"""
Extends the default PasswordResetConfirmSerializer in order to return
custom error messages
"""
def validate(self, attrs):
try:
return super().validate(attrs)
except serializers.ValidationError as ex:
if "new_password2" in ex.detail:
ex.detail = ex.detail["new_password2"][0]
else:
ex.detail = "Could not reset password. Reset token expired or invalid."
raise ex
# noinspection PyAbstractClass
class CustomSocialLoginSerializer(BaseSocialLoginSerializer):
"""
Extends default SocialLoginSerializer to add additional details to some
failed login attempts
"""
def validate(self, attrs):
try:
res = super().validate(attrs)
return res
except ValidationError as ex:
if "User is already registered with this e-mail address." in ex.detail:
ex.detail[0] = (
"User is already registered with this e-mail address. "
"Please login using the form above."
)
raise ex
# noinspection PyAbstractClass
class RegisterSerializer(BaseRegisterSerializer):
email = serializers.EmailField(required=True)
password = serializers.CharField(write_only=True)
first_name = serializers.CharField(write_only=True)
last_name = serializers.CharField(write_only=True)
# legacy compat
zip = serializers.CharField(write_only=True, required=False)
zipcode = serializers.CharField(write_only=True, required=False)
# Overrides the default required password fields
password1 = None
password2 = None
def get_cleaned_data(self):
return {
"username": self.validated_data.get("email", ""),
"email": self.validated_data.get("email", ""),
# allauth uses password1 internally for creation
"password1": self.validated_data.get("password", ""),
"first_name": self.validated_data.get("first_name", ""),
"last_name": self.validated_data.get("last_name", ""),
"zipcode": self.validated_data.get("zipcode", ""),
}
def validate(self, data):
return data
UserModel = get_user_model()
class ProfileSerializer(serializers.ModelSerializer):
class Meta:
model = Profile
fields = "__all__"
class UserDetailsSerializer(BaseUserDetailsSerializer):
profile = ProfileSerializer()
class Meta:
model = UserModel
fields = ("username", "email", "first_name", "last_name", "profile")
read_only_fields = ("email",)
def to_representation(self, instance: UserModel) -> dict:
"""Move fields from Profile to user representation."""
representation = super().to_representation(instance)
profile = representation.pop("profile")
representation["zipcode"] = profile["zipcode"]
representation["is_mentor"] = profile["is_mentor"]
return representation
class UserSerializer(BaseUserDetailsSerializer):
profile = ProfileSerializer()
class Meta:
model = UserModel
fields = ("username", "email", "first_name", "last_name", "profile")
read_only_fields = ("email",)
def to_representation(self, instance: UserModel) -> dict:
"""Move fields from Profile to user representation."""
representation = super().to_representation(instance)
profile = representation.pop("profile")
profile.pop("user")
for key, val in profile.items():
representation[key] = val
return representation
| 33.172414 | 88 | 0.677755 | 474 | 4,810 | 6.772152 | 0.278481 | 0.017445 | 0.031776 | 0.037383 | 0.482243 | 0.359502 | 0.310903 | 0.282243 | 0.282243 | 0.255452 | 0 | 0.002176 | 0.235551 | 4,810 | 144 | 89 | 33.402778 | 0.870819 | 0.121622 | 0 | 0.361702 | 0 | 0 | 0.122139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074468 | false | 0.106383 | 0.095745 | 0.021277 | 0.457447 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9ee68cd6efba5b094a83a85c60acb1031a826384 | 2,050 | py | Python | tools/docs/generate_api_rst.py | dcillera/envoy | cb54ba8eec26f768f8c1ae412113b07bacde7321 | [
"Apache-2.0"
] | 17,703 | 2017-09-14T18:23:43.000Z | 2022-03-31T22:04:17.000Z | tools/docs/generate_api_rst.py | dcillera/envoy | cb54ba8eec26f768f8c1ae412113b07bacde7321 | [
"Apache-2.0"
] | 15,957 | 2017-09-14T16:38:22.000Z | 2022-03-31T23:56:30.000Z | tools/docs/generate_api_rst.py | dcillera/envoy | cb54ba8eec26f768f8c1ae412113b07bacde7321 | [
"Apache-2.0"
] | 3,780 | 2017-09-14T18:58:47.000Z | 2022-03-31T17:10:47.000Z | import os
import shutil
import sys
import tarfile
def include_package(envoy_api_protos, rst_file_path, prefix):
# `envoy_api_rst_files` is a list of file paths for .proto.rst files
# generated by protodoc
#
# we are only interested in the proto files generated for envoy protos,
# not for non-envoy dependencies
if ("pkg/" + prefix) not in rst_file_path:
return None
# derive the "canonical" path from the filepath
canonical = f"{rst_file_path.split('pkg/' + prefix)[1]}"
# we are only interested in the actual v3 protos, not their dependencies
if (prefix + canonical) not in envoy_api_protos:
return None
return canonical
def main():
proto_srcs = sys.argv[1]
envoy_api_rst_files = sys.argv[1:-1]
output_filename = sys.argv[-1]
with open(proto_srcs) as f:
# the contents of `proto_srcs` are the result of a bazel genquery,
# containing bazel target rules, eg:
#
# @envoy_api//envoy/watchdog/v3:abort_action.proto
#
# this transforms them to a list with a "canonical" form of:
#
# envoy/watchdog/v3/abort_action.proto.rst
#
envoy_api_protos = [
f"{src.split('//')[1].replace(':', '/')}.rst" for src in f.read().split("\n") if src
]
for rst_file_path in envoy_api_rst_files:
canonical = include_package(envoy_api_protos, rst_file_path, "envoy/")
if canonical is None:
canonical = include_package(envoy_api_protos, rst_file_path, "contrib/envoy/")
if canonical is None:
continue
target = os.path.join("rst-out/api-v3", canonical)
if not os.path.exists(os.path.dirname(target)):
os.makedirs(os.path.dirname(target))
shutil.copy(rst_file_path, target)
# output the generated rst files to a tarfile for consumption
# by other bazel rules
with tarfile.open(output_filename, "w") as tar:
tar.add("rst-out", arcname=".")
if __name__ == "__main__":
main()
| 32.03125 | 96 | 0.642927 | 289 | 2,050 | 4.380623 | 0.318339 | 0.056872 | 0.060821 | 0.052133 | 0.228278 | 0.193523 | 0.106635 | 0.106635 | 0.075829 | 0 | 0 | 0.006545 | 0.254634 | 2,050 | 63 | 97 | 32.539683 | 0.82199 | 0.312195 | 0 | 0.121212 | 0 | 0 | 0.100647 | 0.042416 | 0 | 0 | 0 | 0.015873 | 0 | 1 | 0.060606 | false | 0 | 0.121212 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ee7307b78f857465fe941638e5a41dd83ec835a | 15,792 | py | Python | src/wa_parser.py | ifly6/NS-WA-Authorboards | 57921457795306867844a29cdfce88bfcdd1c3f6 | [
"Apache-2.0"
] | null | null | null | src/wa_parser.py | ifly6/NS-WA-Authorboards | 57921457795306867844a29cdfce88bfcdd1c3f6 | [
"Apache-2.0"
] | null | null | null | src/wa_parser.py | ifly6/NS-WA-Authorboards | 57921457795306867844a29cdfce88bfcdd1c3f6 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2020 ifly6
import html
import io
import re
from datetime import datetime
from functools import cache
from typing import Tuple
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup
from lxml import etree
from pytz import timezone
from ratelimit import limits, sleep_and_retry
from helpers import ref
from src import wa_cacher
""" Imperium Anglorum:
This is adapted from proprietary InfoEurope code which in part does most of this already. Eg the proposal portions
which translate, the locality adjustments, API reading, etc. There is also code in beta (not-in-production)
which would have done this entirely, but I never got around to developing the VIEWS for that portion of the website.
It seems much easier just to commit something like this given that all the code is already present.
See ifly6.no-ip.org for more information. """
_headers = {
'User-Agent': 'WA parser (Auralia; Imperium Anglorum)'
}
class ApiError(Exception):
pass
@sleep_and_retry
@limits(calls=25, period=30) # 50 calls every 30 seconds they say but somehow this is fake news
def call_api(url) -> str:
response = requests.get(url, headers=_headers)
if response.status_code != 200:
raise ApiError('{} error at api url: {}'.format(response.status_code, str(url)))
return response.text
def clean_chamber_input(chamber):
""" Turns ambiguous chamber information into tuple (int, str) with chamber id and chamber name """
if type(chamber) == str:
if chamber == '1':
chamber = 1
elif chamber == '2':
chamber = 2
elif chamber == 'GA':
chamber = 1
elif chamber == 'SC':
chamber = 2
chamber_name = 'GA' if chamber == 1 else \
'SC' if chamber == 2 else ''
return chamber, chamber_name
def localised(dt: 'datetime', tz='US/Eastern'):
return timezone(tz).localize(dt)
@cache
def _category_map():
d = {'Advancement of Industry': 'Environmental Deregulation',
'Civil Rights': 'Mild',
'Human Rights': 'Mild',
'Education and Creativity': 'Artistic',
'Environmental': 'Automotive',
'Free Trade': 'Mild',
'Furtherment of Democracy': 'Mild',
'Global Disarmament': 'Mild',
'Health': 'Healthcare',
'International Security': 'Mild',
'Moral Decency': 'Mild',
'Political Stability': 'Mild',
'Regulation': 'Consumer Protection',
'Gun Control': 'Tighten',
'Social Justice': 'Mild'}
return {ref(k): v for k, v in d.items()} # force ref name for matching
# nb that this is identical to dict( ( ref(k), v ) for k, v in d.items() )
def _translate_category(category: str, s: str) -> Tuple[bool, str]:
if ref(category) in _category_map() and s == '0':
return True, _category_map()[ref(category)] # yield correct name from ref name of category
# if it isn't 0, then it doesn't apply, return given
# if not in the list, return given
return False, s
def capitalise(s):
s = s.replace('_', ' ').strip()
# exceptions
capitalisation_exceptions = wa_cacher.load_capitalisation_exceptions()
for i in capitalisation_exceptions:
if s.strip().lower() == i.strip().lower():
return i # replace with manual correction
# only capitalise words longer than 2 letters ('new') and always capitalise first
# unless the word is in given list
# > fanboys & the
s = " ".join(
w.capitalize()
if (len(w) > 2 and w not in ['for', 'and', 'nor', 'but', 'yet', 'the']) or (i == 0)
else w
for i, w in enumerate(s.split())
).strip() # avoid apostrophe capitalisations
# but capitalise st -> St
for exception in ['St']:
s = ' '.join((exception if w.lower() == exception.lower() else w)
for w in s.split())
# for split in ['-']:
# # as first should always be capitalised, not checking doesn't matter
# s = split.join(w[:1].upper() + w[1:] for i, w in enumerate(s.split(split))) # capitalise first letter only
# "Christian DeMocrats"
# python str.capitalize forces all other chars to lower
# don't use str.capitalize above
for numeral in ['ii', 'iii', 'iv', 'v', 'vi', 'vii', 'viii', 'ix', 'x']:
s = re.sub(r'(?<=\s){}$'.format(numeral), numeral.upper(), s) # matches only trailing numerals
# people used to use WA missions; capitalise these, they are separate words
s = re.sub(r'(?<=\s)(Wa|wa|wA)(?=\s)', 'WA', s) # if between two spaces
s = re.sub(r'^(Wa|wa|wA)(?=\s)', 'WA', s) # if at start (eg WA Mission of NERV-UN)
return s
def _get_council(i):
if i == 'GA' or i == 1: return 'GA'
if i == 'SC' or i == 2: return 'SC'
if i == 'UN' or i == 0: return 'UN'
raise ValueError(f'provided council code {i} is invalid')
class WaPassedResolution:
def __init__(self, **kwargs):
# core vote information
self.resolution_num = None
self.title = None
self.implementation = None
# category and strength
self.chamber = None
self.category = None
self.strength = None
# handle repeals
self.is_repealed = None
self.repealed_by = None
self.is_repeal = None
self.repeals = None
# text
self.text = None
# ancillary information
self.author = None
self.coauthor0 = None
self.coauthor1 = None
self.coauthor2 = None
self.votes_for = None
self.votes_against = None
self.council = None
self.__dict__.update(kwargs) # django does this automatically, i'm not updating it; lazy
@staticmethod
def parse_ga(res_num, council=1):
from src.wa_cacher import Cacher
try:
cacher = Cacher.load()
except FileNotFoundError:
cacher = Cacher() # init new
api_url = 'https://www.nationstates.net/cgi-bin/api.cgi?wa={}&id={}&q=resolution'.format(council, res_num)
in_cacher = cacher.contains(api_url)
if not in_cacher:
this_response = call_api(api_url)
cacher.update(api_url, this_response)
else:
this_response = cacher.get(api_url)
xml = etree.parse(io.StringIO(this_response))
if not xml.xpath('/WA/RESOLUTION/NAME'):
raise ValueError(f'resolution number {res_num} is invalid; no such resolution exists')
resolution_is_repealed = xml.xpath('/WA/RESOLUTION/REPEALED_BY') != []
resolution_is_a_repeal = xml.xpath('/WA/RESOLUTION/REPEALS_COUNCILID') != []
resolution_text = html.unescape(xml.xpath('/WA/RESOLUTION/DESC')[0].text)
resolution_author = xml.xpath('/WA/RESOLUTION/PROPOSED_BY')[0].text
print(resolution_author)
print(type(resolution_author))
if resolution_author is None or str(resolution_author).strip() == '':
raise RuntimeError('resolution author is empty')
author = capitalise(resolution_author)
resolution = WaPassedResolution(
council=_get_council(council),
resolution_num=res_num,
title=xml.xpath('/WA/RESOLUTION/NAME')[0].text,
implementation=localised(
datetime.utcfromtimestamp(int(xml.xpath('/WA/RESOLUTION/IMPLEMENTED')[0].text)),
'UTC'
).astimezone(timezone('US/Eastern')), # convert to eastern time
chamber=clean_chamber_input(xml.xpath('/WA/RESOLUTION/COUNCIL')[0].text)[1],
category=capitalise(xml.xpath('/WA/RESOLUTION/CATEGORY')[0].text),
strength=capitalise(
_translate_category(
xml.xpath('/WA/RESOLUTION/CATEGORY')[0].text, # category
xml.xpath('/WA/RESOLUTION/OPTION')[0].text # option
)[1] # get name
),
is_repealed=resolution_is_repealed,
repealed_by=int(xml.xpath('/WA/RESOLUTION/REPEALED_BY')[0].text) if resolution_is_repealed else None,
is_repeal=resolution_is_a_repeal,
repeals=int(xml.xpath('/WA/RESOLUTION/REPEALS_COUNCILID')[0].text) if resolution_is_a_repeal else None,
# text and author
text=resolution_text.strip(),
author=author.strip(),
# vote data
votes_for=int(xml.xpath('/WA/RESOLUTION/TOTAL_VOTES_FOR')[0].text),
votes_against=int(xml.xpath('/WA/RESOLUTION/TOTAL_VOTES_AGAINST')[0].text)
)
assert resolution.strength != '0', 'resolution {} has strength 0 with category {}'.format(
resolution.title, resolution.category
)
# overwrite category if repeal with the repeals field; NS API is broken sometimes for some reason
if resolution_is_a_repeal:
resolution.strength = str(int(resolution.repeals)) # cast to integer
# check for co-authors
coauth_list = xml.xpath('/WA/RESOLUTION/COAUTHOR/N')
if len(coauth_list) != 0:
print('received from API coauthors: {}'.format(
', '.join([capitalise(n.text) for n in coauth_list])
))
try:
resolution.coauthor0 = capitalise(coauth_list[0].text)
except IndexError:
pass
try:
resolution.coauthor1 = capitalise(coauth_list[1].text)
except IndexError:
pass
try:
resolution.coauthor2 = capitalise(coauth_list[2].text)
except IndexError:
pass
else:
cleaned_resolution_text = resolution_text \
.replace('[i]', '').replace('[/i]', '') \
.replace('[b]', '').replace('[/b]', '') \
.replace('[u]', '').replace('[/u]', '')
coauthor_matches = [s for s in cleaned_resolution_text.splitlines()
if re.search(
r'(Co-?((Author(ed)?:?)|written|writer) ?(by|with)? ?:?)|'
r'(This resolution includes significant contributions made by\s+)',
s, re.IGNORECASE
)]
if len(coauthor_matches) > 0:
coauthor_line = re.sub(r'Co-?((Author(ed)?:?)|written|writer) ?(by|with)? ?:? ', repl='',
string=coauthor_matches[0], flags=re.IGNORECASE)
print(f'\tidentified coauthor line: "{coauthor_line}"')
coauthor_line = coauthor_line \
.replace('[i]', '') \
.replace('[/i]', '') \
.replace('[b]', '') \
.replace('[/b]', '') \
.replace('[u]', '') \
.replace('[/u]', '')
if '[nation' in coauthor_line.lower(): # scion used the [Nation] tag instead of lower case once
amended_line = re.sub(r'(?<=\[nation)=(.*?)(?=\])', '', coauthor_line.lower()) # remove 'noflag' etc
coauthors = re.findall(r'(?<=\[nation\])(.*?)(?=\[/nation\])', amended_line.lower())
else:
# this will break with names like "Sch'tz and West Runk'land"
coauthors = re.split(r'(,? and )|(, )', coauthor_line, re.IGNORECASE)
coauthors = [i for i in coauthors if i is not None and i.strip() != 'and'] # post facto patching...
coauthors = [ref(s).replace('.', '') for s in coauthors] # cast to reference name
print(f'\tidentified coauthors as {coauthors}')
# pass each co-author in turn
'''
While it could be changed so that the original line's capitalisation is preserved, doing this might
introduce inconsistency in capitalisation of the same nation. Eg '[nation]imperium_anglorum[/nation]' would
be done under capitalisation rules while something provided as 'Imperium ANGLORUM' would be let through.
Because some authors use a ref'd name IN the nation tags, something like [nation]transilia[/nation] cannot
be disentangled from 'Transilia' if the former is proper and the latter is not. A proper-capitalisation
dictionary would be necessary and I am unwilling to download and parse all historical daily dumps for
something this minor.
'''
try:
resolution.coauthor0 = capitalise(coauthors[0])
except IndexError:
pass
try:
resolution.coauthor1 = capitalise(coauthors[1])
except IndexError:
pass
try:
resolution.coauthor2 = capitalise(coauthors[2])
except IndexError:
pass
cacher.save()
return resolution
def get_count() -> int:
soup = BeautifulSoup(call_api('http://forum.nationstates.net/viewtopic.php?f=9&t=30'), 'lxml')
resolution = soup.select('div#p310 div.content a')
return len(resolution)
def parse() -> 'pd.DataFrame':
# find the number of resolutions from Passed GA Resolutions
passed_res_max = get_count()
print(f'found {passed_res_max} resolutions')
# confirm that we have X resolutions
res_list = []
max_res = -1
for i in range(passed_res_max - 1, passed_res_max + 20): # passed resolutions should never be more than 20 behind
try:
print(f'gettingGA {i + 1} of {passed_res_max} predicted resolutions')
d = WaPassedResolution.parse_ga(i + 1).__dict__ # note that 0 returns resolution at vote, need to 1-index
res_list.append(d)
except ValueError:
print('out of resolutions; data should be complete')
max_res = i
break
print(f'found {max_res} resolutions; getting historical')
# get API information for each resolution
for i in reversed(range(0, passed_res_max - 1)): # passed_res_max is already called above
print(f'got {max_res - passed_res_max + i} of {max_res} resolutions')
print(f'getting GA {i + 1}')
r = WaPassedResolution.parse_ga(i + 1) # note that 0 returns resolution at vote, need to 1-index
d = r.__dict__ # hacky cheating to get into dict
res_list.append(d)
# put it up in pandas
df = pd.DataFrame(res_list).replace({None: np.nan})
df.drop(columns=['text'], inplace=True)
df.rename(columns={
'council': 'Council', # Auralia used these names for columns
'resolution_num': 'Number',
'title': 'Title',
'category': 'Category',
'strength': 'Sub-category',
'votes_for': 'Votes For',
'votes_against': 'Votes Against',
'implementation': 'Date Implemented',
'author': 'Author'
}, inplace=True)
df.sort_values(by='Number', inplace=True)
def join_coauthors(coauthor_list, j=', '):
""" Removes empty/whitespace-only strings and then joins """
authors = [s for s in coauthor_list if s.strip() != '']
return j.join(authors)
df['Co-authors'] = df[['coauthor0', 'coauthor1', 'coauthor2']] \
.replace({np.nan: ''}) \
.agg(join_coauthors, axis=1)
assert all(df['Sub-category'] != '0'), 'resolutions {} have sub-category 0'.format(
df.loc[df['Sub-category'] != '0', 'Title'].values
)
return df[['Number', 'Title', 'Category', 'Sub-category', 'Author', 'Co-authors',
'Votes For', 'Votes Against', 'Date Implemented']].copy() # take only relevant vars
| 38.705882 | 123 | 0.590109 | 1,915 | 15,792 | 4.772846 | 0.269974 | 0.014004 | 0.017505 | 0.035011 | 0.128556 | 0.101313 | 0.080744 | 0.030635 | 0.02407 | 0.020131 | 0 | 0.008902 | 0.288627 | 15,792 | 407 | 124 | 38.800983 | 0.8047 | 0.138804 | 0 | 0.108303 | 0 | 0.00361 | 0.191829 | 0.0411 | 0 | 0 | 0 | 0 | 0.00722 | 1 | 0.043321 | false | 0.061372 | 0.057762 | 0.00361 | 0.151625 | 0.039711 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9eec86a2c6579218afa159749612db5d5e43ce59 | 3,198 | py | Python | models/__init__.py | esentino/literate-doodle | 598533042602b989a4bdaa8778968c5f3ead3500 | [
"Apache-2.0"
] | null | null | null | models/__init__.py | esentino/literate-doodle | 598533042602b989a4bdaa8778968c5f3ead3500 | [
"Apache-2.0"
] | null | null | null | models/__init__.py | esentino/literate-doodle | 598533042602b989a4bdaa8778968c5f3ead3500 | [
"Apache-2.0"
] | 1 | 2019-09-11T21:27:37.000Z | 2019-09-11T21:27:37.000Z | # models/__init__.py
from clcrypto import password_hash
from psycopg2 import connect
def make_connection(db_name='w3'):
cnx = connect(user='postgres', password='coderslab', database=db_name, host='localhost')
cnx.autocommit = True
return cnx
class User:
__id = None
username = None
__hashed_password = None
email = None
def __init__(self):
self.__id = -1
self.username = ""
self.email = ""
self.__hashed_password = ""
@property
def id(self):
return self.__id
@property
def hashed_password(self):
return self.__hashed_password
def set_password(self, password, salt):
self.__hashed_password = password_hash(password, salt)
def save_to_db(self, cursor):
if self.__id == -1:
# saving new instance using prepared statements
sql = """INSERT INTO Users(username, email, hashed_password)
VALUES(%s, %s, %s) RETURNING id"""
values = (self.username, self.email, self.hashed_password)
cursor.execute(sql, values)
self.__id = cursor.fetchone()[0] # albo cursor.fetchone()['id']
return True
else:
sql = """UPDATE Users SET username=%s, email=%s, hashed_password=%s
WHERE id=%s"""
values = (self.username, self.email, self.hashed_password, self.id)
cursor.execute(sql, values)
return True
@staticmethod
def load_user_by_id(cursor, user_id):
sql = "SELECT id, username, email, hashed_password FROM users WHERE id=%s"
cursor.execute(sql, (user_id,)) # (user_id, ) - bo tworzymy krotkę
data = cursor.fetchone()
if data:
loaded_user = User()
loaded_user.__id = data[0]
loaded_user.username = data[1]
loaded_user.email = data[2]
loaded_user.__hashed_password = data[3]
return loaded_user
else:
return None
@staticmethod
def find_by_email(cursor, username):
sql = "SELECT id, username, email, hashed_password FROM users WHERE email=%s"
cursor.execute(sql, (username,)) # (user_id, ) - bo tworzymy krotkę
data = cursor.fetchone()
if data:
loaded_user = User()
loaded_user.__id = data[0]
loaded_user.username = data[1]
loaded_user.email = data[2]
loaded_user.__hashed_password = data[3]
return loaded_user
else:
return None
@staticmethod
def find_all( cursor):
sql = "SELECT id, username, email, hashed_password FROM Users"
ret = []
cursor.execute(sql)
for row in cursor.fetchall():
loaded_user = User()
loaded_user.__id = row[0]
loaded_user.username = row[1]
loaded_user.email = row[2]
loaded_user.__hashed_password = row[3]
ret.append(loaded_user)
return ret
def delete(self, cursor):
sql = "DELETE FROM Users WHERE id=%s"
cursor.execute(sql, (self.__id,))
self.__id = -1
return True
| 31.663366 | 92 | 0.581614 | 374 | 3,198 | 4.724599 | 0.216578 | 0.101868 | 0.054329 | 0.061121 | 0.44086 | 0.426712 | 0.411998 | 0.389926 | 0.309564 | 0.282965 | 0 | 0.008261 | 0.318637 | 3,198 | 100 | 93 | 31.98 | 0.802662 | 0.049719 | 0 | 0.404762 | 0 | 0 | 0.147709 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119048 | false | 0.214286 | 0.02381 | 0.02381 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9ef839c4fcb13ab1bd28852911644c75dc9c3837 | 48,320 | py | Python | neon/backends/gpu.py | kashif/neon | d4d8ed498ee826b67f5fda1746d2d65c8ce613d2 | [
"Apache-2.0"
] | 1 | 2018-07-17T16:54:58.000Z | 2018-07-17T16:54:58.000Z | neon/backends/gpu.py | kashif/neon | d4d8ed498ee826b67f5fda1746d2d65c8ce613d2 | [
"Apache-2.0"
] | null | null | null | neon/backends/gpu.py | kashif/neon | d4d8ed498ee826b67f5fda1746d2d65c8ce613d2 | [
"Apache-2.0"
] | 2 | 2016-06-09T13:05:00.000Z | 2021-02-18T14:18:15.000Z | # ----------------------------------------------------------------------------
# Copyright 2014 Nervana Systems Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ----------------------------------------------------------------------------
"""
Neon backend wrapper for the NervanaGPU library. Most functions are thin
wrappers around functions from the NervanaGPU class, the GPUTensor is taken
directly from NervanaGPU as well.
NervanaGPU is available at `<https://github.com/NervanaSystems/nervanagpu>`
"""
import logging
from neon.backends.backend import Backend
from nervanagpu import NervanaGPU
from neon.diagnostics.timing_decorators import FlopsDecorator
import pycuda.driver as drv
import numpy as np
logger = logging.getLogger(__name__)
class GPU(Backend):
"""
Sets up a NervanaGPU based backend for matrix operations.
Note that some functions defined in the generic Backend class such as
cross-map pooling and normalization and are not implemented for
this backend.
"""
default_dtype = np.float32
def __init__(self, rng_seed, stochastic_round=False, device_id=0):
import pycuda.driver as drv
drv.init()
global ctx
ctx = drv.Device(device_id).make_context()
import atexit
atexit.register(ctx.pop)
self.ng = NervanaGPU(stochastic_round=stochastic_round)
logger.info("Initialized NervanaGPU with stochastic_round=%s",
stochastic_round)
self.rng_seed = rng_seed
self.rng_init()
self.device_id = device_id if device_id is not None else 0
def __getstate__(self):
"""
Defines what and how we go about serializing an instance of this class.
Returns:
self.__dict__: The full contents of the backend class instance,
except for the mem_pool which is on device and
cannot be serialized.
"""
if hasattr(self, 'mem_pool') and self.mem_pool is not None:
self.mem_pool_pickle = {'shape': self.mem_pool.shape,
'dtype': np.float32}
self.mem_pool = None
return self.__dict__
def __setstate__(self, state):
"""
Defines how we go about deserializing into an instance of this class.
Arguments:
self.__dict__: The full contents of the backend class instance,
except for the mem_pool which is on device and
cannot be serialized.
"""
self.__dict__.update(state)
self.mem_pool = self.ng.empty(self.mem_pool_pickle['shape'],
dtype=self.mem_pool_pickle['dtype'])
def init_mempool(self, shape, dtype=default_dtype):
"""
Allocates a memory pool for temporary storage
"""
self.mem_pool = self.ng.empty(shape, dtype=dtype)
def alloc_host_mem(self, shape, dtype=default_dtype):
return drv.pagelocked_empty(shape, dtype, order="C", mem_flags=0)
def create_stream(self):
return drv.Stream()
def synchronize(self):
pass
def async_copy(self, dest, src, stream=None):
drv.memcpy_htod_async(dest.gpudata, src, stream)
def rng_init(self):
"""
Initialize and seed the pseudo random number genrator. Random numbers
are generated on the host using numpy, then transfered to device.
"""
seed = None
if 'rng_seed' in self.__dict__:
seed = self.rng_seed
logger.info("Seeding random number generator with: %s", str(seed))
np.random.seed(seed)
def flop_timing_init(self, decorate_fc, decorate_conv, decorate_ew):
"""
Initialize FLOP timing. Wraps the specified MOP calls via a decorator
to record elapsed time and number of operations.
Arguments:
decorate_fc (list): string giving the function names of fully
connected layer forward/backward/update calls
to time.
decorate_conv (list): string giving the function names of
convolutional layer forward/backward/update
calls to time.
decorate_ew (list): string giving the function names of element-wise
calls to time.
Notes:
Must be called prior to first flop_timing_start call
"""
self.start = drv.Event()
self.end = drv.Event()
self.flop_timer = FlopsDecorator(self)
self.flop_timer.decorate(decorate_fc=decorate_fc,
decorate_conv=decorate_conv,
decorate_ew=decorate_ew)
def flop_timinig_start(self):
"""
Start a new FLOP timer.
Returns:
None: dummy value (not used)
"""
return self.start.record()
def flop_timing_finish(self, start_time):
"""
Complete current FLOP timing.
Arguments:
start_time (unused): ignored.
Returns:
float: elapsed time in seconds since prior flop_timing_start call.
"""
self.end.record()
self.end.synchronize()
return self.end.time_since(self.start)
def uniform(self, low=0.0, high=1.0, size=1, dtype=default_dtype,
persist_values=True, name=None):
"""
generate numpy random number and convert to a GPUTensor.
If called with dype=None it will probably explode
"""
ary = np.random.uniform(low, high, size)
return self.ng.array(ary, dtype=dtype, name=name)
def normal(self, loc=0.0, scale=1.0, size=1, dtype=default_dtype,
persist_values=True, name=None):
"""
Gaussian/Normal random number sample generation
"""
ary = np.random.normal(loc, scale, size)
return self.ng.array(ary, dtype=dtype, name=name)
def fprop_fc(self, out, inputs, weights, layer=None):
"""
Forward propagate the inputs of a fully connected network layer to
produce output pre-activations (ready for transformation by an
activation function).
Arguments:
out (GPUTensor): Where to store the forward propagated results.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
weights (GPUTensor): The weight coefficient values for this layer.
layer (Layer): The layer object.
"""
self.ng.dot(weights, inputs, out)
def bprop_fc(self, out, weights, deltas, layer=None):
"""
Backward propagate the error through a fully connected network layer.
Arguments:
out (GPUTensor): Where to store the backward propagated errors.
weights (GPUTensor): The weight coefficient values for this layer.
deltas (GPUTensor): The error values for this layer
layer (Layer): The layer object.
"""
self.ng.dot(weights.T, deltas, out)
def update_fc(self, out, inputs, deltas, layer=None):
"""
Compute the updated gradient for a fully connected network layer.
Arguments:
out (GPUTensor): Where to store the updated gradient value.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
deltas (GPUTensor): The error values for this layer
layer (Layer): The layer object.
"""
self.ng.dot(deltas, inputs.T, out)
def update_fc_bias(self, err, out):
"""
Compute the updated bias gradient for a fully connected network layer.
Arguments:
out (GPUTensor): Where to store the updated gradient value.
err (GPUTensor): backpropagated error
"""
self.ng.sum(err, axis=1, out=out)
def add_fc_bias(self, inputs, bias):
"""
Add the bias for a fully connected network layer.
Arguments:
inputs (GPUTensor): the input to update.
bias (GPUTensor): the amount to increment
"""
self.ng.add(inputs, bias, out=inputs)
def fprop_conv(self, out, inputs, weights, ofmshape, ofmsize, ofmlocs,
ifmshape, links, nifm, padding, stride, ngroups, fpropbuf,
local=False):
"""
Forward propagate the inputs of a convolutional network layer to
produce output pre-activations (ready for transformation by an
activation function).
Arguments:
out (GPUTensor): Where to store the forward propagated results.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
weights (GPUTensor): The weight coefficient values for this layer.
ofmshape (tuple): Dimensions of each output feature map (typically
number of height and width neurons).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element
in each output feature map stored in out.
ifmshape (tuple): Dimensions of each input feature map (typically
number of height and width neurons). For this
backend we expect these values to be square.
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
convolution operation.
stride (int): Number of neurons to shift the filter at each step.
ngroups (int): Number of groups.
fpropbuf (GPUTensor): Temporary storage buffer used to hold the
convolved outputs for a single receptive
field. Not used for this backend.
local (bool, optional): Whether to do local filtering (True) or
convolution (False, the default)
"""
'''
N: Number of images in mini-batch
C: Number of input feature maps
K: Number of output feature maps
D: Depth of input image
H: Height of input image
W: Width of input image
T: Depth of filter kernel
R: Height of filter kernel
S: Width of filter kernel
'''
self.ng.fprop_conv(layer=fpropbuf, I=inputs, F=weights, O=out,
alpha=1.0, repeat=1)
def bprop_conv(self, out, weights, deltas, ofmshape, ofmsize, ofmlocs,
ifmshape, links, padding, stride, nifm, ngroups, bpropbuf,
local=False):
"""
Backward propagate the error through a convolutional network layer.
Arguments:
out (GPUTensor): Where to store the backward propagated errors.
weights (GPUTensor): The weight coefficient values for this layer.
deltas (GPUTensor): The error values for this layer
ofmshape (tuple): Dimensions of each output feature map (typically
height and width).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
ifmshape (tuple): Dimensions of each input feature map (typically
height and width).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
convolution operation.
stride (int): Number of neurons to shift the filter at each step.
ngroups (int): Number of groups.
bpropbuf (GPUTensor): Temporary storage buffer used to hold the
backpropagated error for a single receptive
field
local (bool, optional): Whether to do local filtering (True) or
convolution (False, the default)
"""
self.ng.bprop_conv(layer=bpropbuf, F=weights, E=deltas, grad_I=out,
alpha=1.0, repeat=1)
def update_conv(self, out, inputs, weights, deltas, ofmshape, ofmsize,
ofmlocs, ifmshape, links, nifm, padding, stride, ngroups,
fwidth, updatebuf, local=False, layer=None):
"""
Compute the updated gradient for a convolutional network layer.
Arguments:
out (GPUTensor): Where to store the updated gradient value.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
weights (GPUTensor): The weight coefficient values for this layer.
deltas (GPUTensor): The error values for this layer
ofmshape (tuple): Dimensions of each output feature map (typically
height and width).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
ifmshape (tuple): Dimensions of each input feature map (typically
height and width).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
convolution operation.
stride (int): Number of neurons to shift the filter at each step.
ngroups (int): Number of groups.
fwidth (int): Filter width.
updatebuf (GPUTensor): Temporary storage buffer used to hold the
updated gradient for a single receptive
field
local (bool, optional): Whether to do local filtering (True) or
convolution (False, the default)
layer (Layer): The layer object.
"""
self.ng.update_conv(layer=updatebuf, I=inputs, E=deltas, grad_F=out,
alpha=1.0, repeat=1)
def fprop_pool(self, out, inputs, op, ofmshape, ofmsize, ofmlocs, fshape,
ifmshape, links, nifm, padding, stride, fpropbuf):
"""
Forward propagate the inputs of a Pooling network layer to
produce output pre-activations (ready for transformation by an
activation function).
Arguments:
out (GPUTensor): Where to store the forward propagated results.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
op (string): The type of pooling operation to apply. We support
"max", "avg", "l2" currently.
ofmshape (tuple): Dimensions of each output feature map (typically
number of height and width neurons).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
fshape (tuple): Dimensions of each filter (typically height and
width).
ifmshape (tuple): Dimensions of each input feature map (typically
number of height and width neurons).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
pooling operation.
stride (int): Number of neurons to shift the filter at each step.
fpropbuf (GPUTensor): Temporary storage buffer used to hold the
pooled outputs for a single receptive field.
"""
op = op.lower()
if op == "max":
self.ng.fprop_pool(layer=fpropbuf, I=inputs, O=out, repeat=1)
else:
raise AttributeError("unexpected pooling op type: %s", op)
def bprop_pool(self, out, fouts, inputs, deltas, op, ofmshape, ofmsize,
ofmlocs, fshape, fpsize, ifmshape, links, nifm, padding,
stride, bpropbuf):
"""
Backward propagate the error through a pooling network layer.
Arguments:
out (GPUTensor): Where to store the backward propagated errors.
fouts (GPUTensor): Forward propagated outputs from the previous
layer.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
deltas (GPUTensor): The error values for this layer
op (string): The type of pooling operation to apply. We support
"max", "avg", "l2" currently.
ofmshape (tuple): Dimensions of each output feature map (typically
height and width).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
fshape (tuple): Dimensions of each filter (typically height and
width).
fpsize (int): The size of each filter.
ifmshape (tuple): Dimensions of each input feature map (typically
height and width).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
pooling operation.
stride (int): Number of neurons to shift the filter at each step.
bpropbuf (GPUTensor): Temporary storage buffer used to hold the
backpropagated error for a single receptive
field
"""
op = op.lower()
if op == "max":
self.ng.bprop_pool(layer=bpropbuf, I=inputs, E=deltas, grad_I=out,
repeat=1)
else:
raise AttributeError("unexpected pooling op type: %s", op)
def logistic(self, x, out):
"""
Logistic sigmoid nonlinearity, 1/(1+exp(-x))
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
"""
self.ng.sig(x, out=out)
return out
def transpose(self, untransposed, transposed):
transposed[:] = untransposed.T
def crossent(self, y, t, partial, out, epsilon, doscale, ismulti=False):
"""
Computes cross entropy cost.
Arguments:
y (GPUTensor): Model outputs
t (GPUTensor): Targets
partial (GPUTensor): temporary buffer used for 2D reduction
out (GPUTensor): Storage for the cross entropy output
epsilon (float): constant for numerical stability
doscale (boolean): If True, cross_entropy is scaled by batch size
ismulti (boolean): If True, compute multi class cross_entropy
"""
sumbuf = partial.reshape((partial.size, 1))[:partial.shape[0]]
if ismulti:
self.ng.sum(-t * self.ng.log(y + epsilon),
axis=None, partial=sumbuf, out=out)
else:
self.ng.sum((t - 1) * self.ng.log(1 - y + epsilon) -
t * self.ng.log(y + epsilon),
axis=None, partial=sumbuf, out=out)
if doscale:
out[:] = out / y.shape[1]
return out
def logistic_compound(self, inputs, outputs):
"""
Applies logistic function and its derivative to the dataset passed.
Arguments:
inputs (GPUTensor): Input data to be transformed. This also
acts as storage for the output of the
derivative function.
outputs (GPUTensor): Storage for the transformed output.
"""
# Apply the logistic function.
outputs[:] = self.ng.sig(inputs)
inputs[:] = (1.0 - outputs) * inputs
def rectlin(self, x, out):
"""
Rectified Linear nonlinearity
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
"""
self.ng.maximum(x, 0., out=out)
return out
def rectlin_derivative(self, x, out):
"""
Rectified linear nonlinearity derivative
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
"""
self.ng.greater(x, 0, out=out)
return out
def rectleaky(self, x, slope, out):
"""
Leaky rectified linear nonlinearity
Arguments:
x (GPUTensor): Input tensor
slope (float): amount of gradient to apply when unit is not active
out (GPUTensor): Output tensor
"""
out[:] = self.ng.maximum(x, x*slope)
def rectleaky_derivative(self, x, slope, out):
"""
Leaky rectified linear nonlinearity derivative
Arguments:
x (GPUTensor): Input tensor
slope (float): amount of gradient to apply when unit is not active
out (GPUTensor): Output tensor
"""
out[:] = self.ng.greater(x, 0) * (1.0 - slope) + slope
def sum(self, tsr, axes, out):
"""
Sum
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.sum(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.sum(tsr, axis=axes, out=out)
return out
def norm(self, tsr, order=None, axis=None, out=None):
"""
Calculates and returns the vector p-norms of the GPUTensor along the
specified axis. The p-norm is defined on a vector A as
:math:`||A||_p = \sum_i(|A_i|^p)^{1/p}`.
Arguments:
tsr (GPUTensor): the GPUTensor on which to find the norms
order (int): The order or p upon which the norm is calculated.
Valid values include:
None, inf, -inf, 0, 1, -1, 2, -2, ...
axis (int): The axis along which to compute vector norms.
out (GPUTensor): where to write the results to. Must be
of the expected result shape.
Returns:
GPUTensor: p-norm of tsr along the specified axis.
Raises:
IndexError if invalid axis specified
AttributeError if invalid order specified
See Also:
`numpy.linalg.norm`
"""
if not isinstance(axis, int) or axis < 0 or axis >= len(tsr.shape):
raise IndexError("invalid axis value: %s", axis)
if not isinstance(order, (int, float)):
raise AttributeError("invalid order value: %s", order)
if out is None:
raise AttributeError("No output tensor speficied", order)
if order == float('Inf'):
self.ng.max(self.fabs(tsr), axis, out)
elif order == float('-Inf'):
self.ng.min(self.fabs(tsr), axis, out)
elif order == 0:
tmp = self.zeros(tsr.shape)
self.ng.not_equal(tsr, tmp, tmp)
self.ng.sum(tmp, axis, out)
else:
tmp = self.empty(tsr.shape)
self.ng.power(self.fabs(tsr), order, tmp)
self.ng.sum(tmp, axis, out)
self.ng.power(out, (1.0 / order), out)
return out
def mean(self, tsr, axes, out):
"""
Calculates the arithmetic mean of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.mean(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.mean(tsr, axis=axes, out=out)
return out
def min(self, tsr, axes, out):
"""
Calculates the minimum of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.min(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.min(tsr, axis=axes, out=out)
return out
def max(self, tsr, axes, out):
"""
Calculates the maximum of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.max(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.max(tsr, axis=axes, out=out)
return out
def variance(self, tsr, axes, out, mean=None):
"""
Calculates the variance of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): the tensor on which to compute the variance
axes (int, list, optional): the dimension(s) along which to
variance. If set to None, we will
variance over all dimensions.
out (GPUTensor): where the result will be stored.
mean (GPUTensor): the tensor containing mean of tsr
Returns:
GPUTensor: reference to out
"""
if mean is None:
logger.error("GPUTensor requires mean to be specified.")
raise ValueError("mean not specified")
self.ng.mean(self.ng.square(tsr-mean), axis=axes, out=out)
return out
def fabs(self, x, out):
"""
Calculates absolute value of the elements in a tensor
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
Returns:
GPUTensor: reference to out
"""
self.ng.fabs(x, out=out)
return out
def sqrt(self, x, out):
"""
Calculates square root of the elements in a tensor
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
Returns:
GPUTensor: reference to out
"""
self.ng.sqrt(x, out=out)
return out
def zeros(self, shape, dtype=default_dtype, persist_values=True):
"""
Allocate a new GPUTensor and fill it with zeros.
Arguments:
shape (tupel): Shape of the desired GPUTensor
dtype (dtype): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
GPUTensor: output
"""
return self.ng.zeros(shape, dtype=dtype)
def ones(self, shape, dtype=default_dtype, persist_values=True):
"""
Allocate a new GPUTensor and fill it with ones.
Arguments:
shape (tupel): Shape of the desired GPUTensor
dtype (dtype): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
GPUTensor: output
"""
return self.ng.ones(shape, dtype=dtype)
def zeros_like(self, ary, dtype=default_dtype, persist_values=True,
name=None):
"""
Instantiate a new instance of this backend's Tensor class, with the
shape taken from ary and populating each element with a value of 0.
Arguments:
ary (tensor object): Tensor to inherit the dimensions of.
dtype (data-type, optional): If present, specifies the underlying
type to employ for each element.
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
Tensor: array object
Raises:
NotImplementedError: Can't be instantiated directly.
See Also:
:py:func:`~neon.backends.backend.Backend.empty`,
:py:func:`~neon.backends.backend.Backend.ones`,
:py:func:`~neon.backends.backend.Backend.array`
"""
return self.zeros(ary.shape, dtype=dtype,
persist_values=persist_values)
def empty_like(self, ary, dtype=default_dtype, persist_values=True,
name=None):
"""
Instantiate a new instance of this backend's Tensor class, with the
shape taken from ary.
Arguments:
ary (tensor object): Tensor to inherit the dimensions of.
dtype (data-type, optional): If present, specifies the underlying
type to employ for each element.
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
Tensor: array object
Raises:
NotImplementedError: Can't be instantiated directly.
See Also:
:py:func:`~neon.backends.backend.Backend.empty`,
:py:func:`~neon.backends.backend.Backend.ones`,
:py:func:`~neon.backends.backend.Backend.array`
"""
return self.empty(ary.shape, dtype=dtype,
persist_values=persist_values, name=name)
def empty(self, shape, dtype=default_dtype, persist_values=True,
name=None):
"""
Allocate a new GPUTensor.
Arguments:
shape (tupel): Shape of the desired GPUTensor
dtype (dtype): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
GPUTensor: output
"""
return self.ng.empty(shape, dtype=dtype)
def copy(self, ary):
"""
returns a copy of ary
"""
res = self.empty_like(ary)
res.copy(ary)
return res
def array(self, ary, dtype=default_dtype, persist_values=True, name=None,
allocator=drv.mem_alloc):
"""
Allocate a new GPUTensor and fill it with supplied numpy array.
Arguments:
ary (ndarray): Numpy array with source data
dtype (dtype, optional): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
name (string): Name for the GPUTensor
allocator (pycuda): Pycuda memory allocator
Returns:
GPUTensor: output
"""
return self.ng.array(ary, dtype=dtype, name=name)
def add(self, left, right, out):
"""
Elementwise addition
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.add(left, right, out=out)
return out
def subtract(self, left, right, out):
"""
Elementwise subtraction
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.subtract(left, right, out=out)
return out
def multiply(self, left, right, out):
"""
Elementwise multiplication
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.multiply(left, right, out=out)
return out
def divide(self, left, right, out):
"""
Elementwise division
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.divide(left, right, out=out)
return out
def greater(self, left, right, out):
"""
Elementwise greater than testing
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.greater(left, right, out=out)
return out
def equal(self, left, right, out):
"""
Performs element-wise equality testing on each element of left and
right, storing the result in out. Each operand is assumed to be the
same shape (or broadcastable as such).
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.equal(left, right, out=out)
return out
def not_equal(self, left, right, out):
"""
Elementwise not equal testing
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.not_equal(left, right, out=out)
return out
def clip(self, a, a_min, a_max, out):
"""
Elementwise clipping between a range of specified values
Arguments:
a (GPUTensor): input tensor.
a_min (float): floor value.
a_max (float): ceiling value.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.clip(a, a_min, a_max, out=out)
return out
def log(self, a, out):
"""
Elementwise base-e logarithm
Arguments:
a (GPUTensor): input tensor.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.log(a, out=out)
return out
def tanh(self, a, out):
"""
Elementwise tanh
Arguments:
a (GPUTensor): input tensor.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.tanh(a, out=out)
return out
def argmax(self, a, out, axis=0):
"""
Calculates the indices of the maximal element value along the specified
axis. If multiple elements contain the maximum, only the elements of
the first are returned.
Arguments:
tsr (GPUTensor): The GPUTensor on which to find the maximum indices
axis (int): The dimension along which to find the maximum. If set
to None, find the overall maximum index of a flattened
representation of tsr.
out (GPUTensor): Where to store the result. Should be of the
appropriate type and expected shape
Returns:
GPUTensor: reference to out
"""
self.ng.argmax(a, out=out, axis=axis)
return out
def softmax(self, x, out):
"""
Softmax nonlinearity. Computes exp(x-max(x)) / sum_i exp(x_i-max(x_i))
Arguments:
x (GPUTensor): input tensor.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
out[:] = (self.ng.reciprocal(self.ng.sum(
self.ng.exp(x - self.ng.max(x, axis=0)), axis=0)) *
self.ng.exp(x - self.ng.max(x, axis=0)))
return out
def softmax_gradient(self, y, err, out):
"""
Gradient of the softmax nonlinearity.
Arguments:
y (GPUTensor): input tensor.
err (GPUTensor): backpropagated error.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
raise NotImplementedError("Softmax gradient should use shortcut")
return out
def make_binary_mask(self, tsr, keepthresh=0.5, dtype=default_dtype):
"""
Create a binary mask for dropout layers.
Arguments:
tsr (GPUTensor): Output tensor
keepthresh (float): fraction of ones
"""
self.ng.dropout(keep=keepthresh, out=tsr)
def gdm_compound(self, ps_item, us_item, vs_item, momentum_coef,
learning_rate, epoch):
"""
Perform gradient descent update with momentum.
Arguments:
ps_item (GPUTensor): parameter tensor (e.g. a weight matrix)
us_item (GPUTensor): update tensor, contains gradient wrt. weights
vs_item (GPUTensor): velocity tensor.
momentum_coef (float): momentum coefficient.
learning_rate (float): learning rate.
epoch (int): epoch (used in conjunction with diagnostics).
Outputs are written to vs_item (updated velocity)
and ps_item (updated weights)
"""
vs_item[:] = vs_item * momentum_coef - us_item * learning_rate
ps_item[:] = ps_item + vs_item
def gdmwd_compound(self, ps_item, us_item, vs_item, momentum_coef,
learning_rate, wd, epoch):
"""
Perform gradient descent update with momentum and weight decay.
Arguments:
ps_item (GPUTensor): parameter tensor (e.g. a weight matrix)
us_item (GPUTensor): update tensor, contains gradient wrt. weights
vs_item (GPUTensor): velocity tensor.
momentum_coef (float): momentum coefficient.
learning_rate (float): learning rate.
wd (float): weight decay parameter.
epoch (int): epoch (used in conjunction with diagnostics).
Outputs:
ps_item, the updated weights.
vs_item, the updated velocity.
us_item, used as a temp buffer.
"""
vs_item[:] = (vs_item * momentum_coef -
us_item * learning_rate -
ps_item * learning_rate * wd)
ps_item[:] = ps_item + vs_item
def exp_mavg(self, mavg, newval, rho):
"""
Calculate the exponential moving average
Arguments:
mavg: The running value of the moving average
newval: New sample to be added to the moving average
rho: Interpolation value
"""
mavg[:] = rho * mavg + (1.0 - rho) * newval
def ada_update(self, ps_item, us_item, gs_item, ds_item, ls_item, ss_item,
rho, epsilon):
"""
Update rule for AdaDelta (Zeiler, http://arxiv.org/abs/1212.5701)
Arguments:
ps_item: weight / parameter (will be updated)
us_item: update
gs_item: expected value of Gradient Squared (will be updated)
ds_item: expected value of Delta Squared (will be updated)
ls_item: learning rate (will be updated)
ss_item: Scratch Space
rho: decay constant (determines window size)
epsilon: small positive constant for numerical stability
"""
# Accumulate E[Grad^2]
gs_item[:] = gs_item * rho + (1.0 - rho) * us_item * us_item
# Calculate Updates
ls_item[:] = self.ng.sqrt((ds_item + epsilon) /
(gs_item + epsilon)) * (-1.0) * us_item
# Accumulate E[Delt^2]
ds_item[:] = ds_item * rho + (1.0 - rho) * ls_item * ls_item
# Final update to the params
ps_item[:] = ps_item + ls_item
def rms_update(self, params, updates, run_squares, velocity, scratch_space,
gamma, epsilon, learning_rate, momentum_coef):
# Update running squares
run_squares[:] = gamma * run_squares + (1. - gamma) * updates * updates
# Now scale the gradient by lr / rms(grad) (with a epsilon term for
# stability) and use it to update the params
if momentum_coef == 0:
params[:] = params - learning_rate * updates * self.ng.reciprocal(
self.ng.sqrt(run_squares) + epsilon)
else:
velocity[:] = velocity * momentum_coef - \
learning_rate * updates * \
self.ng.reciprocal(self.ng.sqrt(run_squares) + epsilon)
params[:] = params + velocity
def fprop_bn_compound(self, inputs, beta, gamma, eps, xhat,
xmean, xvar, gmean, gvar, rho, out):
"""
Batch normalization forward pass, compounded to run in 3 kernel calls.
Arguments:
inputs: input data to be normalized
beta: location parameter
gamma: scale parameter
eps: small constant for numerical stability
xvar: variance (updated)
xhat: normalized input (updated)
out: normalized and rescaled input (updated)
"""
xvar[:] = self.ng.var(inputs, axis=1)
xmean[:] = self.ng.mean(inputs, axis=1)
gmean[:] = gmean * rho + (1.0 - rho) * xmean
gvar[:] = gvar * rho + (1.0 - rho) * xvar
xvar[:] = self.ng.reciprocal(self.ng.sqrt(xvar + eps))
xhat[:] = xvar * (inputs - xmean)
out[:] = xhat * gamma + beta
return out
def bprop_bn_compound(self, xhat, error, xvar, gamma,
beta_updates, gamma_updates):
"""
Batch normalization backward pass, compounded to run with 4 kernel
calls.
Arguments:
xhat: normalized input data (updated)
error: backpropagated deltas (updated)
xvar: precomputed variance
gamma: scale parameter
beta_updates: gradient update for beta (updated)
gamma_updates: gradient update for gamma (updated)
"""
gamma_updates[:] = self.ng.sum(xhat * error, axis=1)
beta_updates[:] = self.ng.sum(error, axis=1)
xhat[:] = (xhat * gamma_updates + beta_updates) / float(xhat.shape[1])
error[:] = xvar * gamma * (error - xhat)
| 39.736842 | 79 | 0.556126 | 5,446 | 48,320 | 4.882666 | 0.125046 | 0.0176 | 0.011733 | 0.011846 | 0.618405 | 0.581475 | 0.560453 | 0.529803 | 0.504306 | 0.489263 | 0 | 0.003858 | 0.366991 | 48,320 | 1,215 | 80 | 39.769547 | 0.865498 | 0.572641 | 0 | 0.224359 | 0 | 0 | 0.024425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217949 | false | 0.003205 | 0.025641 | 0.00641 | 0.378205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ef87644a467b7a43c75ac4ae95f1780dab19950 | 3,934 | py | Python | algopy/base_type.py | arthus701/algopy | 1e2430f803289bbaed6bbdff6c28f98d7767835c | [
"Unlicense"
] | 54 | 2015-03-05T13:38:08.000Z | 2021-11-29T11:54:48.000Z | algopy/base_type.py | arthus701/algopy | 1e2430f803289bbaed6bbdff6c28f98d7767835c | [
"Unlicense"
] | 7 | 2016-04-06T11:25:00.000Z | 2020-11-09T13:53:20.000Z | algopy/base_type.py | arthus701/algopy | 1e2430f803289bbaed6bbdff6c28f98d7767835c | [
"Unlicense"
] | 13 | 2015-01-17T17:05:56.000Z | 2021-08-05T01:13:16.000Z | """
This implements an abstrace base class Ring .
Rationale:
Goal is to separate the datatype specification from the algorithms and containers for the following reasons:
1) It allows to directly use the algorithms *without* overhead. E.g. calling mul(z.data, x.data, y.data)
has much less overhead than z = x.__mul__(y). data is to be kept as close as possible to
machine primitives. E.g. data is array or tuple of arrays.
2) Potential reuse of an algorithm in several datatypes.
3) Relatively easy to connect high performance algorithms with a very highlevel abstract description.
For instance, most programming languages allow calling C-functions. Therefore, the algorithms
should be given as void fcn(int A, double B, ...)
For instance, the datatype is a truncated Taylor polynomial R[t]/<t^D> of the class Foo.
The underlying container is a simple array of doubles.
"""
import numpy
class Ring(object):
"""
An abstract base class in an attempt to follow the DRY principle.
It implements the algebraic class of a ring as defined on
http://en.wikipedia.org/wiki/Ring_%28mathematics%29
The idea is that the set is described in data and the operations +,* etc.
are implemented as functions that operate on the data.
E.g. the factor ring of natural numbers modulo 4, x.data = 3 y.data = 2
then z = add(x,y) is implemented as
def add(x,y):
return self.__class__((x.data*y.data)%4)
and one obtains z.data = 1
Warning:
Since this class is only of little value it may be deprecated in the future.
"""
data = NotImplementedError()
def totype(self, x):
"""
tries to convert x to an object of the class
works for : scalar x, numpy.ndarray x
Remark:
at the moment, scalar x expanded as Ring with the same degree as self though.
The reason is a missing implementation that works for graded rings of different degree.
Once such implementations exist, this function should be adapted.
"""
if numpy.isscalar(x):
xdata = self.__class__.__zeros_like__(self.data)
self.__class__.__scalar_to_data__(xdata, x)
return self.__class__(xdata)
elif isinstance(x, numpy.ndarray):
raise NotImplementedError('sorry, not implemented just yet')
elif not isinstance(x, self.__class__):
raise NotImplementedError('Cannot convert x\n type(x) = %s but expected type(x) = %s'%(str(type(x))))
else:
return x
def __add__(self, rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.add(retval.data, self.data, rhs.data)
return retval
def __sub__(self, rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.sub(retval.data, self.data, rhs.data)
return retval
def __mul__(self,rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.mul(retval.data, self.data, rhs.data)
return retval
def __truediv__(self,rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.div(retval.data, self.data, rhs.data)
return retval
def __radd__(self, lhs):
return self + lhs
def __rmul__(self, lhs):
return self * lhs
def zeros_like(self):
return self.__class__(self.__class__.__zeros_like__(self.data))
def __str__(self):
return str(self.data)
| 35.125 | 113 | 0.630147 | 530 | 3,934 | 4.401887 | 0.373585 | 0.073296 | 0.039006 | 0.046292 | 0.243463 | 0.243463 | 0.223746 | 0.223746 | 0.193742 | 0.125161 | 0 | 0.004296 | 0.290036 | 3,934 | 111 | 114 | 35.441441 | 0.831006 | 0.482206 | 0 | 0.285714 | 0 | 0.02381 | 0.047439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.02381 | 0.095238 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7300890aeb852238c2f50f2aafaca22c70ba3108 | 158 | py | Python | Python/Back_solve_python/back_joon/StringArray/P10808.py | skyriv213/Studyriv | 6dfd3c52a873cd3bdb018280d81aec8bdcf61e6e | [
"MIT"
] | null | null | null | Python/Back_solve_python/back_joon/StringArray/P10808.py | skyriv213/Studyriv | 6dfd3c52a873cd3bdb018280d81aec8bdcf61e6e | [
"MIT"
] | null | null | null | Python/Back_solve_python/back_joon/StringArray/P10808.py | skyriv213/Studyriv | 6dfd3c52a873cd3bdb018280d81aec8bdcf61e6e | [
"MIT"
] | null | null | null | s = input()
num = [0] * 26
for i in range(len(s)):
num[ord(s[i])-97] += 1
for i in num:
print(i, end = " ")
if i == len(num)-1:
print(i)
| 15.8 | 26 | 0.455696 | 31 | 158 | 2.322581 | 0.516129 | 0.111111 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064815 | 0.316456 | 158 | 9 | 27 | 17.555556 | 0.601852 | 0 | 0 | 0 | 0 | 0 | 0.006329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
73009bb6994a5ff455eca19ffc1b698f9cf1d1d2 | 600 | py | Python | src/reliefcpp/utils.py | ferrocactus/reliefcpp | 41705a9e5c749e700f83f9fe9f352457ae57426d | [
"MIT"
] | null | null | null | src/reliefcpp/utils.py | ferrocactus/reliefcpp | 41705a9e5c749e700f83f9fe9f352457ae57426d | [
"MIT"
] | null | null | null | src/reliefcpp/utils.py | ferrocactus/reliefcpp | 41705a9e5c749e700f83f9fe9f352457ae57426d | [
"MIT"
] | null | null | null | from enum import Enum
from numpy import isin
class Metric(Enum):
EUCLIDEAN = 0
MANHATTAN = 1
HAMMING = 2
L2 = 3
L1 = 4
metric_names = [
"euclidean",
"manhattan",
"hamming",
"l2",
"l1"
]
def _validate_metric(metric_name):
if isinstance(metric_name, Metric):
return metric_name.value
elif isinstance(metric_name, str):
metric_name = metric_name.lower()
return metric_names.index(metric_name)
elif isinstance(metric_name, int):
return metric_name
else:
raise ValueError("Could not identify metric.")
| 18.181818 | 54 | 0.638333 | 73 | 600 | 5.068493 | 0.493151 | 0.243243 | 0.162162 | 0.12973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020595 | 0.271667 | 600 | 32 | 55 | 18.75 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.091667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.08 | 0 | 0.48 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7306a81bcc0bef579d78b882fb2bc110b0f6bf5f | 1,506 | py | Python | fannypack/utils/_deprecation.py | brentyi/hfdsajk | 2888aa5d969824ac1e1a528264674ece3f4703f9 | [
"MIT"
] | 5 | 2020-03-13T21:34:31.000Z | 2020-10-27T15:18:17.000Z | fannypack/utils/_deprecation.py | brentyi/hfdsajk | 2888aa5d969824ac1e1a528264674ece3f4703f9 | [
"MIT"
] | 2 | 2020-06-17T11:06:56.000Z | 2020-10-25T03:06:18.000Z | fannypack/utils/_deprecation.py | brentyi/hfdsajk | 2888aa5d969824ac1e1a528264674ece3f4703f9 | [
"MIT"
] | 4 | 2020-03-15T01:55:18.000Z | 2022-01-21T22:06:48.000Z | import warnings
from typing import Callable, Optional, TypeVar, cast
CallableType = TypeVar("CallableType", bound=Callable)
def deprecation_wrapper(message: str, function_or_class: CallableType) -> CallableType:
"""Creates a wrapper for a deprecated function or class. Prints a warning
the first time a function or class is called.
Args:
message (str): Warning message.
function_or_class (CallableType): Function or class to wrap.
Returns:
CallableType: Wrapped function/class.
"""
warned = False
def curried(*args, **kwargs): # pragma: no cover
nonlocal warned
if not warned:
warnings.warn(message, DeprecationWarning, stacklevel=2)
warned = True
return function_or_class(*args, **kwargs)
return cast(CallableType, curried)
def new_name_wrapper(
old_name: str, new_name: str, function_or_class: CallableType
) -> CallableType:
"""Creates a wrapper for a renamed function or class. Prints a warning the first
time a function or class is called with the old name.
Args:
old_name (str): Old name of function or class. Printed in warning.
new_name (str): New name of function or class. Printed in warning.
function_or_class (CallableType): Function or class to wrap.
Returns:
CallableType: Wrapped function/class.
"""
return deprecation_wrapper(
f"{old_name} is deprecated! Use {new_name} instead.", function_or_class
)
| 31.375 | 87 | 0.688579 | 190 | 1,506 | 5.342105 | 0.321053 | 0.137931 | 0.206897 | 0.106404 | 0.492611 | 0.492611 | 0.492611 | 0.492611 | 0.419704 | 0.419704 | 0 | 0.000869 | 0.235724 | 1,506 | 47 | 88 | 32.042553 | 0.880973 | 0.459495 | 0 | 0 | 0 | 0 | 0.082321 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.111111 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
731bee30cd85e8877da89abd76314f81852e3106 | 730 | py | Python | algoplex/api/order.py | dmitryaleks/algo-plex | c83421642fc1ac11e558126ec73909b175b07862 | [
"BSD-2-Clause"
] | null | null | null | algoplex/api/order.py | dmitryaleks/algo-plex | c83421642fc1ac11e558126ec73909b175b07862 | [
"BSD-2-Clause"
] | null | null | null | algoplex/api/order.py | dmitryaleks/algo-plex | c83421642fc1ac11e558126ec73909b175b07862 | [
"BSD-2-Clause"
] | null | null | null | class Order():
def __init__(self, side, pair, size, price, stop_loss_price, id):
self.side = side
self.pair = pair
self.size = size
self.price = price
self.stop_loss_price = stop_loss_price
self.id = id
self.fills = []
def define_id(self, id):
self.id = id
def add_fill(self, execution):
self.fills.append(execution)
def get_fill_price(self):
nominator = sum(map(lambda f: f.size * f.price, self.fills))
fill_price = nominator/self.get_filled_quantity()
return fill_price
def get_filled_quantity(self):
return sum(map(lambda f: f.size, self.fills))
def get_fills(self):
return self.fills
| 26.071429 | 69 | 0.609589 | 101 | 730 | 4.19802 | 0.267327 | 0.106132 | 0.091981 | 0.084906 | 0.084906 | 0.084906 | 0 | 0 | 0 | 0 | 0 | 0 | 0.286301 | 730 | 27 | 70 | 27.037037 | 0.81382 | 0 | 0 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.095238 | 0.47619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
73407d37b530e40b65a5d94f1bc5d3086355dead | 1,084 | py | Python | numba/tests/__init__.py | mawanda-jun/numba | 8c6658375c1f8fe50e1a5ccd11d4e7bf5a8053de | [
"BSD-2-Clause",
"Apache-2.0"
] | 1 | 2019-12-04T07:13:18.000Z | 2019-12-04T07:13:18.000Z | numba/tests/__init__.py | mawanda-jun/numba | 8c6658375c1f8fe50e1a5ccd11d4e7bf5a8053de | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | numba/tests/__init__.py | mawanda-jun/numba | 8c6658375c1f8fe50e1a5ccd11d4e7bf5a8053de | [
"BSD-2-Clause",
"Apache-2.0"
] | 1 | 2020-09-18T15:03:46.000Z | 2020-09-18T15:03:46.000Z | from numba import unittest_support as unittest
import gc
from os.path import dirname, join
import multiprocessing
import sys
import time
import warnings
from unittest.suite import TestSuite
from numba.testing import load_testsuite
from numba.testing import ddt # for backward compatibility
try:
import faulthandler
except ImportError:
faulthandler = None
else:
try:
# May fail in IPython Notebook with UnsupportedOperation
faulthandler.enable()
except Exception as e:
msg = "Failed to enable faulthandler due to:\n{err}"
warnings.warn(msg.format(err=e))
def load_tests(loader, tests, pattern):
suite = TestSuite()
suite.addTests(load_testsuite(loader, dirname(__file__)))
# Numba CUDA tests are located in a separate directory:
cuda_dir = join(dirname(dirname(__file__)), 'cuda/tests')
suite.addTests(loader.discover(cuda_dir))
# Numba ROC tests are located in a separate directory
roc_dir = join(dirname(dirname(__file__)), 'roc/tests')
suite.addTests(loader.discover(roc_dir))
return suite
| 27.1 | 64 | 0.737085 | 142 | 1,084 | 5.485915 | 0.443662 | 0.03466 | 0.046213 | 0.064185 | 0.315789 | 0.089859 | 0.089859 | 0 | 0 | 0 | 0 | 0 | 0.186347 | 1,084 | 39 | 65 | 27.794872 | 0.88322 | 0.172509 | 0 | 0.071429 | 0 | 0 | 0.070707 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.428571 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
734b4343088715a23f5435206ac174b0bc22413c | 11,371 | py | Python | tfx/orchestration/portable/execution_publish_utils.py | johnPertoft/tfx | c6335684a54651adbcbe50aa52918b9b9948326e | [
"Apache-2.0"
] | null | null | null | tfx/orchestration/portable/execution_publish_utils.py | johnPertoft/tfx | c6335684a54651adbcbe50aa52918b9b9948326e | [
"Apache-2.0"
] | null | null | null | tfx/orchestration/portable/execution_publish_utils.py | johnPertoft/tfx | c6335684a54651adbcbe50aa52918b9b9948326e | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Portable library for registering and publishing executions."""
import copy
import os
from typing import List, Mapping, MutableMapping, Optional, Sequence, cast
from absl import logging
from tfx import types
from tfx.orchestration import metadata
from tfx.orchestration.portable.mlmd import execution_lib
from tfx.proto.orchestration import execution_result_pb2
from ml_metadata.proto import metadata_store_pb2
def _check_validity(new_artifact: metadata_store_pb2.Artifact,
original_artifact: types.Artifact,
has_multiple_artifacts: bool) -> None:
"""Check the validity of new artifact against the original artifact."""
if new_artifact.type_id != original_artifact.type_id:
raise RuntimeError('Executor output should not change artifact type.')
if has_multiple_artifacts:
# If there are multiple artifacts in the executor output, their URIs should
# be a direct sub-dir of the system generated URI.
if os.path.dirname(new_artifact.uri) != original_artifact.uri:
raise RuntimeError(
'When there are multiple artifacts to publish, their URIs '
'should be direct sub-directories of the URI of the system generated '
'artifact.')
else:
# If there is only one output artifact, its URI should not be changed
if new_artifact.uri != original_artifact.uri:
# TODO(b/175426744): Data Binder will modify the uri.
logging.warning(
'When there is one artifact to publish, the URI of it should be '
'identical to the URI of system generated artifact.')
def publish_cached_execution(
metadata_handler: metadata.Metadata,
contexts: Sequence[metadata_store_pb2.Context],
execution_id: int,
output_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None,
) -> None:
"""Marks an existing execution as using cached outputs from a previous execution.
Args:
metadata_handler: A handler to access MLMD.
contexts: MLMD contexts to associated with the execution.
execution_id: The id of the execution.
output_artifacts: Output artifacts of the execution. Each artifact will be
linked with the execution through an event with type OUTPUT.
"""
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.CACHED
execution_lib.put_execution(
metadata_handler,
execution,
contexts,
input_artifacts=None,
output_artifacts=output_artifacts)
def _set_execution_result_if_not_empty(
executor_output: Optional[execution_result_pb2.ExecutorOutput],
execution: metadata_store_pb2.Execution) -> bool:
"""Sets execution result as a custom property of the execution."""
if executor_output and (executor_output.execution_result.result_message or
executor_output.execution_result.metadata_details or
executor_output.execution_result.code):
# TODO(b/190001754): Consider either switching to base64 encoding or using
# a proto descriptor pool to circumvent TypeError which may be raised when
# converting embedded `Any` protos.
try:
execution_lib.set_execution_result(executor_output.execution_result,
execution)
except TypeError:
logging.exception(
'Skipped setting execution_result as custom property of the '
'execution due to error')
def publish_succeeded_execution(
metadata_handler: metadata.Metadata,
execution_id: int,
contexts: Sequence[metadata_store_pb2.Context],
output_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None,
executor_output: Optional[execution_result_pb2.ExecutorOutput] = None
) -> Optional[MutableMapping[str, List[types.Artifact]]]:
"""Marks an existing execution as success.
Also publishes the output artifacts produced by the execution. This method
will also merge the executor produced info into system generated output
artifacts. The `last_know_state` of the execution will be changed to
`COMPLETE` and the output artifacts will be marked as `LIVE`.
Args:
metadata_handler: A handler to access MLMD.
execution_id: The id of the execution to mark successful.
contexts: MLMD contexts to associated with the execution.
output_artifacts: Output artifacts skeleton of the execution, generated by
the system. Each artifact will be linked with the execution through an
event with type OUTPUT.
executor_output: Executor outputs. `executor_output.output_artifacts` will
be used to update system-generated output artifacts passed in through
`output_artifacts` arg. There are three contraints to the update: 1. The
keys in `executor_output.output_artifacts` are expected to be a subset
of the system-generated output artifacts dict. 2. An update to a certain
key should contains all the artifacts under that key. 3. An update to an
artifact should not change the type of the artifact.
Returns:
The maybe updated output_artifacts, note that only outputs whose key are in
executor_output will be updated and others will be untouched. That said,
it can be partially updated.
Raises:
RuntimeError: if the executor output to a output channel is partial.
"""
output_artifacts = copy.deepcopy(output_artifacts) or {}
output_artifacts = cast(MutableMapping[str, List[types.Artifact]],
output_artifacts)
if executor_output:
if not set(executor_output.output_artifacts.keys()).issubset(
output_artifacts.keys()):
raise RuntimeError(
'Executor output %s contains more keys than output skeleton %s.' %
(executor_output, output_artifacts))
for key, artifact_list in output_artifacts.items():
if key not in executor_output.output_artifacts:
continue
updated_artifact_list = executor_output.output_artifacts[key].artifacts
# We assume the original output dict must include at least one output
# artifact and all artifacts in the list share the same type.
original_artifact = artifact_list[0]
# Update the artifact list with what's in the executor output
artifact_list.clear()
# TODO(b/175426744): revisit this:
# 1) Whether multiple output is needed or not after TFX componets
# are upgraded.
# 2) If multiple output are needed and is a common practice, should we
# use driver instead to create the list of output artifact instead
# of letting executor to create them.
for proto_artifact in updated_artifact_list:
_check_validity(proto_artifact, original_artifact,
len(updated_artifact_list) > 1)
python_artifact = types.Artifact(original_artifact.artifact_type)
python_artifact.set_mlmd_artifact(proto_artifact)
artifact_list.append(python_artifact)
# Marks output artifacts as LIVE.
for artifact_list in output_artifacts.values():
for artifact in artifact_list:
artifact.mlmd_artifact.state = metadata_store_pb2.Artifact.LIVE
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.COMPLETE
_set_execution_result_if_not_empty(executor_output, execution)
execution_lib.put_execution(
metadata_handler, execution, contexts, output_artifacts=output_artifacts)
return output_artifacts
def publish_failed_execution(
metadata_handler: metadata.Metadata,
contexts: Sequence[metadata_store_pb2.Context],
execution_id: int,
executor_output: Optional[execution_result_pb2.ExecutorOutput] = None
) -> None:
"""Marks an existing execution as failed.
Args:
metadata_handler: A handler to access MLMD.
contexts: MLMD contexts to associated with the execution.
execution_id: The id of the execution.
executor_output: The output of executor.
"""
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.FAILED
_set_execution_result_if_not_empty(executor_output, execution)
execution_lib.put_execution(metadata_handler, execution, contexts)
def publish_internal_execution(
metadata_handler: metadata.Metadata,
contexts: Sequence[metadata_store_pb2.Context],
execution_id: int,
output_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None
) -> None:
"""Marks an exeisting execution as as success and links its output to an INTERNAL_OUTPUT event.
Args:
metadata_handler: A handler to access MLMD.
contexts: MLMD contexts to associated with the execution.
execution_id: The id of the execution.
output_artifacts: Output artifacts of the execution. Each artifact will be
linked with the execution through an event with type INTERNAL_OUTPUT.
"""
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.COMPLETE
execution_lib.put_execution(
metadata_handler,
execution,
contexts,
output_artifacts=output_artifacts,
output_event_type=metadata_store_pb2.Event.INTERNAL_OUTPUT)
def register_execution(
metadata_handler: metadata.Metadata,
execution_type: metadata_store_pb2.ExecutionType,
contexts: Sequence[metadata_store_pb2.Context],
input_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None,
exec_properties: Optional[Mapping[str, types.Property]] = None,
) -> metadata_store_pb2.Execution:
"""Registers a new execution in MLMD.
Along with the execution:
- the input artifacts will be linked to the execution.
- the contexts will be linked to both the execution and its input artifacts.
Args:
metadata_handler: A handler to access MLMD.
execution_type: The type of the execution.
contexts: MLMD contexts to associated with the execution.
input_artifacts: Input artifacts of the execution. Each artifact will be
linked with the execution through an event.
exec_properties: Execution properties. Will be attached to the execution.
Returns:
An MLMD execution that is registered in MLMD, with id populated.
"""
execution = execution_lib.prepare_execution(
metadata_handler, execution_type, metadata_store_pb2.Execution.RUNNING,
exec_properties)
return execution_lib.put_execution(
metadata_handler, execution, contexts, input_artifacts=input_artifacts)
| 43.734615 | 97 | 0.735731 | 1,459 | 11,371 | 5.559973 | 0.202193 | 0.068417 | 0.033531 | 0.021573 | 0.406065 | 0.359098 | 0.325567 | 0.312007 | 0.273422 | 0.253575 | 0 | 0.007182 | 0.204028 | 11,371 | 259 | 98 | 43.903475 | 0.889073 | 0.421775 | 0 | 0.341085 | 0 | 0 | 0.068609 | 0 | 0 | 0 | 0 | 0.011583 | 0 | 1 | 0.054264 | false | 0 | 0.069767 | 0 | 0.139535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
735799fe024faf41da595642a3d8bdb3ba238a42 | 1,693 | py | Python | tools/SDKTool/src/ui/dialog/progress_bar_dialog.py | Passer-D/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 1,210 | 2020-08-18T07:57:36.000Z | 2022-03-31T15:06:05.000Z | tools/SDKTool/src/ui/dialog/progress_bar_dialog.py | guokaiSama/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 37 | 2020-08-24T02:48:38.000Z | 2022-01-30T06:41:52.000Z | tools/SDKTool/src/ui/dialog/progress_bar_dialog.py | guokaiSama/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 275 | 2020-08-18T08:35:16.000Z | 2022-03-31T15:06:07.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making GameAISDK available.
This source code file is licensed under the GNU General Public License Version 3.
For full details, please refer to the file "LICENSE.txt" which is provided as part of this source code package.
Copyright (C) 2020 THL A29 Limited, a Tencent company. All rights reserved.
"""
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QWidget, QProgressDialog
class ProgressBarDialog(QWidget):
def __init__(self, title='', label='', minValue=0, maxValue=100, parent=None):
super(ProgressBarDialog, self).__init__(parent)
self.process_bar = QProgressDialog(self)
self.set_bar_window_title(title)
self.set_label_text(label)
self.set_min_value(minValue)
self.set_max_value(maxValue)
self.process_bar.setWindowModality(Qt.WindowModal)
self.setGeometry(800, 300, 580, 570)
self.process_bar.canceled.connect(self.close_bar)
def set_bar_window_title(self, text):
self.process_bar.setWindowTitle(text)
self.setWindowTitle(text)
def set_label_text(self, text):
self.process_bar.setLabelText(text)
def set_min_value(self, minValue):
self.process_bar.setMinimum(minValue)
def set_max_value(self, maxvalue):
self.process_bar.setMaximum(maxvalue)
def set_value(self, value):
self.process_bar.setValue(value)
def close_bar(self):
self.process_bar.close()
def reset_bar(self):
self.process_bar = None
def show(self):
self.process_bar.show()
def is_valid(self):
return bool(self.process_bar)
| 31.351852 | 111 | 0.705848 | 228 | 1,693 | 5.052632 | 0.438596 | 0.114583 | 0.145833 | 0.046875 | 0.074653 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.201418 | 1,693 | 53 | 112 | 31.943396 | 0.83284 | 0.225635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0 | 0.0625 | 0.03125 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
73623a0c8d94829ad21399f5bae6f22979a769e7 | 1,562 | py | Python | api/web/apps/auth/views.py | procool/itstructure | 6aa3a43e1a759f5509f130ddf911779645dc89d0 | [
"BSD-2-Clause"
] | null | null | null | api/web/apps/auth/views.py | procool/itstructure | 6aa3a43e1a759f5509f130ddf911779645dc89d0 | [
"BSD-2-Clause"
] | null | null | null | api/web/apps/auth/views.py | procool/itstructure | 6aa3a43e1a759f5509f130ddf911779645dc89d0 | [
"BSD-2-Clause"
] | null | null | null | from flask import url_for
from flaskcbv.view import View
from flaskcbv.conf import settings
from misc.mixins import HelperMixin
from misc.views import JSONView
class authView(JSONView):
def helper(self):
return """Authorizaion handler
Use "login" and "passwd" arguments by GET or POST to get session
"""
def get(self, *args, **kwargs):
return self.post(*args, **kwargs)
def post(self, *args, **kwargs):
try:
username = self.get_argument_smart('username')
passwd = self.get_argument_smart('password')
except Exception as err:
self.abort_error(errno=-1, error="wrong_params", details="set arguments: 'username', 'passwd'")
r = settings._BB_CLIENT.login(username, passwd)
answ = r.as_dict
del answ["cmd"]
del answ["token"]
self.abort_error(**answ)
class sessionView(JSONView):
def helper(self):
return """Session check handler
Use "session" argument by GET or POST to check your session
"""
def get(self, *args, **kwargs):
return self.post(*args, **kwargs)
def post(self, *args, **kwargs):
try:
session = self.get_argument_smart('session')
except Exception as err:
self.abort_error(errno=-1, error="wrong_params", details="set argument: 'session'")
r = settings._BB_CLIENT.session(session)
answ = r.as_dict
del answ["cmd"]
del answ["token"]
self.abort_error(**answ)
| 26.474576 | 107 | 0.608195 | 191 | 1,562 | 4.874346 | 0.329843 | 0.064447 | 0.06015 | 0.064447 | 0.498389 | 0.41246 | 0.41246 | 0.41246 | 0.41246 | 0.41246 | 0 | 0.001773 | 0.277849 | 1,562 | 58 | 108 | 26.931034 | 0.823582 | 0 | 0 | 0.55 | 0 | 0 | 0.206675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0.1 | 0.125 | 0.1 | 0.425 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7df75aa4524bb4f5a708857ab0d660fb8ccedfb8 | 603 | py | Python | math/0x04-convolutions_and_pooling/test/2-main.py | cbarros7/holbertonschool-machine_learning | 1edb4c253441f6319b86c9c590d1e7dd3fc32bf4 | [
"MIT"
] | 1 | 2022-03-09T19:12:22.000Z | 2022-03-09T19:12:22.000Z | math/0x04-convolutions_and_pooling/test/2-main.py | cbarros7/holbertonschool-machine_learning | 1edb4c253441f6319b86c9c590d1e7dd3fc32bf4 | [
"MIT"
] | null | null | null | math/0x04-convolutions_and_pooling/test/2-main.py | cbarros7/holbertonschool-machine_learning | 1edb4c253441f6319b86c9c590d1e7dd3fc32bf4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import matplotlib.pyplot as plt
import numpy as np
convolve_grayscale_padding = __import__(
'2-convolve_grayscale_padding').convolve_grayscale_padding
if __name__ == '__main__':
dataset = np.load('../../supervised_learning/data/MNIST.npz')
images = dataset['X_train']
print(images.shape)
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
images_conv = convolve_grayscale_padding(images, kernel, (2, 4))
print(images_conv.shape)
plt.imshow(images[0], cmap='gray')
plt.show()
plt.imshow(images_conv[0], cmap='gray')
plt.show()
| 27.409091 | 68 | 0.6733 | 84 | 603 | 4.535714 | 0.47619 | 0.178478 | 0.251969 | 0.020997 | 0.107612 | 0.023622 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0.162521 | 603 | 21 | 69 | 28.714286 | 0.724752 | 0.034826 | 0 | 0.133333 | 0 | 0 | 0.156627 | 0.11704 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.133333 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7dfda8cef5923a2a0d78158e8c874838389cfd46 | 3,678 | py | Python | src/oci/dns/models/external_master.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 249 | 2017-09-11T22:06:05.000Z | 2022-03-04T17:09:29.000Z | src/oci/dns/models/external_master.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 228 | 2017-09-11T23:07:26.000Z | 2022-03-23T10:58:50.000Z | src/oci/dns/models/external_master.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 224 | 2017-09-27T07:32:43.000Z | 2022-03-25T16:55:42.000Z | # coding: utf-8
# Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class ExternalMaster(object):
"""
An external master name server used as the source of zone data.
"""
def __init__(self, **kwargs):
"""
Initializes a new ExternalMaster object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param address:
The value to assign to the address property of this ExternalMaster.
:type address: str
:param port:
The value to assign to the port property of this ExternalMaster.
:type port: int
:param tsig_key_id:
The value to assign to the tsig_key_id property of this ExternalMaster.
:type tsig_key_id: str
"""
self.swagger_types = {
'address': 'str',
'port': 'int',
'tsig_key_id': 'str'
}
self.attribute_map = {
'address': 'address',
'port': 'port',
'tsig_key_id': 'tsigKeyId'
}
self._address = None
self._port = None
self._tsig_key_id = None
@property
def address(self):
"""
**[Required]** Gets the address of this ExternalMaster.
The server's IP address (IPv4 or IPv6).
:return: The address of this ExternalMaster.
:rtype: str
"""
return self._address
@address.setter
def address(self, address):
"""
Sets the address of this ExternalMaster.
The server's IP address (IPv4 or IPv6).
:param address: The address of this ExternalMaster.
:type: str
"""
self._address = address
@property
def port(self):
"""
Gets the port of this ExternalMaster.
The server's port. Port value must be a value of 53, otherwise omit
the port value.
:return: The port of this ExternalMaster.
:rtype: int
"""
return self._port
@port.setter
def port(self, port):
"""
Sets the port of this ExternalMaster.
The server's port. Port value must be a value of 53, otherwise omit
the port value.
:param port: The port of this ExternalMaster.
:type: int
"""
self._port = port
@property
def tsig_key_id(self):
"""
Gets the tsig_key_id of this ExternalMaster.
The OCID of the TSIG key.
:return: The tsig_key_id of this ExternalMaster.
:rtype: str
"""
return self._tsig_key_id
@tsig_key_id.setter
def tsig_key_id(self, tsig_key_id):
"""
Sets the tsig_key_id of this ExternalMaster.
The OCID of the TSIG key.
:param tsig_key_id: The tsig_key_id of this ExternalMaster.
:type: str
"""
self._tsig_key_id = tsig_key_id
def __repr__(self):
return formatted_flat_dict(self)
def __eq__(self, other):
if other is None:
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
| 27.244444 | 245 | 0.609027 | 474 | 3,678 | 4.537975 | 0.274262 | 0.065086 | 0.075314 | 0.066946 | 0.452348 | 0.308229 | 0.261274 | 0.18689 | 0.18689 | 0.18689 | 0 | 0.010289 | 0.312942 | 3,678 | 134 | 246 | 27.447761 | 0.840918 | 0.503535 | 0 | 0.068182 | 0 | 0 | 0.051957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.227273 | false | 0 | 0.045455 | 0.045455 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4024d84d4513279dde8eeb7b78e3491e9770d6e | 1,038 | py | Python | app/api/v1/models/user_model.py | munniomer/Send-IT-Api-v1 | 17041c987638c7e47c7c2ebed29bf7e2b5156bed | [
"CNRI-Python",
"OML"
] | null | null | null | app/api/v1/models/user_model.py | munniomer/Send-IT-Api-v1 | 17041c987638c7e47c7c2ebed29bf7e2b5156bed | [
"CNRI-Python",
"OML"
] | null | null | null | app/api/v1/models/user_model.py | munniomer/Send-IT-Api-v1 | 17041c987638c7e47c7c2ebed29bf7e2b5156bed | [
"CNRI-Python",
"OML"
] | 1 | 2019-02-05T07:44:19.000Z | 2019-02-05T07:44:19.000Z | users = []
class UserModel(object):
"""Class user models."""
def __init__(self):
self.db = users
def add_user(self, fname, lname, email, phone, password, confirm_password, city):
""" Method for saving user to the dictionary """
payload = {
"userId": len(self.db)+1,
"fname": fname,
"lname": lname,
"email": email,
"phone": phone,
"password": password,
"confirm_password": confirm_password,
"city": city,
}
self.db.append(payload)
return self.db
def check_email(self, email):
"""Method for checking if user email exist"""
user = [user for user in users if user['email'] == email]
if user:
return True
return False
def check_user(self, userId):
"""Method for checking if user exist"""
user = [user for user in users if user['userId'] == userId]
if user:
return True
return False
| 26.615385 | 85 | 0.531792 | 116 | 1,038 | 4.672414 | 0.318966 | 0.066421 | 0.127306 | 0.099631 | 0.306273 | 0.221402 | 0.121771 | 0.121771 | 0.121771 | 0 | 0 | 0.001493 | 0.354528 | 1,038 | 38 | 86 | 27.315789 | 0.807463 | 0.129094 | 0 | 0.222222 | 0 | 0 | 0.07378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0.111111 | 0 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b40aad26fdc784cc5dfaf249f1c167e4160e4887 | 2,279 | py | Python | Exemple.py | LVWolff/Python_Lesson_2 | ece186f988c94a1aaa1656a1e6e1093c3d5b6251 | [
"MIT"
] | null | null | null | Exemple.py | LVWolff/Python_Lesson_2 | ece186f988c94a1aaa1656a1e6e1093c3d5b6251 | [
"MIT"
] | null | null | null | Exemple.py | LVWolff/Python_Lesson_2 | ece186f988c94a1aaa1656a1e6e1093c3d5b6251 | [
"MIT"
] | null | null | null | #Задачи на циклы и оператор условия------
#----------------------------------------
'''
Задача 1
Вывести на экран циклом пять строк из нулей, причем каждая строка должна быть пронумерована.
'''
for i in range(1, 6):
print(i, '0000000000000000000000000000000000000000000')
'''
Задача 2
Пользователь в цикле вводит 10 цифр. Найти количество введеных пользователем цифр 5.
'''
count = 0
for i in range(10):
user_data = int(input('Введите число: '))
if user_data == 5:
count += 1
print(count)
'''
Задача 3
Найти сумму ряда чисел от 1 до 100. Полученный результат вывести на экран.
'''
sum = 0
for i in range(1, 101):
sum += i
print(sum)
'''
Задача 4
Найти произведение ряда чисел от 1 до 10. Полученный результат вывести на экран.
'''
proiz = 1
for i in range(2, 11):
proiz *= i
print(proiz)
'''
Задача 5
Вывести цифры числа на каждой строчке.
'''
integer_number = 123456
start_del = len(str(integer_number)) - 1
delitel = 10 ** start_del
#print(integer_number % delitel, integer_number // delitel)
while integer_number > 0:
print(int(integer_number // delitel))
integer_number = integer_number % delitel
delitel /= 10
'''
Задача 6
Найти сумму цифр числа.
'''
integer_number = 123456
sum = 0
while integer_number > 0:
sum += integer_number % 10
integer_number = integer_number // 10
print(sum)
'''
Задача 7
Найти произведение цифр числа.
'''
integer_number = 123456
proiz = 1
while integer_number > 0:
proiz *= integer_number % 10
integer_number = integer_number // 10
print(proiz)
'''
Задача 8
Дать ответ на вопрос: есть ли среди цифр числа 5?
'''
integer_number = 125254
while integer_number > 0:
if integer_number % 10 == 5:
print('Yes')
break
integer_number = integer_number // 10
else:
print('No')
'''
Задача 9
Найти максимальную цифру в числе
'''
integer_number = 125278954
max_num = integer_number % 10
while integer_number > 0:
max_num = max(max_num, integer_number % 10)
integer_number = integer_number // 10
print(max_num)
'''
Задача 10
Найти количество цифр 5 в числе
'''
integer_number = 125278954
count_num = 0
while integer_number > 0:
if integer_number % 10 == 5:
count_num += 1
integer_number = integer_number // 10
print(count_num)
| 18.087302 | 92 | 0.67749 | 321 | 2,279 | 4.669782 | 0.302181 | 0.294863 | 0.110073 | 0.076051 | 0.424283 | 0.167445 | 0.14543 | 0.14543 | 0.14543 | 0 | 0 | 0.089906 | 0.204476 | 2,279 | 125 | 93 | 18.232 | 0.7369 | 0.105309 | 0 | 0.464286 | 0 | 0 | 0.043994 | 0.030028 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.196429 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b40c71ed0a4ab0b122f61556dae6f792302c5678 | 776 | py | Python | lepiota/lepiota/urls.py | sgelias/lepiota | 4b30aa25ac5308229f6d41f1720e1af02557826e | [
"MIT"
] | null | null | null | lepiota/lepiota/urls.py | sgelias/lepiota | 4b30aa25ac5308229f6d41f1720e1af02557826e | [
"MIT"
] | null | null | null | lepiota/lepiota/urls.py | sgelias/lepiota | 4b30aa25ac5308229f6d41f1720e1af02557826e | [
"MIT"
] | null | null | null | from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import path, re_path
from django.conf.urls import include
from django.views.generic import TemplateView, RedirectView
urlpatterns = [
# Administration
path('admin/', admin.site.urls),
# Accounts
path('account/', include('account.urls', namespace='account')),
# Oauth2
path('api/v1/o/', include('oauth.urls', namespace='oauth2_provider')),
# General purpose
path('welcome/', TemplateView.as_view(template_name="welcome.html")),
path('', RedirectView.as_view(url="/welcome/")),
re_path(r'^$', RedirectView.as_view(url="/welcome/")),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
| 29.846154 | 74 | 0.716495 | 97 | 776 | 5.628866 | 0.412371 | 0.10989 | 0.076923 | 0.065934 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004484 | 0.137887 | 776 | 25 | 75 | 31.04 | 0.811659 | 0.059278 | 0 | 0 | 0 | 0 | 0.147586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b4130d04b43c706ebb56a9d6ede2201a268db5d7 | 7,913 | py | Python | tensorflow/contrib/training/python/training/hparam_test.py | DEVESHTARASIA/tensorflow | d3edb8c60ed4fd831d62833ed22f5c23486c561c | [
"Apache-2.0"
] | 384 | 2017-02-21T18:38:04.000Z | 2022-02-22T07:30:25.000Z | tensorflow/contrib/training/python/training/hparam_test.py | DEVESHTARASIA/tensorflow | d3edb8c60ed4fd831d62833ed22f5c23486c561c | [
"Apache-2.0"
] | 15 | 2017-03-01T20:18:43.000Z | 2020-05-07T10:33:51.000Z | tensorflow/contrib/training/python/training/hparam_test.py | DEVESHTARASIA/tensorflow | d3edb8c60ed4fd831d62833ed22f5c23486c561c | [
"Apache-2.0"
] | 81 | 2017-02-21T19:31:19.000Z | 2022-02-22T07:30:24.000Z | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for hparam."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import six
from tensorflow.contrib.training.python.training import hparam
from tensorflow.python.platform import test
class HParamsTest(test.TestCase):
def _assertDictEquals(self, d1, d2):
self.assertEqual(len(d1), len(d2))
for k, v in six.iteritems(d1):
self.assertTrue(k in d2, k)
self.assertEquals(v, d2[k], d2[k])
def testEmpty(self):
hparams = hparam.HParams()
self._assertDictEquals({}, hparams.values())
hparams.parse('')
self._assertDictEquals({}, hparams.values())
with self.assertRaisesRegexp(ValueError, 'Unknown hyperparameter'):
hparams.parse('xyz=123')
def testSomeValues(self):
hparams = hparam.HParams(aaa=1, b=2.0, c_c='relu6')
self._assertDictEquals(
{'aaa': 1, 'b': 2.0, 'c_c': 'relu6'}, hparams.values())
expected_str = '[(\'aaa\', 1), (\'b\', 2.0), (\'c_c\', \'relu6\')]'
self.assertEquals(expected_str, str(hparams.__str__()))
self.assertEquals(expected_str, str(hparams))
self.assertEquals(1, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('relu6', hparams.c_c)
hparams.parse('aaa=12')
self._assertDictEquals(
{'aaa': 12, 'b': 2.0, 'c_c': 'relu6'}, hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('relu6', hparams.c_c)
hparams.parse('c_c=relu4,b=-2.0e10')
self._assertDictEquals({'aaa': 12, 'b': -2.0e10, 'c_c': 'relu4'},
hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(-2.0e10, hparams.b)
self.assertEquals('relu4', hparams.c_c)
hparams.parse('c_c=,b=0,')
self._assertDictEquals({'aaa': 12, 'b': 0, 'c_c': ''}, hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(0.0, hparams.b)
self.assertEquals('', hparams.c_c)
hparams.parse('c_c=2.3",b=+2,')
self.assertEquals(2.0, hparams.b)
self.assertEquals('2.3"', hparams.c_c)
with self.assertRaisesRegexp(ValueError, 'Unknown hyperparameter'):
hparams.parse('x=123')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=poipoi')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=1.0')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=12x')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=relu')
with self.assertRaisesRegexp(ValueError, 'Must not pass a list'):
hparams.parse('aaa=[123]')
self.assertEquals(12, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('2.3"', hparams.c_c)
# Exports to proto.
hparam_def = hparams.to_proto()
# Imports from proto.
hparams2 = hparam.HParams(hparam_def=hparam_def)
# Verifies that all hparams are restored.
self.assertEquals(12, hparams2.aaa)
self.assertEquals(2.0, hparams2.b)
self.assertEquals('2.3"', hparams2.c_c)
def testBoolParsing(self):
for value in 'true', 'false', 'True', 'False', '1', '0':
for initial in False, True:
hparams = hparam.HParams(use_gpu=initial)
hparams.parse('use_gpu=' + value)
self.assertEqual(hparams.use_gpu, value in ['True', 'true', '1'])
# Exports to proto.
hparam_def = hparams.to_proto()
# Imports from proto.
hparams2 = hparam.HParams(hparam_def=hparam_def)
self.assertEquals(hparams.use_gpu, hparams2.use_gpu)
# Check that hparams2.use_gpu is a bool rather than an int.
# The assertEquals() call above won't catch this, since
# (0 == False) and (1 == True) in Python.
self.assertEquals(bool, type(hparams2.use_gpu))
def testBoolParsingFail(self):
hparams = hparam.HParams(use_gpu=True)
with self.assertRaisesRegexp(ValueError, r'Could not parse.*use_gpu'):
hparams.parse('use_gpu=yep')
def testLists(self):
hparams = hparam.HParams(aaa=[1], b=[2.0, 3.0], c_c=['relu6'])
self._assertDictEquals({'aaa': [1], 'b': [2.0, 3.0], 'c_c': ['relu6']},
hparams.values())
self.assertEquals([1], hparams.aaa)
self.assertEquals([2.0, 3.0], hparams.b)
self.assertEquals(['relu6'], hparams.c_c)
hparams.parse('aaa=[12]')
self.assertEquals([12], hparams.aaa)
hparams.parse('aaa=[12,34,56]')
self.assertEquals([12, 34, 56], hparams.aaa)
hparams.parse('c_c=[relu4,relu12],b=[1.0]')
self.assertEquals(['relu4', 'relu12'], hparams.c_c)
self.assertEquals([1.0], hparams.b)
hparams.parse('c_c=[],aaa=[-34]')
self.assertEquals([-34], hparams.aaa)
self.assertEquals([], hparams.c_c)
hparams.parse('c_c=[_12,3\'4"],aaa=[+3]')
self.assertEquals([3], hparams.aaa)
self.assertEquals(['_12', '3\'4"'], hparams.c_c)
with self.assertRaisesRegexp(ValueError, 'Unknown hyperparameter'):
hparams.parse('x=[123]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=[poipoi]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=[1.0]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=[12x]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=[relu]')
with self.assertRaisesRegexp(ValueError, 'Must pass a list'):
hparams.parse('aaa=123')
# Exports to proto.
hparam_def = hparams.to_proto()
# Imports from proto.
hparams2 = hparam.HParams(hparam_def=hparam_def)
# Verifies that all hparams are restored.
self.assertEquals([3], hparams2.aaa)
self.assertEquals([1.0], hparams2.b)
self.assertEquals(['_12', '3\'4"'], hparams2.c_c)
def testJson(self):
hparams = hparam.HParams(aaa=1, b=2.0, c_c='relu6', d=True)
self._assertDictEquals(
{'aaa': 1, 'b': 2.0, 'c_c': 'relu6', 'd': True}, hparams.values())
self.assertEquals(1, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('relu6', hparams.c_c)
hparams.parse_json('{"aaa": 12, "b": 3.0, "c_c": "relu4", "d": false}')
self._assertDictEquals(
{'aaa': 12, 'b': 3.0, 'c_c': 'relu4', 'd': False}, hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(3.0, hparams.b)
self.assertEquals('relu4', hparams.c_c)
json_str = hparams.to_json()
hparams2 = hparam.HParams(aaa=10, b=20.0, c_c='hello', d=False)
hparams2.parse_json(json_str)
self.assertEquals(12, hparams2.aaa)
self.assertEquals(3.0, hparams2.b)
self.assertEquals('relu4', hparams2.c_c)
self.assertEquals(False, hparams2.d)
def testNonProtoFails(self):
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def=1)
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def=1.0)
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def='hello')
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def=[1, 2, 3])
if __name__ == '__main__':
test.main()
| 40.372449 | 80 | 0.65841 | 1,052 | 7,913 | 4.85076 | 0.162548 | 0.153635 | 0.091711 | 0.098765 | 0.627474 | 0.576132 | 0.55242 | 0.524201 | 0.47913 | 0.396238 | 0 | 0.037315 | 0.173638 | 7,913 | 195 | 81 | 40.579487 | 0.74308 | 0.129534 | 0 | 0.331126 | 0 | 0.006623 | 0.112715 | 0.003791 | 0 | 0 | 0 | 0 | 0.529801 | 1 | 0.05298 | false | 0.013245 | 0.039735 | 0 | 0.099338 | 0.006623 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b415b852eb1504fe65a58d7db038c31b5386abda | 2,616 | py | Python | thelma/repositories/rdb/view.py | fogathmann/TheLMA | ac330a0005da4fea2f1387da9ff9938611ad1481 | [
"MIT"
] | 1 | 2020-07-12T22:47:58.000Z | 2020-07-12T22:47:58.000Z | thelma/repositories/rdb/view.py | papagr/TheLMA | d2dc7a478ee5d24ccf3cc680888e712d482321d0 | [
"MIT"
] | null | null | null | thelma/repositories/rdb/view.py | papagr/TheLMA | d2dc7a478ee5d24ccf3cc680888e712d482321d0 | [
"MIT"
] | 1 | 2020-07-12T22:40:36.000Z | 2020-07-12T22:40:36.000Z | """
This file is part of the TheLMA (THe Laboratory Management Application) project.
See LICENSE.txt for licensing, CONTRIBUTORS.txt for contributor information.
Utilities to create/drop views.
Based on a recipe published in:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Views
"""
from sqlalchemy.sql import table
from sqlalchemy.ext import compiler
from sqlalchemy.schema import DDLElement
__docformat__ = 'reStructuredText en'
__all__ = ['CreateView',
'DropView',
'view_factory',
]
class CreateView(DDLElement):
def __init__(self, name, selectable): # pylint: disable=W0231
self.name = name
self.selectable = selectable
class DropView(DDLElement):
def __init__(self, name): # pylint: disable=W0231
self.name = name
@compiler.compiles(CreateView, 'postgresql')
def create_view_compile_postgresql(element, compiler, **kw): # pylint: disable=W0621,W0613
selection = compiler.sql_compiler.process(element.selectable)
stmt = "CREATE OR REPLACE VIEW %s AS %s" % (element.name, selection)
# FIXME: we should not combine the statement and params here.
# it is a SQLAlchemy bug... report it.
params = {}
for k, v in element.selectable.compile().params.iteritems():
params[k] = ("'%s'" % v) if isinstance(v, basestring) else v
return stmt % params
@compiler.compiles(CreateView, 'sqlite')
def create_view_compile_sqlite(element, compiler, **kw): # pylint: disable=W0621,W0613
# FIXME: duplicate code
# FIXME: it seems that there is a bug in SQLAlchemy and creating views
# this way emits an exception
selection = compiler.sql_compiler.process(element.selectable)
stmt = "CREATE VIEW %s AS %s" % (element.name, selection)
# FIXME: we should not combine the statement and params here.
# it is a SQLAlchemy bug... report it.
params = {}
for k, v in element.selectable.compile().params.iteritems():
params[k] = ("'%s'" % v) if isinstance(v, basestring) else v
return stmt % params
@compiler.compiles(DropView)
def drop_view_compile(element, compiler, **kw): # pylint: disable=W0621,W0613
return "DROP VIEW %s" % (element.name)
def view_factory(name, metadata, selectable):
if not hasattr(metadata, 'views'):
metadata.views = {}
metadata.views[name] = table(name)
for c in selectable.c:
c._make_proxy(metadata.views[name]) # pylint: disable=W0212
CreateView(name, selectable).execute_at('after-create', metadata)
DropView(name).execute_at('before-drop', metadata)
return metadata.views[name]
| 33.974026 | 90 | 0.69419 | 332 | 2,616 | 5.373494 | 0.35241 | 0.043722 | 0.028587 | 0.038677 | 0.44843 | 0.420404 | 0.386771 | 0.319507 | 0.319507 | 0.25 | 0 | 0.017086 | 0.194572 | 2,616 | 76 | 91 | 34.421053 | 0.829616 | 0.292049 | 0 | 0.27907 | 0 | 0 | 0.08952 | 0 | 0 | 0 | 0 | 0.013158 | 0 | 1 | 0.139535 | false | 0 | 0.069767 | 0.023256 | 0.348837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b41c9702fa909cdc15c31981b7aeb56a1df4c9bb | 534 | py | Python | src/commands/__init__.py | lysol/lvlss | ca068de516159be732d2cb8c4752dee4f4ef2e09 | [
"MIT"
] | null | null | null | src/commands/__init__.py | lysol/lvlss | ca068de516159be732d2cb8c4752dee4f4ef2e09 | [
"MIT"
] | null | null | null | src/commands/__init__.py | lysol/lvlss | ca068de516159be732d2cb8c4752dee4f4ef2e09 | [
"MIT"
] | null | null | null | from quit import Quit
from set_name import SetName
from who import Who
from say import Say
from look import Look
from go import Go
from take import Take
from inventory import Inventory
from drop import Drop
from make import Make
from landfill import Landfill
from item_info import ItemInfo
from script import SetScript, GetScript
from image_editing import ImageEditing
all_commands = (Quit, SetName, Who, Say, Look,
Go, Take, Inventory, Drop, Make, Landfill,
SetScript, GetScript, ItemInfo, ImageEditing)
| 28.105263 | 50 | 0.773408 | 77 | 534 | 5.311688 | 0.337662 | 0.08802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192884 | 534 | 18 | 51 | 29.666667 | 0.948956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.823529 | 0 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b41e6039b9544ca2bf93ee054b91393cabc444ec | 1,343 | py | Python | Wallpaper change.py | Arbazkhan4712/Wallpaper-Changer-using-Python | a221443bc7e7b5410f06653fa741b9d7af0fe10f | [
"MIT"
] | 4 | 2020-04-17T06:39:23.000Z | 2021-12-25T11:05:16.000Z | Wallpaper change.py | Arbazkhan4712/Wallpaper-Changer-using-Python | a221443bc7e7b5410f06653fa741b9d7af0fe10f | [
"MIT"
] | null | null | null | Wallpaper change.py | Arbazkhan4712/Wallpaper-Changer-using-Python | a221443bc7e7b5410f06653fa741b9d7af0fe10f | [
"MIT"
] | 3 | 2020-04-03T12:36:20.000Z | 2020-06-06T15:12:04.000Z | import ctypes
import os
import time
from pynput.keyboard import Key,Controller
import Bing
def closeTerminal():
keyboard=Controller()
keyboard.press(Key.alt)
keyboard.press(Key.f4)
keyboard.release(Key.alt)
keyboard.release(Key.f4)
def changeWallpaper(image_path):
start=time.time()
end=time.time()
while True:
for dirname,dirnames,filenames in os.walk(image_path):
for file_name in filenames:
if (end-start)//3600 > 6:
try:
Bing.wallpaper_of_the_day(image_path)
start=time.time()
except:
pass
if file_name.endswith('.png') or file_name.endswith('.jpg'):
image=os.path.join(image_path,dirname,file_name)
SPI_SETDESKTOPWALLPAPER=20
ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKTOPWALLPAPER,0,image,3)
time.sleep(30)
end=time.time()
def main():
closeTerminal()
#configure own folder
image_path = r'D:\Wallpapers'
try:
os.makedirs(image_path)
except:
pass
try:
Bing.wallpaper_of_the_day(image_path)
except:
pass
changeWallpaper(image_path)
if __name__=='__main__':
main()
| 26.86 | 97 | 0.581534 | 150 | 1,343 | 5.02 | 0.426667 | 0.095618 | 0.042497 | 0.047809 | 0.13413 | 0.087649 | 0.087649 | 0.087649 | 0 | 0 | 0 | 0.016556 | 0.325391 | 1,343 | 49 | 98 | 27.408163 | 0.81457 | 0.014892 | 0 | 0.348837 | 0 | 0 | 0.021936 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0.069767 | 0.116279 | 0 | 0.186047 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b41e7a6675758027f59252fdd90ad0a28c111058 | 976 | py | Python | flask_start/flask_start/public/email.py | kostekci/flask_start | fa279fc8907aff9868e2596f4ed9c4d9428d2f75 | [
"MIT"
] | null | null | null | flask_start/flask_start/public/email.py | kostekci/flask_start | fa279fc8907aff9868e2596f4ed9c4d9428d2f75 | [
"MIT"
] | 95 | 2021-09-13T21:23:12.000Z | 2022-03-31T21:22:32.000Z | flask_start/flask_start/public/email.py | kostekci/flask_start | fa279fc8907aff9868e2596f4ed9c4d9428d2f75 | [
"MIT"
] | null | null | null | from flask_mail import Message
from flask import render_template
from flask_start.extensions import mail
'''
from threading import Thread
def send_async_email(app, msg):
with app.app_context():
mail.send(msg)
'''
def send_email(subject, sender, recipients, text_body, html_body):
msg = Message(subject, sender=sender, recipients=recipients)
msg.body = text_body
msg.html = html_body
mail.send(msg)
#Thread(target=send_async_email, args=(app, msg)).start()
def send_password_reset_email(user):
token = user.get_reset_password_token()
send_email('Reset Your Password',
sender='admin@test.test',
recipients=[user.email],
text_body=render_template('public/reset_password_mail.txt',
user=user, token=token),
html_body=render_template('public/reset_password_mail.html',
user=user, token=token))
| 32.533333 | 75 | 0.646516 | 120 | 976 | 5.025 | 0.308333 | 0.044776 | 0.046434 | 0.079602 | 0.135987 | 0.135987 | 0.135987 | 0 | 0 | 0 | 0 | 0 | 0.254098 | 976 | 29 | 76 | 33.655172 | 0.828297 | 0.057377 | 0 | 0 | 0 | 0 | 0.118899 | 0.076345 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.294118 | 0.176471 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b42110e69fbba6f3cc1175f605afe65f09844634 | 5,211 | py | Python | validation_tests/analytical_exact/river_at_rest_varying_topo_width/numerical_varying_width.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 136 | 2015-05-07T05:47:43.000Z | 2022-02-16T03:07:40.000Z | validation_tests/analytical_exact/river_at_rest_varying_topo_width/numerical_varying_width.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 184 | 2015-05-03T09:27:54.000Z | 2021-12-20T04:22:48.000Z | validation_tests/analytical_exact/river_at_rest_varying_topo_width/numerical_varying_width.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | 70 | 2015-03-18T07:35:22.000Z | 2021-11-01T07:07:29.000Z | """Simple water flow example using ANUGA
Water driven up a linear slope and time varying boundary,
similar to a beach environment
"""
#------------------------------------------------------------------------------
# Import necessary modules
#------------------------------------------------------------------------------
import sys
import anuga
from anuga import myid, finalize, distribute
from anuga import Domain as Domain
from math import cos
from numpy import zeros, ones, array, interp, polyval, ones_like, zeros_like
from numpy import where, logical_and
from time import localtime, strftime, gmtime
from scipy.interpolate import interp1d
from anuga.geometry.polygon import inside_polygon, is_inside_triangle
#from balanced_dev import *
#-------------------------------------------------------------------------------
# Copy scripts to time stamped output directory and capture screen
# output to file
#-------------------------------------------------------------------------------
time = strftime('%Y%m%d_%H%M%S',localtime())
#output_dir = 'varying_width'+time
output_dir = '.'
output_file = 'varying_width'
#anuga.copy_code_files(output_dir,__file__)
#start_screen_catcher(output_dir+'_')
args = anuga.get_args()
alg = args.alg
verbose = args.verbose
#------------------------------------------------------------------------------
# Setup domain
#------------------------------------------------------------------------------
dx = 1.
dy = dx
L = 1500.
W = 60.
#===============================================================================
# Create sequential domain
#===============================================================================
if myid == 0:
# structured mesh
points, vertices, boundary = anuga.rectangular_cross(int(L/dx), int(W/dy), L, W, (0.,-W/2.))
#domain = anuga.Domain(points, vertices, boundary)
domain = Domain(points, vertices, boundary)
domain.set_name(output_file)
domain.set_datadir(output_dir)
#------------------------------------------------------------------------------
# Setup Algorithm, either using command line arguments
# or override manually yourself
#------------------------------------------------------------------------------
domain.set_flow_algorithm(alg)
#------------------------------------------------------------------------------
# Setup initial conditions
#------------------------------------------------------------------------------
domain.set_quantity('friction', 0.0)
domain.set_quantity('stage', 12.0)
XX = array([0.,50.,100.,150.,250.,300.,350.,400.,425.,435.,450.,470.,475.,500.,
505.,530.,550.,565.,575.,600.,650.,700.,750.,800.,820.,900.,950.,
1000.,1500.])
ZZ = array([0.,0.,2.5,5.,5.,3.,5.,5.,7.5,8.,9.,9.,9.,9.1,9.,9.,6.,5.5,5.5,5.,
4.,3.,3.,2.3,2.,1.2,0.4,0.,0.])
WW = array([40.,40.,30.,30.,30.,30.,25.,25.,30.,35.,35.,40.,40.,40.,45.,45.,50.,
45.,40.,40.,30.,40.,40.,5.,40.,35.,25.,40.,40.])/2.
depth = interp1d(XX, ZZ)
width = interp1d(XX, WW)
def bed_elevation(x,y):
z = 25.0*ones_like(x)
wid = width(x)
dep = depth(x)
z = where( logical_and(y < wid, y>-wid), dep, z)
return z
domain.set_quantity('elevation', bed_elevation)
else:
domain = None
#===========================================================================
# Create Parallel domain
#===========================================================================
domain = distribute(domain)
#-----------------------------------------------------------------------------
# Setup boundary conditions
#------------------------------------------------------------------------------
from math import sin, pi, exp
Br = anuga.Reflective_boundary(domain) # Solid reflective wall
#Bt = anuga.Transmissive_boundary(domain) # Continue all values on boundary
#Bd = anuga.Dirichlet_boundary([1,0.,0.]) # Constant boundary values
# Associate boundary tags with boundary objects
domain.set_boundary({'left': Br, 'right': Br, 'top': Br, 'bottom': Br})
#------------------------------------------------------------------------------
# Produce a documentation of parameters
#------------------------------------------------------------------------------
if myid == 0:
parameter_file=open('parameters.tex', 'w')
parameter_file.write('\\begin{verbatim}\n')
from pprint import pprint
pprint(domain.get_algorithm_parameters(),parameter_file,indent=4)
parameter_file.write('\\end{verbatim}\n')
parameter_file.close()
#------------------------------------------------------------------------------
# Evolve system through time
#------------------------------------------------------------------------------
import time
t0 = time.time()
for t in domain.evolve(yieldstep = 0.1, finaltime = 5.0):
#print(domain.timestepping_statistics(track_speeds=True))
if myid == 0 and verbose: print(domain.timestepping_statistics())
#vis.update()
if myid == 0 and verbose: print('That took %s sec' % str(time.time()-t0))
domain.sww_merge(delete_old=True)
finalize()
| 36.440559 | 96 | 0.459797 | 538 | 5,211 | 4.351301 | 0.431227 | 0.026912 | 0.011961 | 0.023921 | 0.047843 | 0.018795 | 0 | 0 | 0 | 0 | 0 | 0.047788 | 0.136634 | 5,211 | 142 | 97 | 36.697183 | 0.472549 | 0.485895 | 0 | 0.03125 | 0 | 0 | 0.051048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015625 | false | 0 | 0.203125 | 0 | 0.234375 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b426f99a8bac6c3327cab3da97ce79ef51269da3 | 1,068 | py | Python | commands/calc.py | periodicaidan/dalton-cli | 6a83e1a2675e335bf807c43c4201d78e5b389837 | [
"MIT"
] | 2 | 2018-12-21T19:09:49.000Z | 2018-12-22T10:41:36.000Z | commands/calc.py | periodicaidan/dalton-cli | 6a83e1a2675e335bf807c43c4201d78e5b389837 | [
"MIT"
] | null | null | null | commands/calc.py | periodicaidan/dalton-cli | 6a83e1a2675e335bf807c43c4201d78e5b389837 | [
"MIT"
] | null | null | null | """
File: commands/calc.py
Purpose: Performs calculations in response to user input, and outputs the result
"""
from sys import argv
import click
from calculator import *
from models import History
from models.Config import Config
from help_menus import calc_help
@click.group("calc", invoke_without_command=True)
@click.option("-M", "--mass-spec",
is_flag=True, default=False,
help="Get a theoretical mass spectrum of a molecule")
@click.option("-i", "--histogram",
is_flag=True, default=False,
help="Use with -M/--mass-spec to display the mass spec as a histogram")
@click.argument("formula", required=False)
def calc(mass_spec, histogram, formula):
config = Config.setup() # todo: Pass as context
if not any(locals().items()) or len(argv) == 2:
calc_help()
else:
if mass_spec:
click.echo(get_mass_spec(formula, histogram))
else:
mass = History.get(formula)["mass"] or get_mass(formula)
click.echo("%.3f %s" % (mass, config.units))
| 31.411765 | 85 | 0.652622 | 146 | 1,068 | 4.691781 | 0.5 | 0.070073 | 0.026277 | 0.049635 | 0.075912 | 0.075912 | 0 | 0 | 0 | 0 | 0 | 0.002413 | 0.223783 | 1,068 | 33 | 86 | 32.363636 | 0.823884 | 0.117978 | 0 | 0.166667 | 0 | 0 | 0.167024 | 0 | 0 | 0 | 0 | 0.030303 | 0 | 1 | 0.041667 | false | 0 | 0.25 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4378b3e91302a7b53287f43ef0ed313d4ff8c2f | 1,992 | py | Python | tests/test_pythonpath.py | browniebroke/pytest-srcpaths | c0bf4a9b521c8f7af029f9923b344936cf425bf1 | [
"MIT"
] | 26 | 2021-02-18T20:49:41.000Z | 2022-02-08T21:06:20.000Z | tests/test_pythonpath.py | browniebroke/pytest-srcpaths | c0bf4a9b521c8f7af029f9923b344936cf425bf1 | [
"MIT"
] | null | null | null | tests/test_pythonpath.py | browniebroke/pytest-srcpaths | c0bf4a9b521c8f7af029f9923b344936cf425bf1 | [
"MIT"
] | 2 | 2021-04-04T01:45:37.000Z | 2022-02-07T11:28:51.000Z | import sys
from typing import Generator
from typing import List
from typing import Optional
import pytest
from _pytest.pytester import Pytester
def test_one_dir_pythonpath(pytester: Pytester, file_structure) -> None:
pytester.makefile(".ini", pytest="[pytest]\npythonpath=sub\n")
result = pytester.runpytest("test_foo.py")
assert result.ret == 0
result.assert_outcomes(passed=1)
def test_two_dirs_pythonpath(pytester: Pytester, file_structure) -> None:
pytester.makefile(".ini", pytest="[pytest]\npythonpath=sub sub2\n")
result = pytester.runpytest("test_foo.py", "test_bar.py")
assert result.ret == 0
result.assert_outcomes(passed=2)
def test_unconfigure_unadded_dir_pythonpath(pytester: Pytester) -> None:
pytester.makeconftest(
"""
def pytest_configure(config):
config.addinivalue_line("pythonpath", "sub")
"""
)
pytester.makepyfile(
"""
import sys
def test_something():
pass
"""
)
result = pytester.runpytest()
result.assert_outcomes(passed=1)
def test_clean_up_pythonpath(pytester: Pytester) -> None:
"""Test that the srcpaths plugin cleans up after itself."""
pytester.makefile(".ini", pytest="[pytest]\npythonpath=I_SHALL_BE_REMOVED\n")
pytester.makepyfile(test_foo="""def test_foo(): pass""")
before: Optional[List[str]] = None
after: Optional[List[str]] = None
class Plugin:
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_unconfigure(self) -> Generator[None, None, None]:
nonlocal before, after
before = sys.path.copy()
yield
after = sys.path.copy()
result = pytester.runpytest_inprocess(plugins=[Plugin()])
assert result.ret == 0
assert before is not None
assert after is not None
assert any("I_SHALL_BE_REMOVED" in entry for entry in before)
assert not any("I_SHALL_BE_REMOVED" in entry for entry in after)
| 30.181818 | 81 | 0.676205 | 246 | 1,992 | 5.321138 | 0.325203 | 0.032086 | 0.07945 | 0.057296 | 0.36822 | 0.36822 | 0.336134 | 0.255157 | 0.255157 | 0.18793 | 0 | 0.004456 | 0.211345 | 1,992 | 65 | 82 | 30.646154 | 0.828771 | 0.026606 | 0 | 0.121951 | 0 | 0 | 0.114171 | 0.052209 | 0 | 0 | 0 | 0 | 0.243902 | 1 | 0.121951 | false | 0.097561 | 0.146341 | 0 | 0.292683 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b442fb148ab72708b2f20e85644d227c7977348c | 453 | py | Python | ejercicio 14.py | Davidpadilla1234/taller_estructura-secuencial | 3a65931ad75fd4902f406c6c872053169dad1a0b | [
"MIT"
] | null | null | null | ejercicio 14.py | Davidpadilla1234/taller_estructura-secuencial | 3a65931ad75fd4902f406c6c872053169dad1a0b | [
"MIT"
] | null | null | null | ejercicio 14.py | Davidpadilla1234/taller_estructura-secuencial | 3a65931ad75fd4902f406c6c872053169dad1a0b | [
"MIT"
] | null | null | null | """
Entradas:
lectura actual--->float--->lect2
lectura anterior--->float--->lect1
valor kw--->float--->valorkw
Salidas:
consumo--->float--->consumo
total factura-->flotante--->total
"""
lect2 = float ( entrada ( "Digite lectura real:" ))
lect1 = float ( entrada ( "Digite lectura anterior:" ))
valorkw = float ( input ( "Valor del kilowatio: " ))
consumo = ( lect2 - lect1 )
total = ( consumo * valorkw )
print ( "El valor a pagar es: " + str ( total )) | 30.2 | 55 | 0.653422 | 53 | 453 | 5.584906 | 0.509434 | 0.101351 | 0.121622 | 0.168919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015666 | 0.154525 | 453 | 15 | 56 | 30.2 | 0.75718 | 0.390728 | 0 | 0 | 0 | 0 | 0.319703 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b443e69cd16f1827fe9ba10cb1499425321f1ac2 | 1,059 | py | Python | manage.py | xinbingliang/dockertest | aca2a508658681a5e6b1beab714059bf1b43d9ed | [
"MIT"
] | 30 | 2018-05-23T16:58:12.000Z | 2021-10-18T21:25:01.000Z | manage.py | xinbingliang/dockertest | aca2a508658681a5e6b1beab714059bf1b43d9ed | [
"MIT"
] | 2 | 2019-12-01T13:32:50.000Z | 2019-12-01T13:32:53.000Z | manage.py | xinbingliang/dockertest | aca2a508658681a5e6b1beab714059bf1b43d9ed | [
"MIT"
] | 136 | 2018-02-04T14:13:33.000Z | 2022-03-09T08:26:07.000Z | # manage.py
import unittest
from flask_script import Manager
from flask_migrate import Migrate, MigrateCommand
from skeleton.server import app, db
from skeleton.server.models import User
migrate = Migrate(app, db)
manager = Manager(app)
# migrations
manager.add_command('db', MigrateCommand)
@manager.command
def test():
"""Runs the unit tests without coverage."""
tests = unittest.TestLoader().discover('tests', pattern='test*.py')
result = unittest.TextTestRunner(verbosity=2).run(tests)
if result.wasSuccessful():
return 0
else:
return 1
@manager.command
def create_db():
"""Creates the db tables."""
db.create_all()
@manager.command
def drop_db():
"""Drops the db tables."""
db.drop_all()
@manager.command
def create_admin():
"""Creates the admin user."""
db.session.add(User(email='admin@cisco.com', password='admin', admin=True))
db.session.commit()
@manager.command
def create_data():
"""Creates sample data."""
pass
if __name__ == '__main__':
manager.run()
| 18.578947 | 79 | 0.685552 | 135 | 1,059 | 5.251852 | 0.444444 | 0.098731 | 0.119887 | 0.09732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003448 | 0.17847 | 1,059 | 56 | 80 | 18.910714 | 0.811494 | 0.139754 | 0 | 0.16129 | 0 | 0 | 0.048643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16129 | false | 0.064516 | 0.16129 | 0 | 0.387097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b4484ab703976e8f170a719cc81c5d0146cb13ba | 533 | py | Python | dictionaries/lab/06_students.py | Galchov/python-fundamentals | 4939bdd1c66a7b458fd9ffd0a01d714de26724b5 | [
"MIT"
] | null | null | null | dictionaries/lab/06_students.py | Galchov/python-fundamentals | 4939bdd1c66a7b458fd9ffd0a01d714de26724b5 | [
"MIT"
] | null | null | null | dictionaries/lab/06_students.py | Galchov/python-fundamentals | 4939bdd1c66a7b458fd9ffd0a01d714de26724b5 | [
"MIT"
] | null | null | null | data = input()
courses = {}
while ":" in data:
student_name, id, course_name = data.split(":")
if course_name not in courses:
courses[course_name] = {}
courses[course_name][id] = student_name
data = input()
searched_course = data
searched_course_name_as_list = searched_course.split("_")
searched_course = " ".join(searched_course_name_as_list)
for course_name in courses:
if course_name == searched_course:
for id, name in courses[course_name].items():
print(f"{name} - {id}")
| 23.173913 | 57 | 0.669794 | 71 | 533 | 4.71831 | 0.28169 | 0.268657 | 0.152239 | 0.119403 | 0.143284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206379 | 533 | 22 | 58 | 24.227273 | 0.791962 | 0 | 0 | 0.133333 | 0 | 0 | 0.031895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b44954b2c2b3e9462c5ae4cfc721ce64071a8588 | 1,184 | py | Python | 04.Encapsulation/Exe/pizza_maker/project/main.py | nmoskova/Python-OOP | 07327bcb93eee3a7db5d7c0bbdd1b54eb9e8b864 | [
"MIT"
] | null | null | null | 04.Encapsulation/Exe/pizza_maker/project/main.py | nmoskova/Python-OOP | 07327bcb93eee3a7db5d7c0bbdd1b54eb9e8b864 | [
"MIT"
] | null | null | null | 04.Encapsulation/Exe/pizza_maker/project/main.py | nmoskova/Python-OOP | 07327bcb93eee3a7db5d7c0bbdd1b54eb9e8b864 | [
"MIT"
] | null | null | null | from encapsulation_04.exe.pizza_maker.project.dough import Dough
from encapsulation_04.exe.pizza_maker.project.pizza import Pizza
from encapsulation_04.exe.pizza_maker.project.topping import Topping
tomato_topping = Topping("Tomato", 60)
print(tomato_topping.topping_type)
print(tomato_topping.weight)
mushrooms_topping = Topping("Mushroom", 75)
print(mushrooms_topping.topping_type)
print(mushrooms_topping.weight)
mozzarella_topping = Topping("Mozzarella", 80)
print(mozzarella_topping.topping_type)
print(mozzarella_topping.weight)
cheddar_topping = Topping("Cheddar", 150)
pepperoni_topping = Topping("Pepperoni", 120)
white_flour_dough = Dough("White Flour", "Mixing", 200)
print(white_flour_dough.flour_type)
print(white_flour_dough.weight)
print(white_flour_dough.baking_technique)
whole_wheat_dough = Dough("Whole Wheat Flour", "Mixing", 200)
print(whole_wheat_dough.weight)
print(whole_wheat_dough.flour_type)
print(whole_wheat_dough.baking_technique)
p = Pizza("Margherita", whole_wheat_dough, 2)
p.add_topping(tomato_topping)
print(p.calculate_total_weight())
p.add_topping(mozzarella_topping)
print(p.calculate_total_weight())
p.add_topping(mozzarella_topping)
| 29.6 | 68 | 0.831081 | 165 | 1,184 | 5.648485 | 0.230303 | 0.120172 | 0.080472 | 0.070815 | 0.248927 | 0.248927 | 0.248927 | 0.123391 | 0.123391 | 0.123391 | 0 | 0.022604 | 0.065878 | 1,184 | 39 | 69 | 30.358974 | 0.820072 | 0 | 0 | 0.142857 | 0 | 0 | 0.076014 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.107143 | 0 | 0.107143 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
b4561bfc43f0bcb4bcb4c7719b19ceba05dfa31d | 853 | py | Python | onnx/backend/test/case/node/constant.py | stillmatic/onnx | 8d5eb62d5299f6dcb6ac787f0ea8e6cf5b8331a7 | [
"Apache-2.0"
] | null | null | null | onnx/backend/test/case/node/constant.py | stillmatic/onnx | 8d5eb62d5299f6dcb6ac787f0ea8e6cf5b8331a7 | [
"Apache-2.0"
] | null | null | null | onnx/backend/test/case/node/constant.py | stillmatic/onnx | 8d5eb62d5299f6dcb6ac787f0ea8e6cf5b8331a7 | [
"Apache-2.0"
] | null | null | null | # SPDX-License-Identifier: Apache-2.0
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import onnx
from ..base import Base
from . import expect
class Constant(Base):
@staticmethod
def export(): # type: () -> None
values = np.random.randn(5, 5).astype(np.float32)
node = onnx.helper.make_node(
'Constant',
inputs=[],
outputs=['values'],
value=onnx.helper.make_tensor(
name='const_tensor',
data_type=onnx.TensorProto.FLOAT,
dims=values.shape,
vals=values.flatten().astype(float),
),
)
expect(node, inputs=[], outputs=[values],
name='test_constant')
| 25.088235 | 57 | 0.601407 | 93 | 853 | 5.258065 | 0.548387 | 0.0818 | 0.130879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009967 | 0.294256 | 853 | 33 | 58 | 25.848485 | 0.802326 | 0.060961 | 0 | 0 | 0 | 0 | 0.048872 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.32 | 0 | 0.4 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b4695e99489bd38daaa6b9010e4ec8efec4ce4a7 | 3,524 | py | Python | pyvalidator/is_strong_password.py | theteladras/py.validator | 624ace7973552c8ac9353f48acbf96ec0ecc24a9 | [
"MIT"
] | 15 | 2021-11-01T14:14:56.000Z | 2022-03-17T11:52:29.000Z | pyvalidator/is_strong_password.py | theteladras/py.validator | 624ace7973552c8ac9353f48acbf96ec0ecc24a9 | [
"MIT"
] | 1 | 2022-03-16T13:39:16.000Z | 2022-03-17T09:16:00.000Z | pyvalidator/is_strong_password.py | theteladras/py.validator | 624ace7973552c8ac9353f48acbf96ec0ecc24a9 | [
"MIT"
] | null | null | null | from typing import TypedDict
from .utils.Classes.String import String
from .utils.assert_string import assert_string
from .utils.merge import merge
class _IsStrongPasswordOptions(TypedDict):
min_length: int
min_uppercase: int
min_lowercase: int
min_numbers: int
min_symbols: int
return_score: bool
points_per_unique: int
points_per_repeat: float
points_for_containing_upper: int
points_for_containing_lower: int
points_for_containing_number: int
points_for_containing_symbol: int
class _Analysis(TypedDict):
length: int
unique_chars: int
uppercase_count: int
lowercase_count: int
number_count: int
symbol_count: int
default_options: _IsStrongPasswordOptions = {
"min_length": 8,
"min_uppercase": 1,
"min_lowercase": 1,
"min_numbers": 1,
"min_symbols": 1,
"return_score": False,
"points_per_unique": 1,
"points_per_repeat": 0.5,
"points_for_containing_lower": 10,
"points_for_containing_upper": 10,
"points_for_containing_number": 10,
"points_for_containing_symbol": 10,
}
def count_chars(pw: String):
result = {}
for char in pw:
if char in result:
result[char] += result[char] + 1
else:
result[char] = 1
return result
def analyze_password(pw: String) -> _Analysis:
upper_case_regex = r"^[A-Z]$"
lower_case_regex = r"^[a-z]$"
number_regex = r"^[0-9]$"
symbol_regex = r"^[-#!$@%^&*()_+|~=`{}\[\]:\";'<>?,./ ]$"
char_map = count_chars(pw)
analysis: _Analysis = {
"length": pw.length,
"unique_chars": len([*char_map]),
"uppercase_count": 0,
"lowercase_count": 0,
"number_count": 0,
"symbol_count": 0,
}
for char in [*char_map]:
char = String(char)
if char.match(upper_case_regex):
analysis["uppercase_count"] += char_map[char]
elif char.match(lower_case_regex):
analysis["lowercase_count"] += char_map[char]
elif char.match(number_regex):
analysis["number_count"] += char_map[char]
elif char.match(symbol_regex):
analysis["symbol_count"] += char_map[char]
return analysis
def score_password(analysis: _Analysis, options: _IsStrongPasswordOptions):
points = 0
points += analysis["unique_chars"] * options["points_per_unique"]
points += (analysis["length"] - analysis["unique_chars"]) * options["points_per_unique"]
if analysis["uppercase_count"] > 0:
points += options["points_for_containing_upper"]
if analysis["lowercase_count"] > 0:
points += options["points_for_containing_lower"]
if analysis["number_count"] > 0:
points += options["points_for_containing_number"]
if analysis["symbol_count"] > 0:
points += options["points_for_containing_symbol"]
return points
def is_strong_password(input: str, options: _IsStrongPasswordOptions = {}) -> bool:
input = assert_string(input)
options = merge(options, default_options)
analysis = analyze_password(input)
if options["return_score"]:
return score_password(analysis, options)
return (
analysis["length"] >= options["min_length"] and
analysis["uppercase_count"] >= options["min_uppercase"] and
analysis["lowercase_count"] >= options["min_lowercase"] and
analysis["number_count"] >= options["min_numbers"] and
analysis["symbol_count"] >= options["min_symbols"]
)
| 29.864407 | 92 | 0.652667 | 413 | 3,524 | 5.251816 | 0.162228 | 0.049793 | 0.105118 | 0.029507 | 0.159059 | 0.147994 | 0.147994 | 0 | 0 | 0 | 0 | 0.010619 | 0.225028 | 3,524 | 117 | 93 | 30.119658 | 0.783596 | 0 | 0 | 0 | 0 | 0 | 0.211691 | 0.070091 | 0 | 0 | 0 | 0 | 0.020619 | 1 | 0.041237 | false | 0.072165 | 0.041237 | 0 | 0.340206 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b47e5c3d3423860e078e6b322a1719db193870cb | 3,107 | py | Python | pkg/tests/helpers_test.py | hborawski/rules_pkg | 8d542763a3959db79175404758f46c7f3f385fa5 | [
"Apache-2.0"
] | null | null | null | pkg/tests/helpers_test.py | hborawski/rules_pkg | 8d542763a3959db79175404758f46c7f3f385fa5 | [
"Apache-2.0"
] | null | null | null | pkg/tests/helpers_test.py | hborawski/rules_pkg | 8d542763a3959db79175404758f46c7f3f385fa5 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import tempfile
import unittest
from private import helpers
class GetFlagValueTestCase(unittest.TestCase):
def testNonStripped(self):
self.assertEqual(helpers.GetFlagValue('value ', strip=False), 'value ')
def testStripped(self):
self.assertEqual(helpers.GetFlagValue('value ', strip=True), 'value')
def testNonStripped_fromFile(self):
with tempfile.TemporaryDirectory() as temp_d:
argfile_path = os.path.join(temp_d, 'argfile')
with open(argfile_path, 'wb') as f:
f.write(b'value ')
self.assertEqual(
helpers.GetFlagValue('@'+argfile_path, strip=False), 'value ')
def testStripped_fromFile(self):
with tempfile.TemporaryDirectory() as temp_d:
argfile_path = os.path.join(temp_d, 'argfile')
with open(argfile_path, 'wb') as f:
f.write(b'value ')
self.assertEqual(
helpers.GetFlagValue('@'+argfile_path, strip=True), 'value')
class SplitNameValuePairAtSeparatorTestCase(unittest.TestCase):
def testNoSep(self):
key, val = helpers.SplitNameValuePairAtSeparator('abc', '=')
self.assertEqual(key, 'abc')
self.assertEqual(val, '')
def testNoSepWithEscape(self):
key, val = helpers.SplitNameValuePairAtSeparator('a\\=bc', '=')
self.assertEqual(key, 'a=bc')
self.assertEqual(val, '')
def testNoSepWithDanglingEscape(self):
key, val = helpers.SplitNameValuePairAtSeparator('abc\\', '=')
self.assertEqual(key, 'abc')
self.assertEqual(val, '')
def testHappyCase(self):
key, val = helpers.SplitNameValuePairAtSeparator('abc=xyz', '=')
self.assertEqual(key, 'abc')
self.assertEqual(val, 'xyz')
def testHappyCaseWithEscapes(self):
key, val = helpers.SplitNameValuePairAtSeparator('a\\=\\=b\\=c=xyz', '=')
self.assertEqual(key, 'a==b=c')
self.assertEqual(val, 'xyz')
def testStopsAtFirstSep(self):
key, val = helpers.SplitNameValuePairAtSeparator('a=b=c', '=')
self.assertEqual(key, 'a')
self.assertEqual(val, 'b=c')
def testDoesntUnescapeVal(self):
key, val = helpers.SplitNameValuePairAtSeparator('abc=x\\=yz\\', '=')
self.assertEqual(key, 'abc')
# the val doesn't get unescaped at all
self.assertEqual(val, 'x\\=yz\\')
def testUnescapesNonsepCharsToo(self):
key, val = helpers.SplitNameValuePairAtSeparator('na\\xffme=value', '=')
# this behaviour is surprising
self.assertEqual(key, 'naxffme')
self.assertEqual(val, 'value')
if __name__ == '__main__':
unittest.main()
| 33.408602 | 77 | 0.700032 | 368 | 3,107 | 5.855978 | 0.347826 | 0.139211 | 0.037123 | 0.063109 | 0.482599 | 0.415777 | 0.348492 | 0.285847 | 0.240371 | 0.240371 | 0 | 0.003086 | 0.165755 | 3,107 | 92 | 78 | 33.771739 | 0.828318 | 0.206308 | 0 | 0.327586 | 0 | 0 | 0.082857 | 0 | 0 | 0 | 0 | 0 | 0.344828 | 1 | 0.206897 | false | 0 | 0.068966 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b48720b38e6ef7c7ce6bd71cd8a1fc79b8ad2a3a | 3,263 | py | Python | scripts/sha3.py | cidox479/ecc | da4091ff675d0fc757dc7d19bcdd4474a1388011 | [
"BSD-2-Clause"
] | null | null | null | scripts/sha3.py | cidox479/ecc | da4091ff675d0fc757dc7d19bcdd4474a1388011 | [
"BSD-2-Clause"
] | null | null | null | scripts/sha3.py | cidox479/ecc | da4091ff675d0fc757dc7d19bcdd4474a1388011 | [
"BSD-2-Clause"
] | 1 | 2020-09-28T03:06:38.000Z | 2020-09-28T03:06:38.000Z | #/*
# * Copyright (C) 2017 - This file is part of libecc project
# *
# * Authors:
# * Ryad BENADJILA <ryadbenadjila@gmail.com>
# * Arnaud EBALARD <arnaud.ebalard@ssi.gouv.fr>
# * Jean-Pierre FLORI <jean-pierre.flori@ssi.gouv.fr>
# *
# * Contributors:
# * Nicolas VIVET <nicolas.vivet@ssi.gouv.fr>
# * Karim KHALFALLAH <karim.khalfallah@ssi.gouv.fr>
# *
# * This software is licensed under a dual BSD and GPL v2 license.
# * See LICENSE file at the root folder of the project.
# */
import struct
keccak_rc = [
0x0000000000000001, 0x0000000000008082, 0x800000000000808A, 0x8000000080008000,
0x000000000000808B, 0x0000000080000001, 0x8000000080008081, 0x8000000000008009,
0x000000000000008A, 0x0000000000000088, 0x0000000080008009, 0x000000008000000A,
0x000000008000808B, 0x800000000000008B, 0x8000000000008089, 0x8000000000008003,
0x8000000000008002, 0x8000000000000080, 0x000000000000800A, 0x800000008000000A,
0x8000000080008081, 0x8000000000008080, 0x0000000080000001, 0x8000000080008008
]
keccak_rot = [
[ 0, 36, 3, 41, 18 ],
[ 1, 44, 10, 45, 2 ],
[ 62, 6, 43, 15, 61 ],
[ 28, 55, 25, 21, 56 ],
[ 27, 20, 39, 8, 14 ],
]
# Keccak function
def keccak_rotl(x, l):
return (((x << l) ^ (x >> (64 - l))) & (2**64-1))
def keccakround(bytestate, rc):
# Import little endian state
state = [0] * 25
for i in range(0, 25):
(state[i],) = struct.unpack('<Q', ''.join(bytestate[(8*i):(8*i)+8]))
# Proceed with the KECCAK core
bcd = [0] * 25
# Theta
for i in range(0, 5):
bcd[i] = state[i] ^ state[i + (5*1)] ^ state[i + (5*2)] ^ state[i + (5*3)] ^ state[i + (5*4)]
for i in range(0, 5):
tmp = bcd[(i+4)%5] ^ keccak_rotl(bcd[(i+1)%5], 1)
for j in range(0, 5):
state[i + (5 * j)] = state[i + (5 * j)] ^ tmp
# Rho and Pi
for i in range(0, 5):
for j in range(0, 5):
bcd[j + (5*(((2*i)+(3*j)) % 5))] = keccak_rotl(state[i + (5*j)], keccak_rot[i][j])
# Chi
for i in range(0, 5):
for j in range(0, 5):
state[i + (5*j)] = bcd[i + (5*j)] ^ (~bcd[((i+1)%5) + (5*j)] & bcd[((i+2)%5) + (5*j)])
# Iota
state[0] = state[0] ^ keccak_rc[rc]
# Pack the output state
output = [0] * (25 * 8)
for i in range(0, 25):
output[(8*i):(8*i)+1] = struct.pack('<Q', state[i])
return output
def keccakf(bytestate):
for rnd in range(0, 24):
bytestate = keccakround(bytestate, rnd)
return bytestate
# SHA-3 context class
class Sha3_ctx(object):
def __init__(self, digest_size):
self.digest_size = digest_size / 8
self.block_size = (25*8) - (2 * (digest_size / 8))
self.idx = 0
self.state = [chr(0)] * (25 * 8)
def digest_size(self):
return self.digest_size
def block_size(self):
return self.block_size
def update(self, message):
for i in range(0, len(message)):
self.state[self.idx] = chr(ord(self.state[self.idx]) ^ ord(message[i]))
self.idx = self.idx + 1
if (self.idx == self.block_size):
self.state = keccakf(self.state)
self.idx = 0
def digest(self):
self.state[self.idx] = chr(ord(self.state[self.idx]) ^ 0x06)
self.state[self.block_size - 1] = chr(ord(self.state[self.block_size - 1]) ^ 0x80)
self.state = keccakf(self.state)
return ''.join(self.state[:self.digest_size])
| 32.63 | 97 | 0.62274 | 490 | 3,263 | 4.095918 | 0.312245 | 0.053812 | 0.043847 | 0.038366 | 0.169905 | 0.128052 | 0.078226 | 0.078226 | 0.078226 | 0.078226 | 0 | 0.211013 | 0.204107 | 3,263 | 99 | 98 | 32.959596 | 0.561802 | 0.190316 | 0 | 0.19697 | 0 | 0 | 0.001528 | 0 | 0 | 0 | 0.168067 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.015152 | 0.045455 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
81eb4b0e294989b02c9358c7a2349765725c6844 | 970 | py | Python | app/mod_check/MySQL.py | RITC3/Hermes | 7df5cf1cbeaca949918ace9278b2d5c1138d4eac | [
"MIT"
] | 2 | 2018-03-06T03:39:00.000Z | 2018-03-06T04:31:39.000Z | app/mod_check/MySQL.py | RITC3/Hermes | 7df5cf1cbeaca949918ace9278b2d5c1138d4eac | [
"MIT"
] | 15 | 2018-01-01T20:55:22.000Z | 2018-06-09T21:37:39.000Z | app/mod_check/MySQL.py | RITC3/Hermes | 7df5cf1cbeaca949918ace9278b2d5c1138d4eac | [
"MIT"
] | null | null | null | import pymysql.cursors
from ..mod_check import app
@app.task
def check(host, port, username, password, db):
result = None
connection = None
try:
connection = pymysql.connect(host=host,
port=port,
user=username,
password=password,
db=db,
charset='utf8mb4',
autocommit=True,
cursorclass=pymysql.cursors.DictCursor)
with connection.cursor() as cursor:
cursor.execute('SELECT @@version AS version')
res = cursor.fetchone()
if isinstance(res, dict):
result = res.get('version', None)
except pymysql.Error:
result = False
finally:
if connection is not None:
connection.close()
return result
| 29.393939 | 76 | 0.464948 | 81 | 970 | 5.555556 | 0.567901 | 0.062222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003824 | 0.460825 | 970 | 32 | 77 | 30.3125 | 0.856597 | 0 | 0 | 0 | 0 | 0 | 0.042268 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0.076923 | 0.076923 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.