# File: appengine/predator/analysis/test/stacktrace_test.py
# Repo: allaparthi/monorail (license: BSD-3-Clause)

# Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from analysis.analysis_testcase import AnalysisTestCase
from analysis.callstack_detectors import StartOfCallStack
from analysis.stacktrace import FunctionLine
from analysis.stacktrace import CallStack
from analysis.stacktrace import CallStackBuffer
from analysis.stacktrace import ProfilerStackFrame
from analysis.stacktrace import StackFrame
from analysis.stacktrace import Stacktrace
from analysis.stacktrace import StacktraceBuffer
from analysis.type_enums import CallStackFormatType
from analysis.type_enums import LanguageType
from libs.deps.dependency import Dependency
class ProfilerStackFrameTest(AnalysisTestCase):
def testParseFrameWithMissingFields(self):
"""Tests parsing a frame with the optional fields missing."""
frame_dict = {
'difference': 0.01,
'log_change_factor': -8.1,
'responsible': False,
# other fields are absent e.g. filename
}
deps = {'chrome': Dependency('chrome', 'https://repo', '1')}
frame, language_type = ProfilerStackFrame.Parse(frame_dict, 0, deps)
self.assertEqual(frame.index, 0)
self.assertEqual(frame.difference, 0.01)
self.assertEqual(frame.log_change_factor, -8.1)
self.assertEqual(frame.responsible, False)
self.assertIsNone(frame.dep_path)
self.assertIsNone(frame.function)
self.assertIsNone(frame.file_path)
self.assertIsNone(frame.raw_file_path)
self.assertIsNone(frame.repo_url)
self.assertIsNone(frame.function_start_line)
self.assertIsNone(frame.lines_old)
self.assertIsNone(frame.lines_new)
self.assertEqual(language_type, LanguageType.CPP)
def testParseFrame(self):
"""Tests successfully parsing a stacktrace with one frame."""
frame_dict = {
'difference': 0.01,
'log_change_factor': -8.1,
'responsible': False,
'filename': 'chrome/app/chrome_exe_main_win.cc',
'function_name': 'wWinMain',
'function_start_line': 484,
'lines': [
[
{'line': 490, 'sample_fraction': 0.7},
{'line': 511, 'sample_fraction': 0.3}
],
[
{'line': 490, 'sample_fraction': 0.9},
{'line': 511, 'sample_fraction': 0.1}
]
]
}
deps = {'chrome': Dependency('chrome', 'https://repo', '1')}
frame, language_type = ProfilerStackFrame.Parse(frame_dict, 1, deps)
self.assertEqual(frame.index, 1)
self.assertEqual(frame.difference, 0.01)
self.assertEqual(frame.log_change_factor, -8.1)
self.assertEqual(frame.responsible, False)
self.assertEqual(frame.dep_path, 'chrome')
self.assertEqual(frame.function, 'wWinMain')
self.assertEqual(frame.file_path, 'app/chrome_exe_main_win.cc')
self.assertEqual(frame.raw_file_path, 'chrome/app/chrome_exe_main_win.cc')
self.assertEqual(frame.repo_url, 'https://repo')
self.assertEqual(frame.function_start_line, 484)
expected_lines_old = (
FunctionLine(line=490, sample_fraction=0.7),
FunctionLine(line=511, sample_fraction=0.3),
)
expected_lines_new = (
FunctionLine(line=490, sample_fraction=0.9),
FunctionLine(line=511, sample_fraction=0.1),
)
self.assertEqual(frame.lines_old, expected_lines_old)
self.assertEqual(frame.lines_new, expected_lines_new)
self.assertEqual(language_type, LanguageType.CPP)
def testCrashedLineNumbersForProfilerStackFrame(self):
"""Tests the ``crashed_line_numbers`` property."""
frame = ProfilerStackFrame(
0, 0, 0, False, function_start_line=5,
lines_old=(
FunctionLine(line=10, sample_fraction=0.3),
FunctionLine(line=15, sample_fraction=0.7)
),
lines_new=(
FunctionLine(line=12, sample_fraction=0.5),
FunctionLine(line=19, sample_fraction=0.5)
))
self.assertEqual(frame.crashed_line_numbers, (5, 12, 19))
frame2 = ProfilerStackFrame(0, 0, 0, False, function_start_line=10)
self.assertEqual(frame2.crashed_line_numbers, (10,))
frame3 = ProfilerStackFrame(0, 0, 0, False)
self.assertIsNone(frame3.crashed_line_numbers)
def testBlameUrlForProfilerStackFrame(self):
"""Tests that ``ProfilerStackFrame.BlameUrl`` generates the correct url."""
frame = ProfilerStackFrame(0, 0, float('inf'), False, 'src', 'func',
'f.cc', 'src/f.cc')
self.assertEqual(frame.BlameUrl('1'), None)
frame = frame._replace(repo_url = 'https://repo_url')
self.assertEqual(frame.BlameUrl('1'), 'https://repo_url/+blame/1/f.cc')
def testFailureWhenProfilerStackFrameIndexIsNone(self):
"""Tests that a TypeError is raised when the ``index`` is ``None``."""
with self.assertRaises(TypeError):
ProfilerStackFrame(None, 0, float('inf'), False, 'src', 'func',
'f.cc', 'src/f.cc')
class CallStackTest(AnalysisTestCase):
def testCallStackBool(self):
self.assertFalse(CallStack(0, [], None, None))
frame = StackFrame(0, 'src', 'func', 'f.cc', 'src/f.cc', [])
self.assertTrue(CallStack(0, [frame], None, None))
def testStackFrameToString(self):
self.assertEqual(
StackFrame(0, 'src', 'func', 'f.cc', 'src/f.cc', []).ToString(),
'#0 0xXXX in func src/f.cc')
self.assertEqual(
StackFrame(0, 'src', 'func', 'f.cc', 'src/f.cc', [1]).ToString(),
'#0 0xXXX in func src/f.cc:1')
self.assertEqual(
StackFrame(0, 'src', 'func', 'f.cc', 'src/f.cc', [1, 2]).ToString(),
'#0 0xXXX in func src/f.cc:1:1')
def testBlameUrlForStackFrame(self):
frame = StackFrame(0, 'src', 'func', 'f.cc', 'src/f.cc', [])
self.assertEqual(frame.BlameUrl('1'), None)
frame = frame._replace(repo_url = 'https://repo_url')
self.assertEqual(frame.BlameUrl('1'), 'https://repo_url/+blame/1/f.cc')
frame = frame._replace(crashed_line_numbers = [9, 10])
self.assertEqual(frame.BlameUrl('1'), 'https://repo_url/+blame/1/f.cc#9')
def testCallStackConstructorIsLanguageJavaIfFormatJava(self):
self.assertEqual(
CallStack(0, format_type=CallStackFormatType.JAVA).language_type,
LanguageType.JAVA)
def testParseStackFrameForJavaCallstackFormat(self):
language_type = None
format_type = CallStackFormatType.JAVA
self.assertIsNone(
StackFrame.Parse(language_type, format_type, 'dummy line', {}))
deps = {'org': Dependency('org', 'https://repo', '1')}
frame = StackFrame.Parse(language_type, format_type,
' at org.a.b(a.java:609)', deps)
self._VerifyTwoStackFramesEqual(
frame,
StackFrame(0, 'org', 'org.a.b', 'a.java', 'org/a.java', [609]))
def testParseStackFrameForSyzyasanCallstackFormat(self):
language_type = None
format_type = CallStackFormatType.SYZYASAN
self.assertIsNone(
StackFrame.Parse(language_type, format_type, 'dummy line', {}))
deps = {'src/content': Dependency('src/content', 'https://repo', '1')}
frame = StackFrame.Parse(language_type, format_type,
'c::p::n [src/content/e.cc @ 165]', deps)
self._VerifyTwoStackFramesEqual(
frame,
StackFrame(
0, 'src/content', 'c::p::n', 'e.cc', 'src/content/e.cc', [165]))
def testParseStackFrameForDefaultCallstackFormat(self):
language_type = None
format_type = CallStackFormatType.DEFAULT
self.assertIsNone(
StackFrame.Parse(language_type, format_type, 'dummy line', {}))
deps = {'tp/webrtc': Dependency('tp/webrtc', 'https://repo', '1')}
frame = StackFrame.Parse(language_type, format_type,
'#0 0x52617a in func0 tp/webrtc/a.c:38:3', deps)
self._VerifyTwoStackFramesEqual(
frame,
StackFrame(
0, 'tp/webrtc', 'func0', 'a.c', 'tp/webrtc/a.c', [38, 39, 40, 41]))
frame = StackFrame.Parse(language_type, format_type,
'#1 0x526 in func::func2::func3 tp/webrtc/a.c:3:2', deps)
self._VerifyTwoStackFramesEqual(
frame,
StackFrame(
1, 'tp/webrtc', 'func::func2::func3', 'a.c', 'tp/webrtc/a.c',
[3, 4, 5]))
def testParseStackFrameForFracasJavaStack(self):
format_type = CallStackFormatType.DEFAULT
language_type = LanguageType.JAVA
frame = StackFrame.Parse(language_type, format_type,
'#0 0xxx in android.app.func app.java:2450', {})
self._VerifyTwoStackFramesEqual(
frame,
StackFrame(
0, '', 'android.app.func', 'android/app.java',
'android/app.java', [2450]))
class CallStackBufferTest(AnalysisTestCase):
def setUp(self):
super(CallStackBufferTest, self).setUp()
self.stack_buffer = CallStackBuffer(0, frame_list=[
StackFrame(0, 'repo/path1', 'func1', 'a/c/f1.cc', 'a/b/f1.cc',
[1, 2], 'https://repo1'),
StackFrame(1, 'repo/path2', 'func2', 'a/c/f2.cc', 'a/b/f2.cc',
[11, 12, 13], 'https://repo2')])
def testCallStackBufferLen(self):
"""Tests ``len(CallStackBuffer)`` works as expected."""
self.assertEqual(len(self.stack_buffer), 2)
def testCallStackBufferBool(self):
"""Tests ``bool`` for ``CallStackBuffer`` object works as expected."""
self.assertTrue(bool(self.stack_buffer))
self.assertFalse(bool(CallStackBuffer(0, frame_list=[])))
def testCallStackBufferIter(self):
"""Tests ``iter`` for ``CallStackBuffer`` works as expected."""
for index, frame in enumerate(self.stack_buffer.frames):
self.assertEqual(index, frame.index)
def testToCallStackForNonEmptyCallStackBuffer(self):
"""Tests ``ToCallStack`` for non empty ``CallStackBuffer`` object."""
frame_list=[StackFrame(0, 'repo/path', 'func', 'a/c.cc', 'a/c.cc',
[3, 4], 'https://repo')]
stack_buffer = CallStackBuffer(0, frame_list=frame_list)
expected_callstack = CallStack(stack_buffer.priority,
tuple(frame_list),
CallStackFormatType.DEFAULT,
LanguageType.CPP)
self.assertTupleEqual(stack_buffer.ToCallStack(), expected_callstack)
def testToCallStackForEmptyCallStackBuffer(self):
"""Tests ``ToCallStack`` for empty ``CallStackBuffer`` object."""
self.assertIsNone(CallStackBuffer(0, frame_list=[]).ToCallStack())
def testFromNoneStartOfCallStack(self):
"""Tests ``FromStartOfCallStack`` with None input."""
self.assertIsNone(CallStackBuffer.FromStartOfCallStack(None))
def testFromStartOfCallStack(self):
"""Tests ``FromStartOfCallStack`` with ``StartOfCallStack`` input."""
start_of_callstack = StartOfCallStack(0, CallStackFormatType.DEFAULT,
LanguageType.CPP, {'pid': 123})
stack_buffer = CallStackBuffer.FromStartOfCallStack(start_of_callstack)
self.assertEqual(stack_buffer.priority, start_of_callstack.priority)
self.assertEqual(stack_buffer.format_type, start_of_callstack.format_type)
self.assertEqual(stack_buffer.language_type,
start_of_callstack.language_type)
self.assertDictEqual(stack_buffer.metadata, start_of_callstack.metadata)
class StacktraceTest(AnalysisTestCase):
def testStacktraceLen(self):
"""Tests ``len`` for ``Stacktrace`` object."""
frame_list1 = [
StackFrame(0, 'src', 'func', 'file0.cc', 'src/file0.cc', [32])]
frame_list2 = [
StackFrame(0, 'src', 'func2', 'file0.cc', 'src/file0.cc', [32])]
stack1 = CallStack(0, frame_list1, None, None)
stack2 = CallStack(1, frame_list2, None, None)
stacktrace = Stacktrace((stack1, stack2), stack1)
self.assertEqual(len(stacktrace), 2)
def testStacktraceBool(self):
"""Tests ``bool`` for ``Stacktrace`` object."""
self.assertFalse(bool(Stacktrace([], None)))
frame_list1 = [
StackFrame(0, 'src', 'func', 'file0.cc', 'src/file0.cc', [32])]
frame_list2 = [
StackFrame(0, 'src', 'func2', 'file0.cc', 'src/file0.cc', [32])]
stack1 = CallStack(0, frame_list1, None, None)
stack2 = CallStack(1, frame_list2, None, None)
self.assertTrue(bool(Stacktrace((stack1, stack2), stack1)))
class StacktraceBufferTest(AnalysisTestCase):
def _DummyFilter(self, stack_buffer): # pragma: no cover
return stack_buffer
def testStacktraceBufferWithoutSignature(self):
"""Tests using least priority stack as crash_stack without signature."""
frame_list1 = [
StackFrame(0, 'src', 'func', 'file0.cc', 'src/file0.cc', [32])]
frame_list2 = [
StackFrame(0, 'src', 'func2', 'file0.cc', 'src/file0.cc', [32])]
stack1 = CallStackBuffer(0, frame_list=frame_list1)
stack2 = CallStackBuffer(1, frame_list=frame_list2)
stacktrace = StacktraceBuffer([stack1, stack2]).ToStacktrace()
self._VerifyTwoCallStacksEqual(stacktrace.crash_stack, stack1.ToCallStack())
def testSettingStackWithIsSignatureStackMetaDataAsCrashStack(self):
"""Tests using stack with signature as crash_stack with signature."""
frame_list1 = [
StackFrame(0, 'src', 'func', 'file0.cc', 'src/file0.cc', [32])]
frame_list2 = [
StackFrame(0, 'src', 'signature_func2', 'f.cc', 'src/f.cc', [32])]
stack1 = CallStackBuffer(0, frame_list=frame_list1,
metadata={'is_signature_stack': True})
stack2 = CallStackBuffer(1, frame_list=frame_list2)
stacktrace = StacktraceBuffer([stack1, stack2]).ToStacktrace()
self._VerifyTwoCallStacksEqual(stacktrace.crash_stack, stack1.ToCallStack())
def testAddFilteredStackWithNoFilters(self):
"""Tests that ``AddFilteredStack`` adds the stack when there are no filters."""
frame_list = [
StackFrame(0, 'src', 'func', 'file0.cc', 'src/file0.cc', [32]),
StackFrame(0, 'src', 'func2', 'file0.cc', 'src/file0.cc', [32])]
stack_buffer = CallStackBuffer(0, frame_list=frame_list)
stacktrace_buffer = StacktraceBuffer()
stacktrace_buffer.AddFilteredStack(stack_buffer)
self.assertEqual(len(stacktrace_buffer.stacks), 1)
def testFilterInfinityPriorityStackBuffer(self):
"""Tests that ``AddFilteredStack`` returns None for inf priority stack."""
stack_buffer = CallStackBuffer(priority=float('inf'))
stacktrace_buffer = StacktraceBuffer(filters=[self._DummyFilter])
stacktrace_buffer.AddFilteredStack(stack_buffer)
self.assertEqual(len(stacktrace_buffer.stacks), 0)
def testFilterEmptyStackBuffer(self):
"""Tests that ``AddFilteredStack`` returns None for empty stack buffer."""
stack_buffer = CallStackBuffer(frame_list=[])
stacktrace_buffer = StacktraceBuffer(filters=[self._DummyFilter])
stacktrace_buffer.AddFilteredStack(stack_buffer)
self.assertEqual(len(stacktrace_buffer.stacks), 0)
def testFilterAllFrames(self):
"""Tests that ``AddFilteredStack`` filters all frames and returns None."""
frame_list = [
StackFrame(0, 'src', 'func', 'file0.cc', 'src/file0.cc', [32]),
StackFrame(0, 'src', 'func2', 'file0.cc', 'src/file0.cc', [32])]
stack_buffer = CallStackBuffer(0, frame_list=frame_list)
def _MockFilterAllFrames(stack_buffer):
stack_buffer.frames = None
return stack_buffer
stacktrace_buffer = StacktraceBuffer(filters=[_MockFilterAllFrames])
stacktrace_buffer.AddFilteredStack(stack_buffer)
self.assertEqual(len(stacktrace_buffer.stacks), 0)
def testFilterSomeFrames(self):
"""Tests that ``AddFilteredStack`` filters some frames."""
frame_list = [
StackFrame(0, 'src', 'func', 'file0.cc', 'src/file0.cc', [32]),
StackFrame(0, 'src', 'func2', 'file0.cc', 'src/file0.cc', [32])]
stack_buffer = CallStackBuffer(0, frame_list=frame_list)
def _MockKeepFirstFrame(stack):
stack.frames = stack.frames[:1]
return stack
stacktrace_buffer = StacktraceBuffer(filters=[_MockKeepFirstFrame])
stacktrace_buffer.AddFilteredStack(stack_buffer)
self._VerifyTwoCallStacksEqual(
stacktrace_buffer.stacks[0],
CallStackBuffer(stack_buffer.priority, frame_list=frame_list[:1]))
def testEmptyStacktraceBufferToStacktrace(self):
"""Tests that ``ToStacktrace`` returns None for empty stacktrace buffer."""
self.assertIsNone(StacktraceBuffer([]).ToStacktrace())

# File: setup.py
# Repo: gpaOliveira/SuperDiffer (license: MIT)

import os
from setuptools import setup,find_packages
project_name = "SuperDiffer"
__version__ = "1.0.0"
__author__ = "Gabriel Oliveira"
__author_email__ = "gabriel.pa.oliveira@gmail.com"
__author_username__ = "gpaOliveira"
__description__ = "REST Service to calculate the difference between two input strings"
#adapted from https://pythonhosted.org/an_example_pypi_project/setuptools.html
def read(fname):
    try:
        return open(os.path.join(os.path.dirname(__file__), fname)).read()
    except (IOError, OSError):
        # Missing or unreadable file: fall back to an empty string.
        return ""
setup(
author = __author__,
author_email = __author_email__,
description = __description__,
install_requires = read("requirements.txt"),
license = read("LICENSE"),
long_description = read("README.md"),
name = project_name,
packages = find_packages(),
platforms = ["any"],
test_suite = "nose2.collector.collector",
url = "https://github.com/" + __author_username__ + "/" + project_name,
version = __version__,
classifiers = [
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
]
)

# File: mtp_cashbook/apps/cashbook/urls.py
# Repo: uk-gov-mirror/ministryofjustice.money-to-prisoners-cashbook (license: MIT)

from django.conf.urls import url
from django.views.generic import RedirectView
from .views import (
NewCreditsView, ProcessingCreditsView,
ProcessedCreditsListView, ProcessedCreditsDetailView,
SearchView,
CashbookFAQView,
CashbookGetHelpView, CashbookGetHelpSuccessView,
)
urlpatterns = [
url(r'^new/$', NewCreditsView.as_view(), name='new-credits'),
url(r'^processed/$', ProcessedCreditsListView.as_view(), name='processed-credits-list'),
url(r'^processed/(?P<date>[0-9]{8})-(?P<user_id>[0-9]+)/$',
ProcessedCreditsDetailView.as_view(), name='processed-credits-detail'),
url(r'^processing/$', ProcessingCreditsView.as_view(), name='processing-credits'),
url(r'^search/$', SearchView.as_view(), name='search'),
url(r'^all/$', RedirectView.as_view(pattern_name='search', permanent=True)),
url(r'^cashbook/faq/$', CashbookFAQView.as_view(), name='cashbook_faq'),
url(r'^cashbook/feedback/$', CashbookGetHelpView.as_view(), name='cashbook_submit_ticket'),
url(r'^cashbook/feedback/success/$', CashbookGetHelpSuccessView.as_view(), name='cashbook_feedback_success'),
]

# File: Products.py
# Repo: Idimma/api-web-scrapper-to-csv (license: MIT)
# coding: utf-8
# In[ ]:
from bs4 import BeautifulSoup
import csv
# In[ ]:
import requests
import pandas as pd
from time import sleep
# In[ ]:
brand = ' '
# header =['BRANDS', 'PRODUCT TITLE', 'PRODUCT GROUP', 'PRODUCT CODE',
# 'PRICE', 'PRODUCT IMAGE','PRODUCT', 'DESCRIPTION']
# with open('products_list.csv', 'w', newline='', encoding='utf-8') as csvfile:
# writer = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
# writer.writerow(header)
# In[ ]:
df = pd.read_csv("products_array.csv", encoding = "utf-8")
# In[ ]:
def get_response(brand, page):
url = "https://www.omnical.co/en/json/productresults"
payload = "------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name=\"page\"\r\n\r\n"+str(page)+"\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name=\"q\"\r\n\r\n\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name=\"autoclass\"\r\n\r\nfalse\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name=\"postData\"\r\n\r\n\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name=\"f_brand\"\r\n\r\n"+brand+"\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW\r\nContent-Disposition: form-data; name=\"getImages\"\r\n\r\ntrue\r\n------WebKitFormBoundary7MA4YWxkTrZu0gW--"
headers = {
'content-type': "multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW",
'Cache-Control': "no-cache",
'Postman-Token': "fd232235-7b6f-486b-a63c-d8e6b8572e0b"
}
response = requests.request("POST", url, data=payload, headers=headers)
return response.json()
def get_total(resp):
    return resp['total']
def get_bs(resp):
divs = BeautifulSoup(resp, 'html.parser')
return divs
# In[ ]:
def write_to_file(view):
with open(r'products_list.csv', 'a', newline='') as f:
global brand
writer = csv.writer(f)
views = view.findAll('div', attrs={'class':'tradeproduct product-row'})
for vw in views:
product = vw.find('div', attrs={'class':'tradeproduct-title'}).text
prod = product
desc = vw.find('div', attrs={'class':'tradeproduct-generated-description-search'}).text
if len(product.split(" ")) >= 4:
product_title = product.split(" ")[3]
product_group = product.split(" ")[1]
product_code = product.split(" ")[2]
else:
product_title = product
product_group = product
product_code = product
price = vw.find('span',
attrs={'itemprop':'pricecurrency'}).text +" " + vw.find('span',
attrs={'itemprop':'price'}).text
product_detail_image = vw.find('img')['src']
writer.writerow([brand, product_title, product_group, product_code, price,
product_detail_image, prod, desc])
# In[ ]:
def start(b_rand = -1, page = -1):
global brand
for index, value in df.iterrows():
if index > b_rand:
brand = value['brand']
print('Started '+ brand)
html = get_response(brand, 1)
maxi = html['numProducts']
for x in range(1 , maxi+1):
if x > page and page != 20:
resp = get_response(brand, x)
print('Writing Page '+ str(x) + ' of ' + str(maxi))
write_to_file(get_bs(resp['productView']))
print('Done with ' + brand)
# In[ ]:
start(-1, -1) #START FROM THE VERY BEGINNING
# In[ ]:
| 32.529412 | 702 | 0.572204 | 435 | 3,871 | 5.02069 | 0.354023 | 0.014652 | 0.010989 | 0.145604 | 0.279304 | 0.258242 | 0.229853 | 0.200092 | 0.157051 | 0.065018 | 0 | 0.021532 | 0.268148 | 3,871 | 118 | 703 | 32.805085 | 0.749382 | 0.111341 | 0 | 0.033333 | 0 | 0.05 | 0.316267 | 0.177589 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0.016667 | 0.216667 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
10f504e4b3eeb9d6415e959e475aa8d56aa55d8f | 661 | py | Python | string_split.py | maneeshdisodia/pythonic_examples | f722bfbe253bbcead111ba082550bdfd1c6046d3 | [
"MIT"
] | null | null | null | string_split.py | maneeshdisodia/pythonic_examples | f722bfbe253bbcead111ba082550bdfd1c6046d3 | [
"MIT"
] | null | null | null | string_split.py | maneeshdisodia/pythonic_examples | f722bfbe253bbcead111ba082550bdfd1c6046d3 | [
"MIT"
] | null | null | null | import re
w = 'AbcDefgHijkL'
ws= ' .nalf!!213knlsc'
wd =' ca++ and mbbs '
loc ='mumbai dombivli'
r = re.findall('([A-Z])', w)
print(r)
r = re.findall('([A-Z][a-z]+)', w)
print(r)
r = re.findall('([a-zA-Z+]+)', wd)
print(r)
print(' '.join(r))
print(loc.find('mumcai'))  # -1: substring not found
join_locations = {
'ncr': ['delhi', 'faridabad', 'gurgaon', 'noida', 'ncr'],
'bangluru': ['banglore', 'bangluru'],
'pune': ['pune'],
'hydrabad': ['hydrabad']
}
def cluster_location(location):
    """Return the cluster key whose alias appears in the location string."""
    for key, aliases in join_locations.items():
        for item in aliases:
            if item in location:
                return key
    return None

print(cluster_location('my name is mumbai pune is new'))  # 'pune'

# File: pdepy/wave.py
# Repo: OliverTso/PDE (license: MIT)

"""
Finite-difference solver for wave equation:
u_yy = u_xx.
Initial and boundary conditions:
u(x, 0) = init(x), 0 <= x <= xf,
u_y(x, 0) = d_init(x), 0 <= x <= xf,
u(0, y) = bound_x0(y), 0 <= y <= yf,
u(xf, y) = bound_xf(y), 0 <= y <= yf.
"""
import numpy as np
from scipy import linalg
from pdepy import time, utils
@utils.validate_method(valid_methods=["e", "i"])
def solve(axis, conds, method="i"):
    """
    Methods
    -------
        * e: explicit
        * i: implicit

    Parameters
    ----------
    axis : array_like
        Axis 'x' and 'y'; [x, y], each element should be an array_like.
    conds : array_like
        Initial and boundary conditions; [d_init, init, bound_x0, bound_xf],
        each element should be a scalar or an array_like of size 'x.size'
        for 'init' and 'y.size' for 'bound_x'.
    method : string | optional
        Finite-difference method.

    Returns
    -------
    u : ndarray
        A 2-D ndarray; u[x, y].
    """
    u = time.set_u(*axis, *conds[1:])
    consts = _cal_constants(*axis)

    _set_first_row(u, *consts[1:], conds[0])

    if method == "e":
        _explicit(u, consts[0] ** (-1))
    elif method == "i":
        _implicit(u, consts[0] ** (-1))

    return u
def _explicit(u, alpha):
    """Explicit finite-difference scheme."""
    for j in np.arange(1, u.shape[1] - 1):
        u[1:-1, j + 1] = (
            2 * u[1:-1, j] - u[1:-1, j - 1] + alpha * (u[2:, j] - 2 * u[1:-1, j] + u[:-2, j])
        )


def _implicit(u, alpha):
    """Implicit finite-difference scheme."""
    mat = _set_mat(np.shape(u)[0] - 2, alpha)

    for j in np.arange(1, u.shape[1] - 1):
        vec = _set_vec(alpha, u[:, j - 1 : j + 2])
        u[1:-1, j + 1] = linalg.solve(mat, vec)


def _set_mat(n, alpha):
    """Assemble the system matrix for each iteration of '_implicit()'."""
    main = -2 * (np.ones(n) + alpha)
    upper = np.ones(n - 1)
    lower = np.ones(n - 1)

    return np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)


def _set_vec(alpha, u):
    """Assemble the right-hand-side vector for each iteration of '_implicit()'."""
    vec = -u[:-2, 0] - u[2:, 0] + 2 * (1 + alpha) * u[1:-1, 0] - 4 * alpha * u[1:-1, 1]
    vec[0] -= u[0, 2]
    vec[-1] -= u[-1, 2]

    return vec


def _cal_constants(x, y):
    """Compute the constants 'alpha', 'h' and 'k'."""
    h = x[-1] / (x.size - 1)
    k = y[-1] / (y.size - 1)
    alpha = k ** 2 / h ** 2

    return (alpha, h, k)


def _set_first_row(u, h, k, d_init):
    """
    Fill the first interior row of the grid. 'd_init' may be a scalar
    or a vector of the same size as 'x'.
    """
    u[1:-1, 1] = (u[:, 0] + k * d_init)[1:-1] + k ** 2 / 2 * (
        u[2:, 0] - 2 * u[1:-1, 0] + u[:-2, 0]
    ) / h ** 2
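For intuition, the update applied by `_explicit` can be reproduced outside pdepy. This is a minimal pure-Python sketch of one leapfrog step of u_yy = u_xx with fixed endpoints; the 5-point grid and alpha=0.25 are arbitrary choices for illustration:

```python
def explicit_wave_step(u_prev, u_curr, alpha):
    """One explicit (leapfrog) step of u_yy = u_xx; endpoints are kept fixed."""
    n = len(u_curr)
    u_next = list(u_curr)
    for i in range(1, n - 1):
        # Second difference in x scaled by alpha, plus the leapfrog terms in y.
        u_next[i] = (2 * u_curr[i] - u_prev[i]
                     + alpha * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
    return u_next

# A single pulse in the middle spreads to its neighbours after one step.
u0 = [0.0, 0.0, 1.0, 0.0, 0.0]
u1 = list(u0)  # zero initial velocity: first two rows equal
u2 = explicit_wave_step(u0, u1, alpha=0.25)
print(u2)  # -> [0.0, 0.25, 0.5, 0.25, 0.0]
```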
#!/usr/bin/env python3
# tools/fetch_codeforces_examples/fetch_cf_examples.py (from repo adw1n/competitive-programming, WTFPL license)
import typing
import re
import os.path
import enum
import json
import urllib
import argparse
import asyncio
import lxml.html
import requests
import lxml.etree
import aiohttp.client
HOME = os.path.expanduser("~")
USERNAME = "adwin_" # type: str
CONTEST_DIR = os.path.join(HOME, "algo_competitions") # type: str
class Example:
    def __init__(self, _input: str, output: str):
        """
        Sometimes input/output is missing a trailing newline.
        """
        self.input = _input
        self.output = output

    def __str__(self):
        return "input:\n" + self.input + "output:\n" + self.output

    @staticmethod
    def get_example_text(node: lxml.html.HtmlElement) -> str:
        s = node.text
        if s is None:
            s = ''
        for child in node:
            s += lxml.etree.tostring(child, encoding='unicode')
        return s.replace("<br/>", "\n")

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        return False

    def __ne__(self, other):
        return not self.__eq__(other)
class Problem:
    PROBLEM_LINK_PATTERN = re.compile(r".*/contest/\d+/problem/(.+?)")

    def __init__(self, problem_link: str, examples: typing.List[Example]):
        self.problem_link = problem_link
        self.problem_name = self._get_problem_name(problem_link)
        self.examples = examples

    def __str__(self):
        problem_as_str = self.problem_link + "\n"
        for example in self.examples:
            problem_as_str += str(example)
        return problem_as_str

    @staticmethod
    def _get_problem_name(problem_link: str):
        found = Problem.PROBLEM_LINK_PATTERN.match(problem_link)
        assert found
        return found.group(1)

    def write(self, contest_directory: str):
        problem_directory = os.path.join(contest_directory, self.problem_name)
        if not os.path.isdir(problem_directory):
            os.makedirs(problem_directory)
        for index, example in enumerate(self.examples, start=1):
            input_path = os.path.join(problem_directory, "in%s.txt" % index)
            output_path = os.path.join(problem_directory, "expected%s.txt" % index)
            with open(input_path, "w+") as input_file:
                input_file.write(example.input)
            with open(output_path, "w+") as output_file:
                output_file.write(example.output)
class Division(enum.IntEnum):
    ONE = 1
    TWO = 2
class Codeforces:
    CONTEST_GENERIC_URL = "http://codeforces.com/contest/%s"
    CONTEST_LINK_PATTERN = re.compile(r"^(http://)?(www\.)?codeforces\.com/contest/(\d+?)(/+.*)?$")

    class NoContestRunning(Exception):
        pass

    @staticmethod
    def get_contest_id_from_contest_link(contest_link: str) -> str:
        return Codeforces.CONTEST_LINK_PATTERN.match(contest_link).group(3)

    @staticmethod
    def _get_user_rating(handle: str) -> int:
        response = requests.get("http://codeforces.com/api/user.info?%s" % (urllib.parse.urlencode({"handles": handle})))
        assert response.status_code == 200,\
            "Could not fetch user rating using Codeforces API - response status %s" % response.status_code
        api_info = json.loads(response.content.decode("utf-8"))
        results = api_info.get("result")  # type: typing.List[typing.Dict[str,typing.Any]]
        for user_info in results:
            if not user_info.get("handle") == handle:
                continue
            else:
                return user_info.get("rating")
        raise NotImplementedError("No info about user in the response. Response: %s" % api_info)

    @staticmethod
    def _get_user_division() -> Division:
        user_rating = Codeforces._get_user_rating(USERNAME)
        if user_rating <= 1900:
            return Division.TWO
        else:
            return Division.ONE

    @staticmethod
    async def get_problem(session: aiohttp.client.ClientSession, link: str) -> Problem:
        # 'async def'/'await' replaces the deprecated '@asyncio.coroutine'/'yield from' style.
        page = await session.get(link)
        assert page.status == 200, "Page %s is not accessible - request status code %s" % (link, page.status)
        tree = lxml.html.fromstring(await page.text())
        inputs = []  # type: typing.List[str]
        outputs = []  # type: typing.List[str]
        for test_div in tree.xpath("//div[@class='sample-test']"):
            for input_div in test_div.xpath("//div[@class='input']"):
                pre = input_div.xpath("pre")[0]
                inputs.append(Example.get_example_text(pre))
            for output_div in test_div.xpath("//div[@class='output']"):
                pre = output_div.xpath("pre")[0]
                outputs.append(Example.get_example_text(pre))
        assert len(inputs) == len(outputs)
        return Problem(problem_link=link,
                       examples=[Example(inputs[index], outputs[index]) for index in range(len(inputs))])

    @staticmethod
    async def get_problems(loop: asyncio.BaseEventLoop, contest_link: str) -> typing.List[Problem]:
        contest = requests.get(contest_link)
        assert contest.status_code == 200,\
            "Could not fetch contest page: %s status code: %s" % (contest_link, contest.status_code)
        tree = lxml.html.fromstring(contest.content)
        tree.make_links_absolute(contest_link)
        problem_links = set()
        for element, attribute, link, pos in tree.iterlinks():
            if Problem.PROBLEM_LINK_PATTERN.match(link):
                problem_links.add(link)
        async with aiohttp.ClientSession(loop=loop) as session:
            tasks = [Codeforces.get_problem(session, link) for link in problem_links]
            return await asyncio.gather(*tasks)

    @staticmethod
    def get_currently_running_contest() -> str:
        CONTESTS_URL = "http://codeforces.com/contests/"
        contests_page = requests.get(CONTESTS_URL)
        assert contests_page.status_code == 200, "Could not open the contest page"
        user_division = Codeforces._get_user_division()
        print("Downloading division %s" % user_division.value)
        tree = lxml.html.fromstring(contests_page.content)
        tree.make_links_absolute(CONTESTS_URL)
        for contest in tree.xpath("//tr[@data-contestid]"):
            contest_id = contest.get("data-contestid")  # type: str
            if "Div. %s" % user_division.value not in lxml.html.tostring(contest, encoding="unicode"):
                continue
            red_enter_links = contest.xpath('.//a[@class="red-link"]')  # type: typing.List[lxml.html.HtmlElement]
            for red_enter_link in red_enter_links:
                contest_link = Codeforces._extract_contest_link(red_enter_link.get("href"))
                assert Codeforces.get_contest_id_from_contest_link(contest_link) == contest_id
                return contest_link
        else:
            raise Codeforces.NoContestRunning(
                "No contest for your division is running. "
                "Please use the --link option to specify the direct link to the contest.")

    @staticmethod
    def _extract_contest_link(contest_link: str) -> str:
        """In case sb provides http://codeforces.com/contest/779/problem/D instead of http://codeforces.com/contest/779."""
        found = Codeforces.CONTEST_LINK_PATTERN.match(contest_link)
        assert found, "Invalid contest link: %s - link did not match the pattern: %s" % \
                      (contest_link, Codeforces.CONTEST_LINK_PATTERN.pattern)
        for i in found.groups():
            if i == "http://" or i == "www." or not i:
                continue
            else:
                round_id = i
                break
        return Codeforces.CONTEST_GENERIC_URL % round_id

    @staticmethod
    def download_examples(contest_link: str = None, contest_name: str = None, contest_full_path: str = None):
        assert contest_name or contest_full_path
        if not contest_link:
            contest_link = Codeforces.get_currently_running_contest()
        else:
            contest_link = Codeforces._extract_contest_link(contest_link)
        loop = asyncio.get_event_loop()
        problems = loop.run_until_complete(Codeforces.get_problems(loop, contest_link))
        loop.close()
        assert problems
        if not contest_full_path:
            contest_full_path = os.path.join(CONTEST_DIR, contest_name)
        print("Downloading examples from: %s to %s" % (contest_link, contest_full_path))
        for problem in problems:
            problem.write(contest_full_path)
def handle_username_change(new_username: str):
    raise NotImplementedError()


def handle_contest_directory_change(new_contest_directory: str):
    raise NotImplementedError()


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--demo", action="store_true", help="examples will be written to /tmp/codeforces/")
    parser.add_argument("-n", "--contest-name", help="contest name - for example ct403 for Codeforces Round #403. Used to determine the output directory.")
    parser.add_argument("-d", "--contest-dir", help="path to contest directory to save examples in")
    parser.add_argument("-l", "--link", help="link to the contest (or any of the examples in the contest) - "
                                             "for example: http://codeforces.com/contest/779")
    parser.add_argument("--set-username",
                        help="your username - used to deduce the contest you are participating in (when not using the --link flag),"
                             " when there are two concurrent contests for div 1 and div 2")
    parser.add_argument("--set-contest-dir",
                        help="set contest directory - examples are saved in $CONTEST_DIR/$PROBLEM_NAME/in(ID).txt")
    args = parser.parse_args()
    Codeforces.download_examples(
        contest_link=args.link,
        contest_name=args.contest_name,
        contest_full_path=args.contest_dir if not args.demo else "/tmp/codeforces/"
    )


if __name__ == "__main__":
    main()
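`Example.get_example_text` relies on lxml plus a literal `<br/>` replacement. The same extraction idea can be sketched with only the standard library's `html.parser`; the sample markup below mimics Codeforces' `<pre>` sample blocks but is a made-up snippet, not fetched output:

```python
from html.parser import HTMLParser

class PreExtractor(HTMLParser):
    """Collect the text of every <pre> block, treating <br> as a newline."""
    def __init__(self):
        super().__init__()
        self.blocks = []
        self._in_pre = False
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == 'pre':
            self._in_pre = True
            self._buf = []
        elif tag == 'br' and self._in_pre:
            # <br/> fires handle_startendtag, which calls this by default.
            self._buf.append('\n')

    def handle_endtag(self, tag):
        if tag == 'pre':
            self._in_pre = False
            self.blocks.append(''.join(self._buf))

    def handle_data(self, data):
        if self._in_pre:
            self._buf.append(data)

parser = PreExtractor()
parser.feed('<div class="input"><pre>1 2<br/>3 4<br/></pre></div>')
print(parser.blocks)  # -> ['1 2\n3 4\n']
```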
# GIL/thread_factorize.py (from repo Chanakya-School-of-AI/pytorch-tutorials, MIT license)
from threading import Thread
from time import time


def factorize(number):
    for i in range(1, number + 1):
        if number % i == 0:
            yield i


class FactorizeThread(Thread):
    def __init__(self, number):
        super().__init__()
        self.number = number

    def run(self):
        self.factors = list(factorize(self.number))


numbers = [8402868, 2295738, 5938342, 7925426]

# Serial calculation
start = time()
for number in numbers:
    list(factorize(number))
end = time()
print('Took %.3f seconds for serial calculation' % (end - start))

# Threaded calculation
start = time()
threads = []
for number in numbers:
    thread = FactorizeThread(number)
    thread.start()
    threads.append(thread)

# Wait for all threads to finish
for thread in threads:
    thread.join()
end = time()
print('Took %.3f seconds for threaded calculation' % (end - start))
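The thread bookkeeping above (start, collect, join) is exactly what `concurrent.futures` automates. A sketch using `ThreadPoolExecutor` — note that, like the `FactorizeThread` version, threads still share the GIL, so CPU-bound factorization gains no speedup; the small inputs here are only for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def factor_list(number):
    # Eagerly collect all divisors of 'number'.
    return [i for i in range(1, number + 1) if number % i == 0]

# map() submits one task per input and returns results in input order;
# the context manager joins all worker threads on exit.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(factor_list, [12, 18]))
print(results)  # -> [[1, 2, 3, 4, 6, 12], [1, 2, 3, 6, 9, 18]]
```

For a true speedup on CPU-bound work, `ProcessPoolExecutor` has the same interface but sidesteps the GIL by using separate processes.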
# -*- coding: utf-8 -*-
# spinalcordtoolbox/testing/create_test_data.py (from repo valosekj/spinalcordtoolbox, MIT license)
# Collection of functions to create data for testing
import numpy as np
from datetime import datetime
import itertools
from skimage.transform import rotate
import nibabel as nib
from spinalcordtoolbox.image import Image
from spinalcordtoolbox.resampling import resample_nib
DEBUG = False # Save img_sub
def dummy_blob(size_arr=(9, 9, 9), pixdim=(1, 1, 1), coordvox=None):
    """
    Create an image with non-null voxels at the coordinates specified by coordvox.
    :param size_arr:
    :param pixdim:
    :param coordvox: If None: create a single voxel in the middle of the FOV.
                     If tuple: (x, y, z): create a single voxel at the specified coordinate.
                     If list of tuples: [(x1, y1, z1), (x2, y2, z2)]: create multiple voxels.
    :return: Image object
    """
    data = np.zeros(size_arr)
    # if not specified, the voxel coordinate is set at the middle of the volume
    if coordvox is None:
        coordvox = tuple([round(i / 2) for i in size_arr])
    if isinstance(coordvox, list):
        for icoord in coordvox:
            data[icoord] = 1
    elif isinstance(coordvox, tuple):
        data[coordvox] = 1
    else:
        raise ValueError("Wrong type for coordvox")
    # Create image with default orientation LPI
    affine = np.eye(4)
    affine[0:3, 0:3] = affine[0:3, 0:3] * pixdim
    nii = nib.nifti1.Nifti1Image(data, affine)
    img = Image(data, hdr=nii.header, dim=nii.header.get_data_shape())
    return img
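The voxel-placement logic in `dummy_blob` boils down to indexing a zero volume with coordinate tuples; a standalone NumPy sketch without the `Image`/nibabel wrapper (coordinates are arbitrary examples):

```python
import numpy as np

# Place single voxels in a zero volume, as dummy_blob() does before wrapping
# the array in an Image. Indexing with a tuple addresses one voxel.
data = np.zeros((9, 9, 9))
for coord in [(4, 4, 4), (1, 2, 3)]:
    data[coord] = 1
print(int(data.sum()))  # -> 2
```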
def dummy_centerline(size_arr=(9, 9, 9), pixdim=(1, 1, 1), subsampling=1, dilate_ctl=0, hasnan=False, zeroslice=[],
                     outlier=[], orientation='RPI', debug=False):
    """
    Create a dummy Image centerline of small size. Return the full and sub-sampled version along z. Voxel resolution
    on fully-sampled data is 1x1x1 mm (so, 2x undersampled data along z would have resolution of 1x1x2 mm).
    :param size_arr: tuple: (nx, ny, nz)
    :param pixdim: tuple: (px, py, pz)
    :param subsampling: int >=1. Subsampling factor along z. 1: no subsampling. 2: centerline defined every other z.
    :param dilate_ctl: Dilation of centerline. E.g., if dilate_ctl=1, the result will be a 3x3 square per slice.
                       If dilate_ctl=0, the result will be a single pixel per slice.
    :param hasnan: Bool: Image has non-numerical values: nan, inf. In this case, do not subsample.
    :param zeroslice: list int: zero all slices listed in this param
    :param outlier: list int: replace the current point with an outlier at the corner of the image for the slices listed
    :param orientation:
    :param debug: Bool: Write temp files
    :return:
    """
    from numpy import poly1d, polyfit
    nx, ny, nz = size_arr
    # define array based on a polynomial function, within the X-Z plane, located at y=ny/4, based on the following points:
    x = np.array([round(nx / 4.), round(nx / 2.), round(3 * nx / 4.)])
    z = np.array([0, round(nz / 2.), nz - 1])
    p = poly1d(polyfit(z, x, deg=3))
    data = np.zeros((nx, ny, nz))
    arr_ctl = np.array([p(range(nz)).astype(int),
                        [round(ny / 4.)] * len(range(nz)),
                        range(nz)], dtype=np.uint16)
    # Loop across dilation of centerline. E.g., if dilate_ctl=1, the result will be a 3x3 square per slice.
    for ixiy_ctl in itertools.product(range(-dilate_ctl, dilate_ctl + 1, 1), range(-dilate_ctl, dilate_ctl + 1, 1)):
        data[(arr_ctl[0] + ixiy_ctl[0]).tolist(),
             (arr_ctl[1] + ixiy_ctl[1]).tolist(),
             arr_ctl[2].tolist()] = 1
    # Zero specified slices
    if zeroslice:
        data[:, :, zeroslice] = 0
    # Add outlier
    if outlier:
        # First, zero the whole slice
        data[:, :, outlier] = 0
        # Then, add a point in the corner
        data[0, 0, outlier] = 1
    # Create image with default orientation LPI
    affine = np.eye(4)
    affine[0:3, 0:3] = affine[0:3, 0:3] * pixdim
    nii = nib.nifti1.Nifti1Image(data, affine)
    img = Image(data, hdr=nii.header, dim=nii.header.get_data_shape())
    # subsample data
    img_sub = img.copy()
    img_sub.data = np.zeros((nx, ny, nz))
    for iz in range(0, nz, subsampling):
        img_sub.data[..., iz] = data[..., iz]
    # Add non-numerical values at the top corner of the image
    if hasnan:
        img.data[0, 0, 0] = np.nan
        img.data[1, 0, 0] = np.inf
    # Update orientation
    img.change_orientation(orientation)
    img_sub.change_orientation(orientation)
    if debug:
        img_sub.save('tmp_dummy_seg_' + datetime.now().strftime("%Y%m%d%H%M%S%f") + '.nii.gz')
    return img, img_sub, arr_ctl
def dummy_segmentation(size_arr=(256, 256, 256), pixdim=(1, 1, 1), dtype=np.float64, orientation='LPI',
                       shape='rectangle', angle_RL=0, angle_AP=0, angle_IS=0, radius_RL=5.0, radius_AP=3.0,
                       zeroslice=[], debug=False):
    """Create a dummy Image with an ellipse or ones running from top to bottom in the 3rd dimension, and rotate the
    image to make sure that compute_csa and compute_shape properly estimate the centerline angle.
    :param size_arr: tuple: (nx, ny, nz)
    :param pixdim: tuple: (px, py, pz)
    :param dtype: Numpy dtype.
    :param orientation: Orientation of the image. Default: LPI
    :param shape: {'rectangle', 'ellipse'}
    :param angle_RL: int: angle around RL axis (in deg)
    :param angle_AP: int: angle around AP axis (in deg)
    :param angle_IS: int: angle around IS axis (in deg)
    :param radius_RL: float: 1st radius. With a, b = 50.0, 30.0 (in mm), the theoretical CSA of the ellipse is 4712.4
    :param radius_AP: float: 2nd radius
    :param zeroslice: list int: zero all slices listed in this param
    :param debug: Write temp files for debug
    :return: img: Image object
    """
    # Initialization
    padding = 15  # Padding size (isotropic) to avoid edge effect during rotation
    # Create a 3d array, with dimensions corresponding to x: RL, y: AP, z: IS
    nx, ny, nz = [int(size_arr[i] * pixdim[i]) for i in range(3)]
    data = np.zeros((nx, ny, nz))
    xx, yy = np.mgrid[:nx, :ny]
    # loop across slices and add the object
    for iz in range(nz):
        if shape == 'rectangle':  # theoretical CSA: (a*2+1)(b*2+1)
            data[:, :, iz] = ((abs(xx - nx / 2) <= radius_RL) & (abs(yy - ny / 2) <= radius_AP)) * 1
        if shape == 'ellipse':
            data[:, :, iz] = (((xx - nx / 2) / radius_RL) ** 2 + ((yy - ny / 2) / radius_AP) ** 2 <= 1) * 1
    # Pad to avoid edge effect during rotation
    data = np.pad(data, padding, 'reflect')
    # ROTATION ABOUT IS AXIS
    # rotate (in deg), and re-grid using linear interpolation
    data_rotIS = rotate(data, angle_IS, resize=False, center=None, order=1, mode='constant', cval=0, clip=False,
                        preserve_range=False)
    # ROTATION ABOUT RL AXIS
    # Swap x-z axes (to make a rotation within the y-z plane, because rotate applies the rotation to the first 2 dims)
    data_rotIS_swap = data_rotIS.swapaxes(0, 2)
    # rotate (in deg), and re-grid using linear interpolation
    data_rotIS_swap_rotRL = rotate(data_rotIS_swap, angle_RL, resize=False, center=None, order=1, mode='constant',
                                   cval=0, clip=False, preserve_range=False)
    # swap back
    data_rotIS_rotRL = data_rotIS_swap_rotRL.swapaxes(0, 2)
    # ROTATION ABOUT AP AXIS
    # Swap y-z axes (to make a rotation within the x-z plane)
    data_rotIS_rotRL_swap = data_rotIS_rotRL.swapaxes(1, 2)
    # rotate (in deg), and re-grid using linear interpolation
    data_rotIS_rotRL_swap_rotAP = rotate(data_rotIS_rotRL_swap, angle_AP, resize=False, center=None, order=1,
                                         mode='constant', cval=0, clip=False, preserve_range=False)
    # swap back
    data_rot = data_rotIS_rotRL_swap_rotAP.swapaxes(1, 2)
    # Crop image (to remove padding)
    data_rot_crop = data_rot[padding:nx + padding, padding:ny + padding, padding:nz + padding]
    # Zero specified slices
    if zeroslice:
        data_rot_crop[:, :, zeroslice] = 0
    # Create nibabel object
    xform = np.eye(4)
    for i in range(3):
        xform[i][i] = 1  # in [mm]
    nii = nib.nifti1.Nifti1Image(data_rot_crop.astype('float32'), xform)
    # resample to desired resolution
    nii_r = resample_nib(nii, new_size=pixdim, new_size_type='mm', interpolation='linear')
    # Create Image object. Default orientation is LPI.
    # For debugging, add .save() at the end of the command below
    img = Image(np.asanyarray(nii_r.dataobj), hdr=nii_r.header, dim=nii_r.header.get_data_shape())
    # Update orientation
    img.change_orientation(orientation)
    if debug:
        img.save('tmp_dummy_seg_' + datetime.now().strftime("%Y%m%d%H%M%S%f") + '.nii.gz')
    return img
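The 'ellipse' branch of `dummy_segmentation` is what fixes the theoretical CSA. A standalone NumPy check that the pixel count of such a mask approximates pi*a*b; the grid size and radii below are arbitrary test values, not the function's defaults:

```python
import numpy as np

def ellipse_mask(nx, ny, radius_x, radius_y):
    # Boolean ellipse mask centred in an nx-by-ny grid, mirroring the
    # 'ellipse' branch of dummy_segmentation().
    xx, yy = np.mgrid[:nx, :ny]
    return ((xx - nx / 2) / radius_x) ** 2 + ((yy - ny / 2) / radius_y) ** 2 <= 1

mask = ellipse_mask(100, 100, 30.0, 20.0)
area = int(mask.sum())  # pixel count, approximately pi * 30 * 20
```

Counting pixels inside the analytic boundary converges to the true area as the grid resolution grows, which is why the docstring can quote 4712.4 for a, b = 50.0, 30.0.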
| 46.178947 | 120 | 0.646114 | 1,351 | 8,774 | 4.105107 | 0.226499 | 0.019473 | 0.008655 | 0.006491 | 0.338622 | 0.301839 | 0.248828 | 0.230436 | 0.216372 | 0.20952 | 0 | 0.026112 | 0.236152 | 8,774 | 189 | 121 | 46.42328 | 0.801403 | 0.418623 | 0 | 0.166667 | 0 | 0 | 0.034744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.083333 | 0 | 0.145833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
800076f2bca31ec7949923ed290ca06ed9e6401a | 13,035 | py | Python | ped_core/keytab.py | jpfxgood/ped | f753ca27e4462c321ed28f00e1ef47fbde62990e | [
"MIT"
] | null | null | null | ped_core/keytab.py | jpfxgood/ped | f753ca27e4462c321ed28f00e1ef47fbde62990e | [
"MIT"
] | 21 | 2020-07-03T13:14:15.000Z | 2020-07-14T14:27:43.000Z | ped_core/keytab.py | jpfxgood/ped | f753ca27e4462c321ed28f00e1ef47fbde62990e | [
"MIT"
] | null | null | null | # Copyright 2009-2012 James P Goodwin ped tiny python editor
""" module that contains the symbolic names of the keys """
import curses
KEYTAB_NOKEY=chr(0)
KEYTAB_ALTA=chr(27)+'a'
KEYTAB_ALTB=chr(27)+'b'
KEYTAB_ALTC=chr(27)+'c'
KEYTAB_ALTD=chr(27)+'d'
KEYTAB_ALTE=chr(27)+'e'
KEYTAB_ALTF=chr(27)+'f'
KEYTAB_ALTG=chr(27)+'g'
KEYTAB_ALTH=chr(27)+'h'
KEYTAB_ALTI=chr(27)+'i'
KEYTAB_ALTJ=chr(27)+'j'
KEYTAB_ALTK=chr(27)+'k'
KEYTAB_ALTL=chr(27)+'l'
KEYTAB_ALTM=chr(27)+'m'
KEYTAB_ALTN=chr(27)+'n'
KEYTAB_ALTo=chr(27)+'o'
KEYTAB_ALTO=chr(27)+'O'
KEYTAB_ALTP=chr(27)+'p'
KEYTAB_ALTQ=chr(27)+'q'
KEYTAB_ALTR=chr(27)+'r'
KEYTAB_ALTS=chr(27)+'s'
KEYTAB_ALTT=chr(27)+'t'
KEYTAB_ALTU=chr(27)+'u'
KEYTAB_ALTV=chr(27)+'v'
KEYTAB_ALTW=chr(27)+'w'
KEYTAB_ALTX=chr(27)+'x'
KEYTAB_ALTY=chr(27)+'y'
KEYTAB_ALTZ=chr(27)+'z'
KEYTAB_BACKSPACE=chr(8)
KEYTAB_BACKTAB=chr(27)+'[Z'
KEYTAB_BTAB="btab"
KEYTAB_CR=chr(10)
KEYTAB_CTRLA=chr(1)
KEYTAB_CTRLB=chr(2)
KEYTAB_CTRLC=chr(3)
KEYTAB_CTRLD=chr(4)
KEYTAB_CTRLE=chr(5)
KEYTAB_CTRLF=chr(6)
KEYTAB_CTRLG=chr(7)
KEYTAB_CTRLH=chr(8)
KEYTAB_CTRLI=chr(9)
KEYTAB_CTRLJ=chr(10)
KEYTAB_CTRLK=chr(11)
KEYTAB_CTRLL=chr(12)
KEYTAB_CTRLM=chr(13)
KEYTAB_CTRLN=chr(14)
KEYTAB_CTRLO=chr(15)
KEYTAB_CTRLP=chr(16)
KEYTAB_CTRLQ=chr(17)
KEYTAB_CTRLR=chr(18)
KEYTAB_CTRLS=chr(19)
KEYTAB_CTRLT=chr(20)
KEYTAB_CTRLU=chr(21)
KEYTAB_CTRLV=chr(22)
KEYTAB_CTRLW=chr(23)
KEYTAB_CTRLX=chr(24)
KEYTAB_CTRLY=chr(25)
KEYTAB_CTRLZ=chr(26)
KEYTAB_CTRLLEFT='ctrl-left'
KEYTAB_CTRLRIGHT='ctrl-right'
KEYTAB_CTRLHOME='ctrl-home'
KEYTAB_CTRLEND='ctrl-end'
KEYTAB_DELC="delc"
KEYTAB_DLGCANCEL="cancel"
KEYTAB_DLGNOP=KEYTAB_NOKEY
KEYTAB_DLGOK="ok"
KEYTAB_DOWN="down"
KEYTAB_END="end"
KEYTAB_ESC=chr(27)
KEYTAB_F00="fk00"
KEYTAB_F01="fk01"
KEYTAB_F02="fk02"
KEYTAB_F03="fk03"
KEYTAB_F04="fk04"
KEYTAB_F05="fk05"
KEYTAB_F06="fk06"
KEYTAB_F07="fk07"
KEYTAB_F08="fk08"
KEYTAB_F09="fk09"
KEYTAB_F10="fk10"
KEYTAB_F11="fk11"
KEYTAB_F12="fk12"
KEYTAB_F13="fk13"
KEYTAB_F14="fk14"
KEYTAB_F15="fk15"
KEYTAB_F16="fk16"
KEYTAB_F17="fk17"
KEYTAB_F18="fk18"
KEYTAB_F19="fk19"
KEYTAB_F20="fk20"
KEYTAB_F21="fk21"
KEYTAB_F22="fk22"
KEYTAB_F23="fk23"
KEYTAB_F24="fk24"
KEYTAB_F25="fk25"
KEYTAB_F26="fk26"
KEYTAB_F27="fk27"
KEYTAB_F28="fk28"
KEYTAB_F29="fk29"
KEYTAB_F30="fk30"
KEYTAB_F31="fk31"
KEYTAB_F32="fk32"
KEYTAB_F33="fk33"
KEYTAB_F34="fk34"
KEYTAB_F35="fk35"
KEYTAB_F36="fk36"
KEYTAB_F37="fk37"
KEYTAB_F38="fk38"
KEYTAB_F39="fk39"
KEYTAB_F40="fk40"
KEYTAB_F41="fk41"
KEYTAB_F42="fk42"
KEYTAB_F43="fk43"
KEYTAB_F44="fk44"
KEYTAB_F45="fk45"
KEYTAB_F46="fk46"
KEYTAB_F47="fk47"
KEYTAB_F48="fk48"
KEYTAB_F49="fk49"
KEYTAB_F50="fk50"
KEYTAB_F51="fk51"
KEYTAB_F52="fk52"
KEYTAB_F53="fk53"
KEYTAB_F54="fk54"
KEYTAB_F55="fk55"
KEYTAB_F56="fk56"
KEYTAB_F57="fk57"
KEYTAB_F58="fk58"
KEYTAB_F59="fk59"
KEYTAB_F60="fk60"
KEYTAB_F61="fk61"
KEYTAB_F62="fk62"
KEYTAB_F63="fk63"
KEYTAB_HOME="home"
KEYTAB_INSERT="insert"
KEYTAB_KEYPADPLUS=chr(27)+'Ok'
KEYTAB_KEYTPADMINUS=chr(27)+'Om'
KEYTAB_LEFT="left"
KEYTAB_PAGEDOWN="pagedown"
KEYTAB_PAGEUP="pageup"
KEYTAB_REFRESH="refresh"
KEYTAB_RESIZE="resize"
KEYTAB_RIGHT="right"
KEYTAB_SPACE=' '
KEYTAB_TAB=chr(9)
KEYTAB_UP="up"
KEYTAB_MOUSE="mouse"
name_to_key = {
"KEYTAB_ALTA" : KEYTAB_ALTA,
"KEYTAB_ALTB" : KEYTAB_ALTB,
"KEYTAB_ALTC" : KEYTAB_ALTC,
"KEYTAB_ALTD" : KEYTAB_ALTD,
"KEYTAB_ALTE" : KEYTAB_ALTE,
"KEYTAB_ALTF" : KEYTAB_ALTF,
"KEYTAB_ALTG" : KEYTAB_ALTG,
"KEYTAB_ALTH" : KEYTAB_ALTH,
"KEYTAB_ALTI" : KEYTAB_ALTI,
"KEYTAB_ALTJ" : KEYTAB_ALTJ,
"KEYTAB_ALTK" : KEYTAB_ALTK,
"KEYTAB_ALTL" : KEYTAB_ALTL,
"KEYTAB_ALTM" : KEYTAB_ALTM,
"KEYTAB_ALTN" : KEYTAB_ALTN,
"KEYTAB_ALTo" : KEYTAB_ALTo,
"KEYTAB_ALTO" : KEYTAB_ALTO,
"KEYTAB_ALTP" : KEYTAB_ALTP,
"KEYTAB_ALTQ" : KEYTAB_ALTQ,
"KEYTAB_ALTR" : KEYTAB_ALTR,
"KEYTAB_ALTS" : KEYTAB_ALTS,
"KEYTAB_ALTT" : KEYTAB_ALTT,
"KEYTAB_ALTU" : KEYTAB_ALTU,
"KEYTAB_ALTV" : KEYTAB_ALTV,
"KEYTAB_ALTW" : KEYTAB_ALTW,
"KEYTAB_ALTX" : KEYTAB_ALTX,
"KEYTAB_ALTY" : KEYTAB_ALTY,
"KEYTAB_ALTZ" : KEYTAB_ALTZ,
"KEYTAB_BACKSPACE" : KEYTAB_BACKSPACE,
"KEYTAB_BACKTAB" : KEYTAB_BACKTAB,
"KEYTAB_BTAB" : KEYTAB_BTAB,
"KEYTAB_CR" : KEYTAB_CR,
"KEYTAB_CTRLA" : KEYTAB_CTRLA,
"KEYTAB_CTRLB" : KEYTAB_CTRLB,
"KEYTAB_CTRLC" : KEYTAB_CTRLC,
"KEYTAB_CTRLD" : KEYTAB_CTRLD,
"KEYTAB_CTRLE" : KEYTAB_CTRLE,
"KEYTAB_CTRLF" : KEYTAB_CTRLF,
"KEYTAB_CTRLG" : KEYTAB_CTRLG,
"KEYTAB_CTRLH" : KEYTAB_CTRLH,
"KEYTAB_CTRLI" : KEYTAB_CTRLI,
"KEYTAB_CTRLJ" : KEYTAB_CTRLJ,
"KEYTAB_CTRLK" : KEYTAB_CTRLK,
"KEYTAB_CTRLL" : KEYTAB_CTRLL,
"KEYTAB_CTRLM" : KEYTAB_CTRLM,
"KEYTAB_CTRLN" : KEYTAB_CTRLN,
"KEYTAB_CTRLO" : KEYTAB_CTRLO,
"KEYTAB_CTRLP" : KEYTAB_CTRLP,
"KEYTAB_CTRLQ" : KEYTAB_CTRLQ,
"KEYTAB_CTRLR" : KEYTAB_CTRLR,
"KEYTAB_CTRLS" : KEYTAB_CTRLS,
"KEYTAB_CTRLT" : KEYTAB_CTRLT,
"KEYTAB_CTRLU" : KEYTAB_CTRLU,
"KEYTAB_CTRLV" : KEYTAB_CTRLV,
"KEYTAB_CTRLW" : KEYTAB_CTRLW,
"KEYTAB_CTRLX" : KEYTAB_CTRLX,
"KEYTAB_CTRLY" : KEYTAB_CTRLY,
"KEYTAB_CTRLZ" : KEYTAB_CTRLZ,
"KEYTAB_CTRLLEFT" : KEYTAB_CTRLLEFT,
"KEYTAB_CTRLRIGHT" : KEYTAB_CTRLRIGHT,
"KEYTAB_CTRLHOME" : KEYTAB_CTRLHOME,
"KEYTAB_CTRLEND" : KEYTAB_CTRLEND,
"KEYTAB_DELC" : KEYTAB_DELC,
"KEYTAB_DLGCANCEL" : KEYTAB_DLGCANCEL,
"KEYTAB_DLGNOP" : KEYTAB_DLGNOP,
"KEYTAB_DLGOK" : KEYTAB_DLGOK,
"KEYTAB_DOWN" : KEYTAB_DOWN,
"KEYTAB_END" : KEYTAB_END,
"KEYTAB_ESC" : KEYTAB_ESC,
"KEYTAB_F00" : KEYTAB_F00,
"KEYTAB_F01" : KEYTAB_F01,
"KEYTAB_F02" : KEYTAB_F02,
"KEYTAB_F03" : KEYTAB_F03,
"KEYTAB_F04" : KEYTAB_F04,
"KEYTAB_F05" : KEYTAB_F05,
"KEYTAB_F06" : KEYTAB_F06,
"KEYTAB_F07" : KEYTAB_F07,
"KEYTAB_F08" : KEYTAB_F08,
"KEYTAB_F09" : KEYTAB_F09,
"KEYTAB_F10" : KEYTAB_F10,
"KEYTAB_F11" : KEYTAB_F11,
"KEYTAB_F12" : KEYTAB_F12,
"KEYTAB_F13" : KEYTAB_F13,
"KEYTAB_F14" : KEYTAB_F14,
"KEYTAB_F15" : KEYTAB_F15,
"KEYTAB_F16" : KEYTAB_F16,
"KEYTAB_F17" : KEYTAB_F17,
"KEYTAB_F18" : KEYTAB_F18,
"KEYTAB_F19" : KEYTAB_F19,
"KEYTAB_F20" : KEYTAB_F20,
"KEYTAB_F21" : KEYTAB_F21,
"KEYTAB_F22" : KEYTAB_F22,
"KEYTAB_F23" : KEYTAB_F23,
"KEYTAB_F24" : KEYTAB_F24,
"KEYTAB_F25" : KEYTAB_F25,
"KEYTAB_F26" : KEYTAB_F26,
"KEYTAB_F27" : KEYTAB_F27,
"KEYTAB_F28" : KEYTAB_F28,
"KEYTAB_F29" : KEYTAB_F29,
"KEYTAB_F30" : KEYTAB_F30,
"KEYTAB_F31" : KEYTAB_F31,
"KEYTAB_F32" : KEYTAB_F32,
"KEYTAB_F33" : KEYTAB_F33,
"KEYTAB_F34" : KEYTAB_F34,
"KEYTAB_F35" : KEYTAB_F35,
"KEYTAB_F36" : KEYTAB_F36,
"KEYTAB_F37" : KEYTAB_F37,
"KEYTAB_F38" : KEYTAB_F38,
"KEYTAB_F39" : KEYTAB_F39,
"KEYTAB_F40" : KEYTAB_F40,
"KEYTAB_F41" : KEYTAB_F41,
"KEYTAB_F42" : KEYTAB_F42,
"KEYTAB_F43" : KEYTAB_F43,
"KEYTAB_F44" : KEYTAB_F44,
"KEYTAB_F45" : KEYTAB_F45,
"KEYTAB_F46" : KEYTAB_F46,
"KEYTAB_F47" : KEYTAB_F47,
"KEYTAB_F48" : KEYTAB_F48,
"KEYTAB_F49" : KEYTAB_F49,
"KEYTAB_F50" : KEYTAB_F50,
"KEYTAB_F51" : KEYTAB_F51,
"KEYTAB_F52" : KEYTAB_F52,
"KEYTAB_F53" : KEYTAB_F53,
"KEYTAB_F54" : KEYTAB_F54,
"KEYTAB_F55" : KEYTAB_F55,
"KEYTAB_F56" : KEYTAB_F56,
"KEYTAB_F57" : KEYTAB_F57,
"KEYTAB_F58" : KEYTAB_F58,
"KEYTAB_F59" : KEYTAB_F59,
"KEYTAB_F60" : KEYTAB_F60,
"KEYTAB_F61" : KEYTAB_F61,
"KEYTAB_F62" : KEYTAB_F62,
"KEYTAB_F63" : KEYTAB_F63,
"KEYTAB_HOME" : KEYTAB_HOME,
"KEYTAB_INSERT" : KEYTAB_INSERT,
"KEYTAB_KEYPADPLUS" : KEYTAB_KEYPADPLUS,
"KEYTAB_KEYTPADMINUS" : KEYTAB_KEYTPADMINUS,
"KEYTAB_LEFT" : KEYTAB_LEFT,
"KEYTAB_NOKEY" : KEYTAB_NOKEY,
"KEYTAB_PAGEDOWN" : KEYTAB_PAGEDOWN,
"KEYTAB_PAGEUP" : KEYTAB_PAGEUP,
"KEYTAB_REFRESH" : KEYTAB_REFRESH,
"KEYTAB_RESIZE" : KEYTAB_RESIZE,
"KEYTAB_RIGHT" : KEYTAB_RIGHT,
"KEYTAB_SPACE" : KEYTAB_SPACE,
"KEYTAB_TAB" : KEYTAB_TAB,
"KEYTAB_UP" : KEYTAB_UP,
"KEYTAB_MOUSE" : KEYTAB_MOUSE,
}
key_to_name = {}
for name, key in list(name_to_key.items()):
    key_to_name[key] = name
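The `keydef` table below pairs -1-terminated tuples of raw byte values with symbolic key names. A self-contained sketch of the matching idea, using a hypothetical three-entry table instead of the real curses-based one:

```python
# Hypothetical mini key table: tuples of byte values, -1 terminated,
# in the same shape as the full keydef list.
MINI_KEYDEF = [
    ((27, -1), 'esc'),
    ((27, ord('a'), -1), 'alt-a'),
    ((10, -1), 'cr'),
]

def match_sequence(seq):
    """Return the symbolic name whose definition matches 'seq' exactly, else None."""
    probe = tuple(seq) + (-1,)
    for keys, name in MINI_KEYDEF:
        if keys == probe:
            return name
    return None

print(match_sequence([27, ord('a')]))  # -> alt-a
```

A real reader would additionally need prefix handling (ESC alone vs ESC-prefixed sequences), which is why the terminator byte matters: it distinguishes a complete match from a prefix of a longer definition.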
keydef = [
((0,),KEYTAB_NOKEY),
((27,-1,),KEYTAB_ESC),
((27,ord('a'),-1),KEYTAB_ALTA),
((27,ord('b'),-1),KEYTAB_ALTB),
((27,ord('c'),-1),KEYTAB_ALTC),
((27,ord('d'),-1),KEYTAB_ALTD),
((27,ord('e'),-1),KEYTAB_ALTE),
((27,ord('f'),-1),KEYTAB_ALTF),
((27,ord('g'),-1),KEYTAB_ALTG),
((27,ord('h'),-1),KEYTAB_ALTH),
((27,ord('i'),-1),KEYTAB_ALTI),
((27,ord('j'),-1),KEYTAB_ALTJ),
((27,ord('k'),-1),KEYTAB_ALTK),
((27,ord('l'),-1),KEYTAB_ALTL),
((27,ord('m'),-1),KEYTAB_ALTM),
((27,ord('n'),-1),KEYTAB_ALTN),
((27,ord('o'),-1),KEYTAB_ALTo),
((27,ord('p'),-1),KEYTAB_ALTP),
((27,ord('q'),-1),KEYTAB_ALTQ),
((27,ord('r'),-1),KEYTAB_ALTR),
((27,ord('s'),-1),KEYTAB_ALTS),
((27,ord('t'),-1),KEYTAB_ALTT),
((27,ord('u'),-1),KEYTAB_ALTU),
((27,ord('v'),-1),KEYTAB_ALTV),
((27,ord('w'),-1),KEYTAB_ALTW),
((27,ord('x'),-1),KEYTAB_ALTX),
((27,ord('y'),-1),KEYTAB_ALTY),
((27,ord('z'),-1),KEYTAB_ALTZ),
((27,ord('A'),-1),KEYTAB_ALTA),
((27,ord('B'),-1),KEYTAB_ALTB),
((27,ord('C'),-1),KEYTAB_ALTC),
((27,ord('D'),-1),KEYTAB_ALTD),
((27,ord('E'),-1),KEYTAB_ALTE),
((27,ord('F'),-1),KEYTAB_ALTF),
((27,ord('G'),-1),KEYTAB_ALTG),
((27,ord('H'),-1),KEYTAB_ALTH),
((27,ord('I'),-1),KEYTAB_ALTI),
((27,ord('J'),-1),KEYTAB_ALTJ),
((27,ord('K'),-1),KEYTAB_ALTK),
((27,ord('L'),-1),KEYTAB_ALTL),
((27,ord('M'),-1),KEYTAB_ALTM),
((27,ord('N'),-1),KEYTAB_ALTN),
((27,ord('O'),-1),KEYTAB_ALTO),
((27,ord('P'),-1),KEYTAB_ALTP),
((27,ord('Q'),-1),KEYTAB_ALTQ),
((27,ord('R'),-1),KEYTAB_ALTR),
((27,ord('S'),-1),KEYTAB_ALTS),
((27,ord('T'),-1),KEYTAB_ALTT),
((27,ord('U'),-1),KEYTAB_ALTU),
((27,ord('V'),-1),KEYTAB_ALTV),
((27,ord('W'),-1),KEYTAB_ALTW),
((27,ord('X'),-1),KEYTAB_ALTX),
((27,ord('Y'),-1),KEYTAB_ALTY),
((27,ord('Z'),-1),KEYTAB_ALTZ),
((curses.KEY_BACKSPACE,-1),KEYTAB_BACKSPACE),
((8,-1),KEYTAB_BACKSPACE),
((127,-1),KEYTAB_BACKSPACE),
((27,ord('['),ord('Z'),-1),KEYTAB_BACKTAB),
((curses.KEY_BTAB,-1),KEYTAB_BTAB),
((10,-1),KEYTAB_CR),
((1,-1),KEYTAB_CTRLA),
((2,-1),KEYTAB_CTRLB),
((3,-1),KEYTAB_CTRLC),
((4,-1),KEYTAB_CTRLD),
((5,-1),KEYTAB_CTRLE),
((6,-1),KEYTAB_CTRLF),
((7,-1),KEYTAB_CTRLG),
((8,-1),KEYTAB_CTRLH),
((9,-1),KEYTAB_CTRLI),
((10,-1),KEYTAB_CTRLJ),
((11,-1),KEYTAB_CTRLK),
((12,-1),KEYTAB_CTRLL),
((13,-1),KEYTAB_CTRLM),
((14,-1),KEYTAB_CTRLN),
((15,-1),KEYTAB_CTRLO),
((16,-1),KEYTAB_CTRLP),
((17,-1),KEYTAB_CTRLQ),
((18,-1),KEYTAB_CTRLR),
((19,-1),KEYTAB_CTRLS),
((20,-1),KEYTAB_CTRLT),
((21,-1),KEYTAB_CTRLU),
((22,-1),KEYTAB_CTRLV),
((23,-1),KEYTAB_CTRLW),
((24,-1),KEYTAB_CTRLX),
((25,-1),KEYTAB_CTRLY),
((26,-1),KEYTAB_CTRLZ),
((545,-1),KEYTAB_CTRLLEFT),
((560,-1),KEYTAB_CTRLRIGHT),
((530,-1),KEYTAB_CTRLHOME),
((525,-1),KEYTAB_CTRLEND),
((curses.KEY_DC,-1),KEYTAB_DELC),
((curses.KEY_DOWN,-1),KEYTAB_DOWN),
((curses.KEY_END,-1),KEYTAB_END),
((curses.KEY_F0,-1),KEYTAB_F00),
((curses.KEY_F1,-1),KEYTAB_F01),
((curses.KEY_F2,-1),KEYTAB_F02),
((curses.KEY_F3,-1),KEYTAB_F03),
((curses.KEY_F4,-1),KEYTAB_F04),
((curses.KEY_F5,-1),KEYTAB_F05),
((curses.KEY_F6,-1),KEYTAB_F06),
((curses.KEY_F7,-1),KEYTAB_F07),
((curses.KEY_F8,-1),KEYTAB_F08),
((curses.KEY_F9,-1),KEYTAB_F09),
((curses.KEY_F10,-1),KEYTAB_F10),
((curses.KEY_F11,-1),KEYTAB_F11),
((curses.KEY_F12,-1),KEYTAB_F12),
((curses.KEY_F13,-1),KEYTAB_F13),
((curses.KEY_F14,-1),KEYTAB_F14),
((curses.KEY_F15,-1),KEYTAB_F15),
((curses.KEY_F16,-1),KEYTAB_F16),
((curses.KEY_F17,-1),KEYTAB_F17),
((curses.KEY_F18,-1),KEYTAB_F18),
((curses.KEY_F19,-1),KEYTAB_F19),
((curses.KEY_F20,-1),KEYTAB_F20),
((curses.KEY_F21,-1),KEYTAB_F21),
((curses.KEY_F22,-1),KEYTAB_F22),
((curses.KEY_F23,-1),KEYTAB_F23),
((curses.KEY_F24,-1),KEYTAB_F24),
((curses.KEY_F25,-1),KEYTAB_F25),
((curses.KEY_F26,-1),KEYTAB_F26),
((curses.KEY_F27,-1),KEYTAB_F27),
((curses.KEY_F28,-1),KEYTAB_F28),
((curses.KEY_F29,-1),KEYTAB_F29),
((curses.KEY_F30,-1),KEYTAB_F30),
((curses.KEY_F31,-1),KEYTAB_F31),
((curses.KEY_F32,-1),KEYTAB_F32),
((curses.KEY_F33,-1),KEYTAB_F33),
((curses.KEY_F34,-1),KEYTAB_F34),
((curses.KEY_F35,-1),KEYTAB_F35),
((curses.KEY_F36,-1),KEYTAB_F36),
((curses.KEY_F37,-1),KEYTAB_F37),
((curses.KEY_F38,-1),KEYTAB_F38),
((curses.KEY_F39,-1),KEYTAB_F39),
((curses.KEY_F40,-1),KEYTAB_F40),
((curses.KEY_F41,-1),KEYTAB_F41),
((curses.KEY_F42,-1),KEYTAB_F42),
((curses.KEY_F43,-1),KEYTAB_F43),
((curses.KEY_F44,-1),KEYTAB_F44),
((curses.KEY_F45,-1),KEYTAB_F45),
((curses.KEY_F46,-1),KEYTAB_F46),
((curses.KEY_F47,-1),KEYTAB_F47),
((curses.KEY_F48,-1),KEYTAB_F48),
((curses.KEY_F49,-1),KEYTAB_F49),
((curses.KEY_F50,-1),KEYTAB_F50),
((curses.KEY_F51,-1),KEYTAB_F51),
((curses.KEY_F52,-1),KEYTAB_F52),
((curses.KEY_F53,-1),KEYTAB_F53),
((curses.KEY_F54,-1),KEYTAB_F54),
((curses.KEY_F55,-1),KEYTAB_F55),
((curses.KEY_F56,-1),KEYTAB_F56),
((curses.KEY_F57,-1),KEYTAB_F57),
((curses.KEY_F58,-1),KEYTAB_F58),
((curses.KEY_F59,-1),KEYTAB_F59),
((curses.KEY_F60,-1),KEYTAB_F60),
((curses.KEY_F61,-1),KEYTAB_F61),
((curses.KEY_F62,-1),KEYTAB_F62),
((curses.KEY_F63,-1),KEYTAB_F63),
((curses.KEY_HOME,-1),KEYTAB_HOME),
((curses.KEY_IC,-1),KEYTAB_INSERT),
((27,ord('O'),ord('k'),-1),KEYTAB_KEYPADPLUS),
((27,ord('O'),ord('m'),-1),KEYTAB_KEYTPADMINUS),
((curses.KEY_LEFT,-1),KEYTAB_LEFT),
((curses.KEY_NPAGE,-1),KEYTAB_PAGEDOWN),
((curses.KEY_PPAGE,-1),KEYTAB_PAGEUP),
((curses.KEY_RESIZE,-1),KEYTAB_RESIZE),
((curses.KEY_RIGHT,-1),KEYTAB_RIGHT),
((ord(' '),-1),KEYTAB_SPACE),
((9,-1),KEYTAB_TAB),
((curses.KEY_UP,-1),KEYTAB_UP),
((curses.KEY_MOUSE,-1),KEYTAB_MOUSE),
]
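# The (byte-sequence, name) pairs in keydef are typically consumed by an
# exact-match (or longest-prefix) lookup; note that some sequences appear
# more than once above (e.g. 8, 9, 10), so match order decides which name
# wins. A minimal, hypothetical sketch of such a resolver (a two-entry demo
# table, not the real keydef) could look like:

```python
# Minimal sketch: resolve an input byte sequence against (sequence, name)
# pairs shaped like keydef above. The table here is hypothetical; -1 marks
# the end of a definition, mirroring the sentinel used in keydef.
def lookup_key(table, keys):
    """Return the first name whose definition matches `keys` exactly."""
    seq = tuple(keys) + (-1,)
    for definition, name in table:
        if definition == seq:
            return name
    return None

demo_table = [
    ((27, -1), "KEYTAB_ESC"),
    ((27, ord('a'), -1), "KEYTAB_ALTA"),
]

print(lookup_key(demo_table, [27]))            # KEYTAB_ESC
print(lookup_key(demo_table, [27, ord('a')]))  # KEYTAB_ALTA
```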
| 27.043568 | 60 | 0.715919 | 2,123 | 13,035 | 4.065473 | 0.149317 | 0.137875 | 0.011123 | 0.006952 | 0.119801 | 0.119801 | 0.115398 | 0.102422 | 0.102422 | 0.102422 | 0 | 0.099828 | 0.062447 | 13,035 | 481 | 61 | 27.099792 | 0.606415 | 0.008592 | 0 | 0.004219 | 0 | 0 | 0.164215 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.00211 | 0 | 0.00211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8001f091db832c8853cc1001b477b8ee30ecdde0 | 19,438 | py | Python | agilegan/transfer.py | open-mmlab/MMGEN-FaceStylor | 67e7b17a323ac5a3eae28b88e0b2c921876b50ca | [
"Apache-2.0"
] | 122 | 2021-12-10T06:19:03.000Z | 2022-03-27T12:30:42.000Z | agilegan/transfer.py | open-mmlab/MMGEN-FaceStylor | 67e7b17a323ac5a3eae28b88e0b2c921876b50ca | [
"Apache-2.0"
] | 1 | 2021-12-24T10:07:41.000Z | 2021-12-24T10:07:41.000Z | agilegan/transfer.py | open-mmlab/MMGEN-FaceStylor | 67e7b17a323ac5a3eae28b88e0b2c921876b50ca | [
"Apache-2.0"
] | 12 | 2021-12-10T10:38:19.000Z | 2022-02-08T12:54:46.000Z | from copy import deepcopy
import torch
import torch.nn.functional as F
from mmgen.models.builder import MODELS, build_module
from mmgen.models.common import set_requires_grad
from mmgen.models.gans.static_unconditional_gan import StaticUnconditionalGAN
from torch.nn.parallel.distributed import _find_tensors
from .losses.lpips.lpips import LPIPS
from mmgen.models.architectures.common import get_module_device # isort:skip # noqa
def requires_grad(model, flag=True, target_layer=None):
"""Set the `requires_grad` of the model target layer to flag.
Args:
model (nn.Module): Model to be set.
flag (bool, optional): Flag for `requires_grad`.
Defaults to True.
target_layer (str | None, optional): Name or Key words of
target layer. Defaults to None.
"""
for name, param in model.named_parameters():
if target_layer is None or target_layer in name:
param.requires_grad = flag
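# The `target_layer` filter above is a plain substring match on parameter
# names. A torch-free sketch of the same selection logic, with a
# hypothetical name-to-flag dict standing in for `model.named_parameters()`:

```python
# Sketch of the substring-based selection in requires_grad(): flip the flag
# only for entries whose name contains target_layer. Names are hypothetical.
def set_flags(param_flags, flag=True, target_layer=None):
    for name in param_flags:
        if target_layer is None or target_layer in name:
            param_flags[name] = flag
    return param_flags

flags = {"convs.0.weight": True, "convs.1.weight": True, "to_rgbs.0.bias": True}
set_flags(flags, flag=False, target_layer="convs")
print(flags)  # only the convs.* entries are frozen
```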
@MODELS.register_module()
class PSPTransfer(StaticUnconditionalGAN):
def __init__(self,
src_generator,
generator,
discriminator,
gan_loss,
disc_auxiliary_loss=None,
gen_auxiliary_loss=None,
lpips_lambda=0.8,
freezeG=-1,
freezeD=-1,
freezeStyle=-1,
structure_loss_layer=-1,
sample_space='zplus',
train_cfg=None,
test_cfg=None):
super().__init__(generator,
discriminator,
gan_loss,
disc_auxiliary_loss=disc_auxiliary_loss,
gen_auxiliary_loss=gen_auxiliary_loss,
train_cfg=train_cfg,
test_cfg=test_cfg)
self.src_g_cfg = deepcopy(src_generator)
self.source_generator = build_module(src_generator)
self.lpips_lambda = lpips_lambda
if self.lpips_lambda > 0:
self.lpips_loss = LPIPS(net_type='vgg').eval()
else:
self.lpips_loss = None
self.structure_loss_layer = structure_loss_layer
set_requires_grad(self.source_generator, False)
set_requires_grad(self.generator.style_mapping, False)
self.face_pool = torch.nn.AdaptiveAvgPool2d((256, 256))
# freeze parameters
self.freezeD = freezeD
self.freezeG = freezeG
self.freezeStyle = freezeStyle
self.sample_space = sample_space
def latent_generator(self, batch_size):
device = get_module_device(self.generator)
z_plus_code = torch.randn(batch_size, 18, 512).to(device)
w_plus_code = [
self.source_generator.style_mapping(s) for s in z_plus_code
]
w_plus_code = [torch.stack(w_plus_code, dim=0)]
return w_plus_code
def _get_gen_loss(self, outputs_dict):
# Construct losses dict. If you hope some items to be included in the
# computational graph, you have to add 'loss' in its name. Otherwise,
# items without 'loss' in their name will just be used to print
# information.
losses_dict = {}
# gan loss
losses_dict['loss_disc_fake_g'] = self.gan_loss(
outputs_dict['disc_pred_fake_g'],
target_is_real=True,
is_disc=False)
# TODO: add modified LPIPS
source_results = self.source_generator(outputs_dict['latents'],
input_is_latent=True)
resized_source_results = self.face_pool(source_results)
resized_target_results = self.face_pool(outputs_dict['fake_imgs'])
# LPIPS loss
if self.lpips_loss is not None:
losses_dict['loss_sim'] = self.lpips_lambda * self.lpips_loss(
x=resized_source_results, y=resized_target_results)
# structure loss
if self.structure_loss_layer > 0:
losses_dict['loss_structure'] = 0.
for layer in range(self.structure_loss_layer):
generator_source = outputs_dict['gen_source']
generator = outputs_dict['gen'].module
_, latent_med_sor = generator_source.swap_forward(
outputs_dict['latents'],
input_is_latent=True,
swap=True,
swap_layer_num=layer + 1)
_, latent_med_tar = generator.swap_forward(
outputs_dict['latents'],
input_is_latent=True,
swap=True,
swap_layer_num=layer + 1)
losses_dict['loss_structure'] += F.mse_loss(
latent_med_tar, latent_med_sor)
# gen auxiliary loss
if self.with_gen_auxiliary_loss:
for loss_module in self.gen_auxiliary_losses:
loss_ = loss_module(outputs_dict)
if loss_ is None:
continue
# mmcv.print_log(f'get loss for {loss_module.name()}')
# the `loss_name()` function return name as 'loss_xxx'
if loss_module.loss_name() in losses_dict:
losses_dict[loss_module.loss_name(
)] = losses_dict[loss_module.loss_name()] + loss_
else:
losses_dict[loss_module.loss_name()] = loss_
loss, log_var = self._parse_losses(losses_dict)
return loss, log_var, source_results
def freeze_before_train_d(self):
requires_grad(self.generator, False)
requires_grad(self.discriminator, False)
# obtain some params
g_log_size = self.generator.module.log_size
if hasattr(self.generator.module, 'num_layers'):
g_num_layers = self.generator.module.num_layers
else:
g_num_layers = self.generator.module.num_injected_noises
d_log_size = self.discriminator.module.log_size
# Freeze !!!
# set_requires_grad(self.discriminator, True)
if self.freezeG > 0 and self.freezeD > 0:
# G
for layer in range(self.freezeG):
requires_grad(self.generator,
False,
target_layer=f'convs.{g_num_layers-2-2*layer}')
requires_grad(self.generator,
False,
target_layer=f'convs.{g_num_layers-3-2*layer}')
requires_grad(self.generator,
False,
target_layer=f'to_rgbs.{g_log_size-3-layer}')
# D
for layer in range(self.freezeD):
requires_grad(self.discriminator,
True,
target_layer=f'convs.{d_log_size-2-layer}')
requires_grad(self.discriminator, True,
target_layer='final_') # final_conv, final_linear
elif self.freezeG > 0:
# G
for layer in range(self.freezeG):
requires_grad(self.generator,
False,
target_layer=f'convs.{g_num_layers-2-2*layer}')
requires_grad(self.generator,
False,
target_layer=f'convs.{g_num_layers-3-2*layer}')
requires_grad(self.generator,
False,
target_layer=f'to_rgbs.{g_log_size-3-layer}')
# D
requires_grad(self.discriminator, True)
elif self.freezeD > 0:
# G
requires_grad(self.generator, False)
# D
for layer in range(self.freezeD):
requires_grad(self.discriminator,
True,
target_layer=f'convs.{d_log_size-2-layer}')
requires_grad(self.discriminator, True,
target_layer='final_') # final_conv, final_linear
else:
# G
requires_grad(self.generator, False)
# D
requires_grad(self.discriminator, True)
def freeze_before_train_g(self):
# Freeze !!!
requires_grad(self.generator, False)
requires_grad(self.discriminator, False)
# obtain some params
g_log_size = self.generator.module.log_size
if hasattr(self.generator.module, 'num_layers'):
g_num_layers = self.generator.module.num_layers
else:
g_num_layers = self.generator.module.num_injected_noises
d_log_size = self.discriminator.module.log_size
if self.freezeG > 0 and self.freezeD > 0:
# G
for layer in range(self.freezeG):
requires_grad(self.generator,
True,
target_layer=f'convs.{g_num_layers-2-2*layer}')
requires_grad(self.generator,
True,
target_layer=f'convs.{g_num_layers-3-2*layer}')
requires_grad(self.generator,
True,
target_layer=f'to_rgbs.{g_log_size-3-layer}')
# D
for layer in range(self.freezeD):
requires_grad(self.discriminator,
False,
target_layer=f'convs.{d_log_size-2-layer}')
requires_grad(self.discriminator, False,
target_layer='final_') # final_conv, final_linear
elif self.freezeG > 0:
# G
for layer in range(self.freezeG):
requires_grad(self.generator,
True,
target_layer=f'convs.{g_num_layers-2-2*layer}')
requires_grad(self.generator,
True,
target_layer=f'convs.{g_num_layers-3-2*layer}')
requires_grad(self.generator,
True,
target_layer=f'to_rgbs.{g_log_size-3-layer}')
# D
requires_grad(self.discriminator, False)
elif self.freezeD > 0:
# G
requires_grad(self.generator, True)
# D
for layer in range(self.freezeD):
requires_grad(self.discriminator,
False,
target_layer=f'convs.{d_log_size-2-layer}')
requires_grad(self.discriminator, False,
target_layer='final_') # final_conv, final_linear
else:
# G
requires_grad(self.generator, True)
# D
requires_grad(self.discriminator, False)
def train_step(self,
data_batch,
optimizer,
ddp_reducer=None,
loss_scaler=None,
use_apex_amp=False,
running_status=None):
"""Train step function.
This function implements the standard training iteration for
asynchronous adversarial training. Namely, in each iteration, we first
update discriminator and then compute loss for generator with the newly
updated discriminator.
As for distributed training, we use the ``reducer`` from ddp to
synchronize the necessary params in current computational graph.
Args:
data_batch (dict): Input data from dataloader.
optimizer (dict): Dict contains optimizer for generator and
discriminator.
ddp_reducer (:obj:`Reducer` | None, optional): Reducer from ddp.
It is used to prepare for ``backward()`` in ddp. Defaults to
None.
loss_scaler (:obj:`torch.cuda.amp.GradScaler` | None, optional):
The loss/gradient scaler used for auto mixed-precision
training. Defaults to ``None``.
use_apex_amp (bool, optional). Whether to use apex.amp. Defaults to
``False``.
running_status (dict | None, optional): Contains necessary basic
information for training, e.g., iteration number. Defaults to
None.
Returns:
dict: Contains 'log_vars', 'num_samples', and 'results'.
"""
# get data from data_batch
real_imgs = data_batch[self.real_img_key]
# If you adopt ddp, this batch size is local batch size for each GPU.
# If you adopt dp, this batch size is the global batch size as usual.
batch_size = real_imgs.shape[0]
# get running status
if running_status is not None:
curr_iter = running_status['iteration']
else:
# dirty workaround for not providing running status
if not hasattr(self, 'iteration'):
self.iteration = 0
curr_iter = self.iteration
# disc training
self.freeze_before_train_d()
optimizer['discriminator'].zero_grad()
# TODO: add noise sampler to customize noise sampling
with torch.no_grad():
if self.sample_space == 'zplus':
latents = self.latent_generator(batch_size)
fake_imgs = self.generator(latents, input_is_latent=True)
else:
out_dict = self.generator(None,
num_batches=batch_size,
return_latents=True)
latents = [out_dict['latent']]
fake_imgs = out_dict['fake_img']
# disc pred for fake imgs and real_imgs
disc_pred_fake = self.discriminator(fake_imgs)
disc_pred_real = self.discriminator(real_imgs)
# get data dict to compute losses for disc
data_dict_ = dict(gen=self.generator,
disc=self.discriminator,
disc_pred_fake=disc_pred_fake,
disc_pred_real=disc_pred_real,
fake_imgs=fake_imgs,
real_imgs=real_imgs,
iteration=curr_iter,
batch_size=batch_size,
loss_scaler=loss_scaler)
loss_disc, log_vars_disc = self._get_disc_loss(data_dict_)
# prepare for backward in ddp. If you do not call this function before
# back propagation, the ddp will not dynamically find the used params
# in current computation.
if ddp_reducer is not None:
ddp_reducer.prepare_for_backward(_find_tensors(loss_disc))
if loss_scaler:
# add support for fp16
loss_scaler.scale(loss_disc).backward()
elif use_apex_amp:
from apex import amp
with amp.scale_loss(loss_disc,
optimizer['discriminator'],
loss_id=0) as scaled_loss_disc:
scaled_loss_disc.backward()
else:
loss_disc.backward()
if loss_scaler:
loss_scaler.unscale_(optimizer['discriminator'])
# note that we do not contain clip_grad procedure
loss_scaler.step(optimizer['discriminator'])
# loss_scaler.update will be called in runner.train()
else:
optimizer['discriminator'].step()
# skip generator training if only train discriminator for current
# iteration
if (curr_iter + 1) % self.disc_steps != 0:
results = dict(fake_imgs=fake_imgs.cpu(),
real_imgs=real_imgs.cpu())
outputs = dict(log_vars=log_vars_disc,
num_samples=batch_size,
results=results)
if hasattr(self, 'iteration'):
self.iteration += 1
return outputs
# generator training
self.freeze_before_train_g()
optimizer['generator'].zero_grad()
# TODO: add noise sampler to customize noise sampling
if self.sample_space == 'zplus':
latents = self.latent_generator(batch_size)
fake_imgs = self.generator(latents, input_is_latent=True)
else:
out_dict = self.generator(None,
num_batches=batch_size,
return_latents=True)
latents = [out_dict['latent']]
fake_imgs = out_dict['fake_img']
disc_pred_fake_g = self.discriminator(fake_imgs)
data_dict_ = dict(gen=self.generator,
disc=self.discriminator,
gen_source=self.source_generator,
fake_imgs=fake_imgs,
disc_pred_fake_g=disc_pred_fake_g,
iteration=curr_iter,
batch_size=batch_size,
loss_scaler=loss_scaler,
latents=latents)
loss_gen, log_vars_g, source_results = self._get_gen_loss(data_dict_)
# prepare for backward in ddp. If you do not call this function before
# back propagation, the ddp will not dynamically find the used params
# in current computation.
if ddp_reducer is not None:
ddp_reducer.prepare_for_backward(_find_tensors(loss_gen))
if loss_scaler:
loss_scaler.scale(loss_gen).backward()
elif use_apex_amp:
from apex import amp
with amp.scale_loss(loss_gen, optimizer['generator'],
loss_id=1) as scaled_loss_disc:
scaled_loss_disc.backward()
else:
loss_gen.backward()
if loss_scaler:
loss_scaler.unscale_(optimizer['generator'])
# note that we do not contain clip_grad procedure
loss_scaler.step(optimizer['generator'])
# loss_scaler.update will be called in runner.train()
else:
optimizer['generator'].step()
# update ada p
if hasattr(self.discriminator.module,
'with_ada') and self.discriminator.module.with_ada:
self.discriminator.module.ada_aug.log_buffer[0] += 1
self.discriminator.module.ada_aug.log_buffer[
1] += disc_pred_real.sign()
self.discriminator.module.ada_aug.update(iteration=curr_iter,
num_batches=batch_size)
log_vars_disc['ada_prob'] = (
self.discriminator.module.ada_aug.aug_pipeline.p.data)
log_vars = {}
log_vars.update(log_vars_g)
log_vars.update(log_vars_disc)
results = dict(fake_imgs=fake_imgs.cpu(),
real_imgs=real_imgs.cpu(),
src_g_imgs=source_results.cpu(),
src_g_imgs_bgr=source_results[:, [2, 1, 0], ...].cpu())
outputs = dict(log_vars=log_vars,
num_samples=batch_size,
results=results)
if hasattr(self, 'iteration'):
self.iteration += 1
return outputs
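# train_step() above skips the generator update unless
# (curr_iter + 1) % disc_steps == 0, so with disc_steps=2 the generator
# trains on every second iteration. A small sketch of that gating (names
# hypothetical):

```python
# Sketch of the disc_steps gating used in train_step(): the discriminator
# updates every iteration, the generator only when the modulo test passes.
def trained_modules(num_iters, disc_steps):
    log = []
    for curr_iter in range(num_iters):
        log.append("D")  # discriminator always trains
        if (curr_iter + 1) % disc_steps == 0:
            log.append("G")  # generator trains on this iteration
    return log

print(trained_modules(4, disc_steps=2))  # ['D', 'D', 'G', 'D', 'D', 'G']
```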
| 41.712446 | 85 | 0.554635 | 2,139 | 19,438 | 4.773259 | 0.134175 | 0.047013 | 0.054848 | 0.046523 | 0.520764 | 0.479334 | 0.469148 | 0.435847 | 0.426836 | 0.409011 | 0 | 0.00588 | 0.3701 | 19,438 | 465 | 86 | 41.802151 | 0.827997 | 0.164214 | 0 | 0.513846 | 0 | 0 | 0.051348 | 0.028589 | 0 | 0 | 0 | 0.004301 | 0 | 1 | 0.021538 | false | 0 | 0.033846 | 0 | 0.070769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
80021b4bd92a24d71fffed003a89ba58002247b3 | 1,766 | py | Python | tests/unit/utils/aws/test_aws_responses.py | Madajevas/localstack | 85c712e50d45183b9703c682de02d5114c50c47c | [
"Apache-2.0"
] | 2 | 2021-11-19T00:06:54.000Z | 2021-12-26T02:03:47.000Z | tests/unit/utils/aws/test_aws_responses.py | Madajevas/localstack | 85c712e50d45183b9703c682de02d5114c50c47c | [
"Apache-2.0"
] | null | null | null | tests/unit/utils/aws/test_aws_responses.py | Madajevas/localstack | 85c712e50d45183b9703c682de02d5114c50c47c | [
"Apache-2.0"
] | null | null | null | import xml.etree.ElementTree as ET
import pytest
from localstack.utils.aws.aws_responses import to_xml
result_raw = {
"DescribeChangeSetResult": {
# ...
"Changes": [
{
"ResourceChange": {
"Replacement": False,
"Scope": ["Tags"],
},
"Type": "Resource",
}
]
}
}
result_raw_none_element = {"a": {"b": None}}
result_raw_empty_list = {"a": {"b": []}}
result_raw_multiple_members = {"a": {"b": ["c", "d"]}}
@pytest.mark.parametrize(
"test_input,included",
[
(
result_raw,
"<member><ResourceChange><Replacement>False</Replacement><Scope><member>Tags</member></Scope></ResourceChange><Type>Resource</Type></member>",
),
(result_raw_none_element, "<b />"),
(result_raw_empty_list, "<b />"),
(result_raw_multiple_members, "<b><member>c</member><member>d</member></b>"),
],
)
def test_to_xml(test_input, included):
result = to_xml(test_input)
result_str = str(ET.tostring(result, short_empty_elements=True))
assert included in result_str
@pytest.mark.parametrize(
"test_input", [lambda: None, lambda: [], lambda: "", lambda: 0]
) # direct literals here trip up pytest
def test_to_xml_raise_error_simpleinputs(test_input):
with pytest.raises(Exception):
to_xml(test_input())
class SomeClass:
pass
result_raw_class_value = {"a": {"b": SomeClass()}}
multiple_root = {"a": "b", "c": "d"}
empty_dict = {}
@pytest.mark.parametrize("test_input", [multiple_root, empty_dict, result_raw_class_value])
def test_to_xml_raise_error_malformeddict(test_input):
with pytest.raises(Exception):
to_xml(test_input)
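# The to_xml() helper under test converts a single-root nested dict into an
# ElementTree node, wrapping list items in <member> tags and raising on
# malformed input. A simplified stand-in (not localstack's implementation)
# built on the stdlib, to illustrate the shape the assertions expect:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the to_xml() under test: exactly one root key,
# dicts become child elements, list items are wrapped in <member> tags.
def dict_to_xml(data):
    (root_key, value), = data.items()  # raises ValueError unless one root
    root = ET.Element(root_key)
    _fill(root, value)
    return root

def _fill(elem, value):
    if isinstance(value, dict):
        for key, child in value.items():
            _fill(ET.SubElement(elem, key), child)
    elif isinstance(value, list):
        for item in value:
            _fill(ET.SubElement(elem, "member"), item)
    elif value is not None:
        elem.text = str(value)

tree = dict_to_xml({"a": {"b": ["c", "d"]}})
print(ET.tostring(tree).decode())  # <a><b><member>c</member><member>d</member></b></a>
```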
| 26.358209 | 154 | 0.610985 | 203 | 1,766 | 5.019704 | 0.334975 | 0.088322 | 0.035329 | 0.054956 | 0.274779 | 0.13739 | 0.09421 | 0.09421 | 0.09421 | 0.09421 | 0 | 0.000737 | 0.231597 | 1,766 | 66 | 155 | 26.757576 | 0.750184 | 0.022084 | 0 | 0.08 | 0 | 0.02 | 0.186195 | 0.11891 | 0 | 0 | 0 | 0 | 0.02 | 1 | 0.06 | false | 0.02 | 0.06 | 0 | 0.14 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
80033e1784f2fa309970f5133dc3cfd33c45fae1 | 991 | py | Python | server/accounts/schema.py | kingbar1990/react-apollo-django-subscriptions-boilerplate | e6b948ab7f211157e6ef0592a9d1b943a0d88b13 | [
"MIT"
] | null | null | null | server/accounts/schema.py | kingbar1990/react-apollo-django-subscriptions-boilerplate | e6b948ab7f211157e6ef0592a9d1b943a0d88b13 | [
"MIT"
] | 22 | 2020-06-05T19:56:44.000Z | 2022-03-11T23:41:45.000Z | server/accounts/schema.py | kingbar1990/react-apollo-django-subscriptions-boilerplate | e6b948ab7f211157e6ef0592a9d1b943a0d88b13 | [
"MIT"
] | null | null | null | import graphene
from graphene_django.types import DjangoObjectType
from graphql_jwt.decorators import login_required
from django.contrib.auth import get_user_model
class UserType(DjangoObjectType):
""" UserType object for GraphQL """
class Meta:
model = get_user_model()
has_unreaded_messages = graphene.Boolean()
def resolve_has_unreaded_messages(self, info):
""" Return True or False depending on if user has unreaded messages """
unreaded_rooms = self.rooms.all()
unreaded_rooms = unreaded_rooms.filter(last_message__seen=False).exclude(
last_message__sender_id=self.id)
return unreaded_rooms.exists()
class Query:
users = graphene.List(UserType)
me = graphene.Field(UserType)
def resolve_users(self, info):
return get_user_model().objects.all()
@login_required
def resolve_me(self, info):
return info.context.user
| 27.527778 | 81 | 0.694248 | 122 | 991 | 5.409836 | 0.45082 | 0.078788 | 0.054545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001307 | 0.228052 | 991 | 35 | 82 | 28.314286 | 0.861438 | 0.092836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.166667 | 0.083333 | 0.708333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
800640b204af733d917dcb2f2cc81ab285988b93 | 5,393 | py | Python | ual_teleop/scripts/joy_teleop.py | ros-ual/ual | 9f03926fadf094d6f55a9b75fc1cae39c71c2045 | [
"MIT"
] | null | null | null | ual_teleop/scripts/joy_teleop.py | ros-ual/ual | 9f03926fadf094d6f55a9b75fc1cae39c71c2045 | [
"MIT"
] | null | null | null | ual_teleop/scripts/joy_teleop.py | ros-ual/ual | 9f03926fadf094d6f55a9b75fc1cae39c71c2045 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import yaml
import argparse
import rospy
import rospkg
import math
from joy_handle import JoyHandle, ButtonState
from sensor_msgs.msg import Joy
from ual_core.srv import TakeOff, Land, SetVelocity
from ual_core.msg import State
from geometry_msgs.msg import TwistStamped
from geometry_msgs.msg import PoseStamped
class JoyTeleop:
def __init__(self, joy_name):
action_file = rospkg.RosPack().get_path('ual_teleop') + '/config/joy_teleop.yaml'
with open(action_file, 'r') as action_config:
action_map = yaml.safe_load(action_config)['joy_actions']
self.joy_handle = JoyHandle(joy_name, action_map)
take_off_url = 'ual/take_off'
land_url = 'ual/land'
velocity_url = 'ual/set_velocity'
rospy.wait_for_service(take_off_url)
rospy.wait_for_service(land_url)
self.take_off = rospy.ServiceProxy(take_off_url, TakeOff)
self.land = rospy.ServiceProxy(land_url, Land)
self.velocity_pub = rospy.Publisher(velocity_url, TwistStamped, queue_size=1)
self.ual_state = State()
self.headless = False
self.uav_yaw = 0.0
self.gains_table = [0.5, 0.8, 1.0, 1.3, 1.8, 2.1, 2.5]
self.gain_index = 2
def state_callback(self, data):
self.ual_state = data
def pose_callback(self, data):
self.uav_yaw = 2.0 * math.atan2(data.pose.orientation.z, data.pose.orientation.w)
def joy_callback(self, data):
self.joy_handle.update(data)
# print self.joy_handle # DEBUG
if self.joy_handle.get_action_button('secure'):
if self.joy_handle.get_action_button_state('take_off') is ButtonState.JUST_PRESSED and self.ual_state.state == State.LANDED_ARMED:
rospy.loginfo("Taking off")
self.take_off(2.0, False) # TODO(franreal): takeoff height?
if self.joy_handle.get_action_button_state('land') is ButtonState.JUST_PRESSED and self.ual_state.state == State.FLYING_AUTO:
rospy.loginfo("Landing")
self.land(False)
if self.headless == True and (self.joy_handle.get_action_button_state('toggle_headless') is ButtonState.JUST_PRESSED):
rospy.loginfo("Exiting headless mode")
self.headless = False
elif self.headless == False and (self.joy_handle.get_action_button_state('toggle_headless') is ButtonState.JUST_PRESSED):
rospy.loginfo("Entering headless mode")
self.headless = True
if self.joy_handle.get_action_button_state('speed_down') is ButtonState.JUST_PRESSED:
self.gain_index = self.gain_index - 1 if self.gain_index > 0 else 0
rospy.loginfo("Speed level: %d", self.gain_index)
if self.joy_handle.get_action_button_state('speed_up') is ButtonState.JUST_PRESSED:
max_index = len(self.gains_table) - 1
self.gain_index = self.gain_index + 1 if self.gain_index < max_index else max_index
rospy.loginfo("Speed level: %d", self.gain_index)
if self.ual_state.state == State.FLYING_AUTO:
vel_cmd = TwistStamped()
vel_cmd.header.stamp = rospy.Time.now()
# TODO: Use frame_id = 'uav_1' in not-headless mode?
vel_cmd.header.frame_id = 'map'
if self.headless:
vel_cmd.twist.linear.x = self.gains_table[self.gain_index] * self.joy_handle.get_action_axis('move_forward')
vel_cmd.twist.linear.y = self.gains_table[self.gain_index] * self.joy_handle.get_action_axis('move_right')
vel_cmd.twist.linear.z = self.gains_table[self.gain_index] * self.joy_handle.get_action_axis('move_up')
vel_cmd.twist.angular.z = self.joy_handle.get_action_axis('move_yaw')
else:
x = self.gains_table[self.gain_index] * self.joy_handle.get_action_axis('move_forward')
y = self.gains_table[self.gain_index] * self.joy_handle.get_action_axis('move_right')
vel_cmd.twist.linear.x = (x*math.cos(self.uav_yaw) - y*math.sin(self.uav_yaw))
vel_cmd.twist.linear.y = (x*math.sin(self.uav_yaw) + y*math.cos(self.uav_yaw))
vel_cmd.twist.linear.z = self.gains_table[self.gain_index] * self.joy_handle.get_action_axis('move_up')
vel_cmd.twist.angular.z = self.joy_handle.get_action_axis('move_yaw')
self.velocity_pub.publish(vel_cmd)
def main():
# Parse arguments
parser = argparse.ArgumentParser(description='Teleoperate ual with a joystick')
parser.add_argument('-joy_name', type=str, default=None,
help='Joystick name, must have a equally named .yaml file in ual_teleop/config/joysticks folder')
args, unknown = parser.parse_known_args()
# utils.check_unknown_args(unknown)
rospy.init_node('joy_teleop', anonymous=True)
if args.joy_name is None:
default_joy = 'saitek_p3200'
rospy.loginfo("Using default joy [%s]", default_joy)
args.joy_name = default_joy
teleop = JoyTeleop(args.joy_name)
rospy.Subscriber('ual/state', State, teleop.state_callback)
rospy.Subscriber('ual/pose', PoseStamped, teleop.pose_callback) # TODO: Use ground truth
rospy.Subscriber('ual_teleop/joy', Joy, teleop.joy_callback)
rospy.spin()
if __name__ == '__main__':
main()
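# In non-headless mode, joy_callback() rotates the stick inputs from the
# body frame into the map frame using the UAV yaw:
# x' = x*cos(psi) - y*sin(psi), y' = x*sin(psi) + y*cos(psi).
# A quick stdlib check of that rotation:

```python
import math

# Body-frame to map-frame velocity rotation, as used in joy_callback for
# non-headless mode. yaw is the UAV heading in radians.
def body_to_map(x, y, yaw):
    return (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))

# With a 90-degree yaw, "forward" in the body frame points along map +y.
vx, vy = body_to_map(1.0, 0.0, math.pi / 2)
print(round(vx, 6), round(vy, 6))  # 0.0 1.0
```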
| 49.027273 | 142 | 0.672168 | 760 | 5,393 | 4.502632 | 0.231579 | 0.049971 | 0.068381 | 0.070134 | 0.395675 | 0.355932 | 0.355932 | 0.325248 | 0.304793 | 0.283168 | 0 | 0.007853 | 0.220842 | 5,393 | 109 | 143 | 49.477064 | 0.806521 | 0.038198 | 0 | 0.088889 | 0 | 0 | 0.101564 | 0.009654 | 0 | 0 | 0 | 0.009174 | 0 | 1 | 0.055556 | false | 0 | 0.122222 | 0 | 0.188889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
800826e08a1cf06e466211edc2d614832eb36a58 | 4,910 | py | Python | pyNastran/femutils/coord_transforms.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | pyNastran/femutils/coord_transforms.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | pyNastran/femutils/coord_transforms.py | jtran10/pyNastran | 4aed8e05b91576c2b50ee835f0497a9aad1d2cb0 | [
"BSD-3-Clause"
] | null | null | null | """
Defines general coordinate system related functions including:
- xyz_to_rtz_array(xyz)
- xyz_to_rtp_array(xyz)
- rtz_to_xyz_array(xyz)
- rtp_to_xyz_array(xyz)
- rtz_to_rtp_array(xyz)
- rtp_to_rtz_array(xyz)
- coords = cylindrical_rotation_matrix(thetar, dtype='float64')
"""
# pylint: disable=C0103
from __future__ import print_function, division
import numpy as np
# xyz to xxx transforms
def xyz_to_rtz_array(xyz):
"""
xyz to R-theta-z transform::
y R
| /
| /
| / theta
*------------x
.. math:: x = R \cos(\theta)
.. math:: y = R \sin(\theta)
Returns
-------
xyz : (3,) float ndarray
the point in the local coordinate system
"""
xyz = np.atleast_2d(xyz)
assert len(xyz.shape) == 2, xyz.shape
x = xyz[:, 0]
y = xyz[:, 1]
theta = np.degrees(np.arctan2(y, x))
R = np.sqrt(x * x + y * y)
return np.array([R, theta, xyz[:, 2]], dtype=xyz.dtype).T
def xyz_to_rtp_array(xyz):
"""rho-theta-phi to xyz transform"""
xyz = np.atleast_2d(xyz)
assert len(xyz.shape) == 2, xyz.shape
x = xyz[:, 0]
y = xyz[:, 1]
z = xyz[:, 2]
rho = np.sqrt(x * x + y * y + z * z)
phi = np.degrees(np.arctan2(y, x))
#i = np.where(rho == 0.0)
#if len(i):
theta = np.zeros(len(z), dtype=z.dtype)
ir = np.where(rho != 0.0)
theta[ir] = np.degrees(np.arccos(z[ir] / rho[ir]))
return np.array([rho, theta, phi], dtype=xyz.dtype).T
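# xyz_to_rtp_array and rtp_to_xyz_array should be inverses of each other.
# A scalar round-trip sanity check with the stdlib, using the same
# degree-based angle convention as the array versions above:

```python
import math

# Scalar versions of the xyz <-> rho-theta-phi transforms above, for a
# quick round-trip check (theta/phi in degrees, as in the array functions).
def xyz_to_rtp(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)
    phi = math.degrees(math.atan2(y, x))
    theta = math.degrees(math.acos(z / rho)) if rho else 0.0
    return rho, theta, phi

def rtp_to_xyz(rho, thetad, phid):
    theta, phi = math.radians(thetad), math.radians(phid)
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))

point = (1.0, 2.0, 3.0)
back = rtp_to_xyz(*xyz_to_rtp(*point))
print([round(v, 6) for v in back])  # [1.0, 2.0, 3.0]
```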
#---------------------------------------------------------------
# xxx to xyz transforms
def rtz_to_xyz_array(rtz):
r"""
R-theta-z to xyz transform::
y R
| /
| /
| / theta
*------------x
.. math:: x = R \cos(\theta)
.. math:: y = R \sin(\theta)
Returns
-------
xyz : (3,) float ndarray
the point in the local coordinate system
"""
rtz = np.atleast_2d(rtz)
assert len(rtz.shape) == 2, rtz.shape
R = rtz[:, 0]
theta = np.radians(rtz[:, 1])
x = R * np.cos(theta)
y = R * np.sin(theta)
xyz = np.array([x, y, rtz[:, 2]], dtype=rtz.dtype).T
return xyz
def rtp_to_xyz_array(rtp):
"""
rho-theta-phi to xyz transform
Returns
-------
xyz : (3,) float ndarray
the x, y, z in the local coordinate system
"""
rtp = np.atleast_2d(rtp)
assert len(rtp.shape) == 2, rtp.shape
R = rtp[:, 0]
theta = np.radians(rtp[:, 1])
phi = np.radians(rtp[:, 2])
x = R * np.sin(theta) * np.cos(phi)
y = R * np.sin(theta) * np.sin(phi)
z = R * np.cos(theta)
return np.array([x, y, z], dtype=rtp.dtype).T
#---------------------------------------------------------------
# rtz/rtp and rtp/rtz transforms
def rtz_to_rtp_array(rtz):
"""R-theta-z to rho-theta-phi transform"""
rtz = np.atleast_2d(rtz)
r = rtz[:, 0]
thetad = rtz[:, 1]
z = rtz[:, 2]
rho = (r**2 + z**2)**0.5
irho0 = np.where(rho > 0.0)[0]
dtype = thetad.dtype
# We need to choose a default. The equation for phi is:
# phi = acos(z/sqrt(x^2 + y^2 + z^2))
#
# If we let x and y go to 0, we're left with z/z=1 and
# phi = acos(1) = 0. The other alternative is to let
# z -> 0 and x/y be non-zero, but that leaves us with
# a 90 degree angle, which feels wrong.
#
phi = np.full(thetad.shape, 0., dtype=thetad.dtype)
phi[irho0] = np.degrees(np.arccos(z[irho0] / rho[irho0]))
return np.array([rho, thetad, phi], dtype=dtype).T
def rtp_to_rtz_array(rtp):
"""rho-theta-phi to R-theta-z transform"""
rtp = np.atleast_2d(rtp)
rho = rtp[:, 0]
thetad = rtp[:, 1]
phid = rtp[:, 2]
phi = np.radians(phid)
r = rho * np.sin(phi)
z = rho * np.cos(phi)
return np.array([r, thetad, z], dtype=rtp.dtype).T
def cylindrical_rotation_matrix(thetar, dtype='float64'):
"""
    Creates a series of transformation matrices to rotate by some angle theta
Parameters
----------
thetar : (n, ) float ndarray
the theta in radians
dtype : dtype/str
the type of the output matrix
Returns
-------
rotation : (ntheta, 3, 3)
the rotation matrices
"""
theta = np.asarray(thetar, dtype=dtype)
ntheta = len(theta)
cos_theta = np.cos(theta)
sin_theta = np.sin(theta)
    # Rotation about the z-axis:
    #   [[cos(theta), -sin(theta), 0],
    #    [sin(theta),  cos(theta), 0],
    #    [0,           0,          1]]
    # (could probably be vectorized more elegantly)
rotation = np.zeros((ntheta, 3, 3), dtype=dtype)
rotation[:, 0, 0] = cos_theta
rotation[:, 0, 1] = -sin_theta
rotation[:, 1, 1] = cos_theta
rotation[:, 1, 0] = sin_theta
rotation[:, 2, 2] = 1.
return rotation
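A sketch of how the stacked matrices from `cylindrical_rotation_matrix` might be applied (the inline `rot` construction mirrors the function above; `np.einsum` performs the batched matrix-vector product, one matrix per point):

```python
import numpy as np

# build one z-axis rotation matrix per angle, as the function above does
thetar = np.radians([0.0, 90.0, 180.0])
c, s = np.cos(thetar), np.sin(thetar)
rot = np.zeros((len(thetar), 3, 3))
rot[:, 0, 0] = c
rot[:, 0, 1] = -s
rot[:, 1, 0] = s
rot[:, 1, 1] = c
rot[:, 2, 2] = 1.0

points = np.tile([1.0, 0.0, 0.0], (3, 1))        # same point, three angles
rotated = np.einsum('nij,nj->ni', rot, points)   # batched matrix @ vector
assert np.allclose(rotated[1], [0.0, 1.0, 0.0])  # 90 deg: x-axis -> y-axis
```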
# greww/_envs.py (repo: A-Hilaly/greww, license: Apache-2.0)
import os
env = lambda var : os.environ[var]
try:
GREWW_PATH = env('GREWW_PATH')
GREWW_CACHE = env('GREWW_CACHE')
GREWW_CONFIG = env('GREWW_CONFIG')
except KeyError:
    # environment variables not set; derive the paths from this file's location
def _dispatch_path(fdir):
fn, fd = '', ''
bo = False
for e in fdir[::-1]:
if bo:
fd += e
elif e == '/':
bo = True
else:
fn += e
return fd[::-1], fn[::-1]
_tmp = str(os.path.abspath(__file__))
_tmp, _ = _dispatch_path(_tmp)
_tmp, _ = _dispatch_path(_tmp)
GREWW_PATH = _tmp
GREWW_CACHE = _tmp + "/cache"
GREWW_CONFIG = _tmp + "/pkg/config"
env = {
'GREWW_PATH' : GREWW_PATH,
'GREWW_CACHE' : GREWW_CACHE,
'GREWW_CONFIG' : GREWW_CONFIG,
}
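For reference, the `_dispatch_path` fallback above reimplements what `os.path.split` already does for the '/'-separated paths used here; a minimal equivalent sketch (`dispatch_path` is a hypothetical name):

```python
import os.path

# hypothetical stand-in for the _dispatch_path fallback above:
# split a path into (directory, filename) at the last '/'
def dispatch_path(fdir):
    return os.path.split(fdir)

head, tail = dispatch_path('/opt/greww/_envs.py')
assert (head, tail) == ('/opt/greww', '_envs.py')
```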
# kmcsim/buildtools/mklatt.py (repo: vlcekl/kmcpy, license: MIT)
#!//anaconda/envs/py36/bin/python
#
# File name: mklatt.py
# Date: 2018/08/02 16:15
# Author: Lukas Vlcek
#
# Description:
#
import sys
import re
def make_sc(box):
print('sc', *box)
return
def make_bcc(box):
print('bcc', *box)
return
def make_fcc(box):
lx, ly, lz = box
latt = {}
latt['nat'] = lx*ly*lz
latt['box'] = ['fcc', 2*lx, ly, lz]
latt['xyzs'] = []
# box dimensions in lattice units
# layer number
for iz in range(lz):
# layer structure
for iy in range(ly):
for ix in range(lx):
rx = 2*ix + (iy + iz)%2
latt['xyzs'].append(['Ni', rx, iy, iz])
return latt
def write_latt(latt, fname):
with open(fname, 'w') as fo:
fo.write('{0:d}\n'.format(latt['nat']))
fo.write('{0} {1:d} {2:d} {3:d}\n'.format(*latt['box']))
for row in latt['xyzs']:
fo.write('{0} {1:d} {2:d} {3:d}\n'.format(*row))
if __name__ == "__main__":
# dictionary of lattice build functions
make_lattice = {'fcc':make_fcc, 'bcc':make_bcc, 'sc':make_sc}
# read lattice type and parameters
with open(sys.argv[1], 'r') as f:
# lattice type
        ltype = re.findall(r'\S+', f.readline())[0]
# box dimensions
        box = [int(d) for d in re.findall(r'\S+', f.readline())]
# make lattice
latt = make_lattice[ltype](box)
write_latt(latt, 'init.xyz')
# end of mklatt.py
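The fcc site generation above can be exercised in isolation; this sketch (with a hypothetical `fcc_sites` helper) reproduces the same x-doubling and layer-parity offset used in `make_fcc`:

```python
# layer-parity fcc site generator mirroring make_fcc above (hypothetical helper)
def fcc_sites(lx, ly, lz):
    return [('Ni', 2 * ix + (iy + iz) % 2, iy, iz)
            for iz in range(lz) for iy in range(ly) for ix in range(lx)]

sites = fcc_sites(2, 2, 2)
assert len(sites) == 8              # one site per (ix, iy, iz) cell
assert sites[0] == ('Ni', 0, 0, 0)  # even iy+iz: no parity offset
assert ('Ni', 1, 1, 0) in sites     # odd iy+iz: x shifted by 1
```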
# run_experiment.py (repo: weelingtan/LocIT, license: Apache-2.0)
# -*- coding: UTF-8 -*-
"""
Run experiments.
:author: Vincent Vercruyssen (2019)
:license: Apache License, Version 2.0, see LICENSE for details.
"""
import sys, os, time, argparse
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
# transfer models
from models.locit import apply_LocIT
from models.transferall import apply_transferall
from models.coral import apply_CORAL
# anomaly detection
from models.knno import apply_kNNO
from models.iforest import apply_iForest
# for the remaining baselines, see implementations:
#
# HBOS --> https://github.com/Kanatoko/HBOS-python/blob/master/hbos.py
# LOF --> https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html
# TCA --> https://github.com/Vincent-Vercruyssen/transfertools
# CBIT --> https://github.com/Vincent-Vercruyssen/transfertools
# GFK --> https://github.com/jindongwang/transferlearning/tree/master/code
# JGSA --> https://github.com/jindongwang/transferlearning/tree/master/code
# JDA --> https://github.com/jindongwang/transferlearning/tree/master/code
# TJM --> https://github.com/jindongwang/transferlearning/tree/master/code
# ----------------------------------------------------------------------------
# run experiment
# ----------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description='Run transfer learning - anomaly detection experiment')
parser.add_argument('-d', '--dataset', type=str, default='', help='dataset = folder in data/ directory')
parser.add_argument('-m', '--method', type=str, default='', help='method to use')
args, unknownargs = parser.parse_known_args()
# difficulty dictionary
transfer_difficulty = {
'n1_a1': 1, # (n1, a1)
'n1_a2': 2, # (n1, a2)
'n2_a1': 4, # (n2, a1)
'n12_a1': 3, # ((n1, n2), a1) -> not in shuttle and most other datasets
'n12_a12': 3, # ((n1, n2), (a1, a2))
'n2_a2': 5} # (n2, a2)
# combos = [(n1, a1), (n1, a2), (n2, a1), (n2, a2), (n2, n1), ((n1, n2), (a1, a2))]
# combo_names = ['n1_a1', 'n1_a2', 'n2_a1', 'n2_a2', 'n2_n1', 'n12_a12']
# load the data
# main_path is '/home/tanwl/LocIT'
main_path = os.path.dirname(os.path.abspath(__file__))
# data_path is '/home/tanwl/LocIT/data/shuttle'
data_path = os.path.join(main_path, 'data', args.dataset)
print('The experiments are executed on the ' + args.dataset.lower() + ' data')
print('The experiments are executed using the ' + args.method.lower() + ' method')
source_sets, target_sets = _load_and_preprocess_data(data_path)
# source_sets is now a dictionary {'shuttle_source_n1_a1_v0': (550,10) Pandas Dataframe,
# 'shuttle_source_n1_a1_v1': (550,10) Pandas Dataframe,
# ...}
# target_sets is now a dictionary {'shuttle_v0': (550,10) Pandas Dataframe,
# 'shuttle_v1': (550,10) Pandas Dataframe,
# ...}
# apply algorithms - every combination of source and target
auc_results = dict()
dataset_name = ''
# tgt_name is 'shuttle_v0'
# target_data is (550, 10)
for tgt_name, target_data in target_sets.items():
# dataset_name is 'shuttle'
dataset_name = tgt_name.split('_v')[0]
# target data
# Xt is (550, 9) numpy array i.e. the samples
# yt is (550,) numpy array i.e. the labels
Xt = target_data.iloc[:, :-1].values
yt = target_data.iloc[:, -1].values
# This is not used though
# ixtl is (550,) numpy array containing only index positions of places where entries are NOT 0.0
# e.g: testing = np.array([-1, -1, 0.0, -1, 0.0, -1, -1, -1, -1, 0.0, -1])
# Hence, np.where(testing != 0.0)[0] gives array([ 0, 1, 3, 5, 6, 7, 8, 10])
ixtl = np.where(yt != 0.0)[0]
# This is not used though
# nt is 550
nt, _ = Xt.shape
# transfer from each source domain
# src_name is 'shuttle_source_n1_a1_v0', v1, ..., v4
# ...
# src_name is 'shuttle_source_n12_a12_v0', v1, .., v4
# source_data is (550, 10)
for src_name, source_data in source_sets.items():
# source data
# Xs is (550, 9) numpy array i.e. the samples
# ys is (550,) numpy array i.e. the labels
Xs = source_data.iloc[:, :-1].values
ys = source_data.iloc[:, -1].values
# ns is 550
ns, _ = Xs.shape
# actual transfer + anomaly detection
# TRANSFER METHODS
# For all cases, target_scores is (550,), i.e. predicted probabilities of the sample being anomalous, for each of the 550 target domain samples
if args.method.lower() == 'locit':
target_scores = apply_LocIT(Xs, Xt.copy(), ys, yt.copy(),
k=10, psi=20, scaling=False, supervision='loose',
train_selection='farthest')
elif args.method.lower() == 'transferall':
target_scores = apply_transferall(Xs, Xt.copy(), ys, yt.copy(),
k=10, scaling=True)
elif args.method.lower() == 'coral':
target_scores = apply_CORAL(Xs, Xt.copy(), ys, yt.copy(),
scaling=True)
# UNSUPERVISED ANOMALY DETECTION METHODS
elif args.method.lower() == 'knno':
target_scores = apply_kNNO(Xs, Xt.copy(), ys, yt.copy(), scaling=False)
elif args.method.lower() == 'iforest':
target_scores = apply_iForest(Xs, Xt.copy(), ys, yt.copy(),
n_estimators=100, contamination=0.1)
else:
raise ValueError(args.method,
'is not an implemented/accepted method')
# compute AUC
auc = roc_auc_score(y_true=yt, y_score=target_scores)
# Transfer: 'shuttle_source_n1_a1_v0' --> 'shuttle_v0' AUC = auc
print('Transfer: ', src_name, '\t-->\t', tgt_name, '\tAUC =', auc)
# store the results
# sn = 'n1_a1'
sn = src_name.split('source_')[1].split('_v')[0]
if sn in auc_results.keys():
auc_results[sn].append(auc)
else:
auc_results[sn] = [auc]
# Hence, all auc scores for v0 to v4 for a particular transfer difficulty, e.g: n1_a1, will be stored in a list
# print results
# AUC results on SHUTTLE
# ----------------------
# Difficulty level 1: auc_mean_of_v0_to_v4
# ...
# Done!
print('\n\nAUC results on {}:'.format(dataset_name.upper()))
print('----------------'+'-'*len(dataset_name))
for k, v in auc_results.items():
print(' Difficulty level {}: \t{}'.format(transfer_difficulty[k], np.mean(v)))
print('\nDone!\n')
def _load_and_preprocess_data(data_path):
""" Load and preprocess the data. """
# src_path is '/home/tanwl/LocIT/data/shuttle/source'
# tgt_path is '/home/tanwl/LocIT/data/shuttle/target'
src_path = os.path.join(data_path, 'source')
tgt_path = os.path.join(data_path, 'target')
# source files is ['/home/tanwl/LocIT/data/shuttle/source/shuttle_source_n1_a1_v0.csv', ..., '/home/tanwl/LocIT/data/shuttle/source/shuttle_source_n12_a12_v4.csv']
source_files = [f for f in os.listdir(src_path) if os.path.isfile(os.path.join(src_path, f))]
source_files = [os.path.join(src_path, f) for f in source_files if '.csv' in f]
# target files is ['/home/tanwl/LocIT/data/shuttle/target/shuttle_v0.csv', ... '/home/tanwl/LocIT/data/shuttle/target/shuttle_v9.csv']
target_files = [f for f in os.listdir(tgt_path) if os.path.isfile(os.path.join(tgt_path, f))]
target_files = [os.path.join(tgt_path, f) for f in target_files if '.csv' in f]
# load the data
source_sets = dict()
for sf in source_files:
# data is (550, 10) Pandas DataFrame that looks like this (shuffled already):
# labels: 1.0 is anomaly, -1.0 is normal
'''
0 1 2 3 4 5 6 7 8 labels
0 41.0 -4.0 86.0 0.0 42.0 15.0 46.0 45.0 0.0 -1.0
1 43.0 0.0 84.0 -2.0 44.0 26.0 41.0 41.0 0.0 -1.0
2 55.0 0.0 78.0 0.0 42.0 -2.0 23.0 37.0 14.0 1.0
3 37.0 0.0 91.0 3.0 8.0 0.0 53.0 83.0 30.0 -1.0
4 37.0 0.0 76.0 -7.0 28.0 0.0 39.0 47.0 8.0 -1.0
.. ... ... ... ... ... ... ... ... ... ...
545 43.0 0.0 85.0 0.0 42.0 -14.0 42.0 44.0 2.0 -1.0
546 37.0 0.0 78.0 0.0 -4.0 4.0 42.0 83.0 42.0 -1.0
547 43.0 -1.0 79.0 0.0 42.0 -15.0 35.0 37.0 2.0 -1.0
548 56.0 0.0 76.0 -7.0 -4.0 0.0 20.0 81.0 62.0 1.0
549 37.0 0.0 79.0 5.0 36.0 -15.0 42.0 43.0 2.0 -1.0
'''
data = pd.read_csv(sf, sep=',', index_col=0).sample(frac=1).reset_index(drop=True)
# file_name is 'shuttle_source_n1_a1_v0'
file_name = os.path.split(sf)[1].split('.csv')[0]
source_sets[file_name] = data
# source_sets is now a dictionary {'shuttle_source_n1_a1_v0': (550,10) Pandas Dataframe,
# 'shuttle_source_n1_a1_v1': (550,10) Pandas Dataframe,
# ...}
target_sets = dict()
for sf in target_files:
data = pd.read_csv(sf, sep=',', index_col=0).sample(frac=1).reset_index(drop=True)
file_name = os.path.split(sf)[1].split('.csv')[0]
target_sets[file_name] = data
# target_sets is now a dictionary {'shuttle_v0': (550,10) Pandas Dataframe,
# 'shuttle_v1': (550,10) Pandas Dataframe,
# ...}
return source_sets, target_sets
if __name__ == '__main__':
main()
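The per-transfer metric reported above is ROC AUC of the target scores against the true labels. A dependency-free sketch of the same quantity via the rank (Mann-Whitney) formulation, assuming labels are 1 for anomalies and -1 for normals as in the script:

```python
# rank-based ROC AUC: fraction of (anomaly, normal) pairs the scorer orders
# correctly, with ties counted as half (equivalent to sklearn's roc_auc_score)
def roc_auc(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y != 1]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

assert roc_auc([1, 1, -1, -1], [0.9, 0.8, 0.3, 0.1]) == 1.0  # perfect ranking
assert roc_auc([1, -1], [0.2, 0.7]) == 0.0                   # inverted ranking
```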
# freeze_reqs/pytest_freeze_reqs.py (repo: j-kawa/pytest-freeze-reqs, license: MIT)
import pytest
def pytest_addoption(parser):
group = parser.getgroup("general")
group.addoption(
"--freeze_reqs",
action="store_true",
help="run check if requirements (req*.txt|pip) are frozen",
)
parser.addini(
"freeze-reqs-ignore-paths",
type="linelist",
help="each line specifies a part of path to ignore "
"by pytest-freeze-reqs, example: "
"requirement_dev.txt matches /a/b/c/requirement_dev.txt",
)
parser.addini(
"freeze-reqs-include-paths",
type="linelist",
help="each line specifies a part of path to include "
"by pytest-freeze-reqs, example: "
"/base_requirements.txt matches /a/b/c/base_requirements.txt",
)
def pytest_sessionstart(session):
config = session.config
if config.option.freeze_reqs:
config._freeze_reqs_ignore = config.getini("freeze-reqs-ignore-paths")
config._freeze_reqs_include = config.getini("freeze-reqs-include-paths")
def pytest_collect_file(parent, path):
config = parent.config
if not config.option.freeze_reqs:
return None
if path.ext in (".txt", ".pip") and path.basename.startswith("req"):
for ignore_path in config._freeze_reqs_ignore:
if ignore_path in str(path):
return None
return RequirementFile(path, parent)
else:
for include_path in config._freeze_reqs_include:
if include_path in str(path):
return RequirementFile(path, parent)
class RequirementFile(pytest.File):
def collect(self):
import requirements
with open(str(self.fspath), "r") as fd:
for req in requirements.parse(fd):
yield RequirementItem(req.name, self, req)
class RequirementItem(pytest.Item):
def __init__(self, name, parent, req):
super(RequirementItem, self).__init__(name, parent)
self.add_marker("freeze_reqs")
self.req = req
def runtest(self):
# local files
if self.req.local_file:
return
# revision
if self.req.vcs:
if not self.req.revision:
raise RequirementNotFrozenException(self, self.name, "[no revision]")
else:
return
# pip packages
if not self.req.specs:
raise RequirementNotFrozenException(self, self.name, self.req.specs)
for spec in self.req.specs:
operator, _ = spec
if operator in ("<", "<=", "=="):
return
raise RequirementNotFrozenException(self, self.name, self.req.specs)
def repr_failure(self, excinfo):
""" called when self.runtest() raises an exception. """
if isinstance(excinfo.value, RequirementNotFrozenException):
args = excinfo.value.args
return "\n".join(
[
"requirement freeze test failed",
" improperly frozen requirement: {1!r}: {2!r}".format(*args),
" try adding pkg==version, or git@revision",
]
)
def reportinfo(self):
return (
self.fspath,
0,
"requirement: {name} is not frozen properly.".format(name=self.name),
)
class RequirementNotFrozenException(Exception):
""" custom exception for error reporting. """
# src/test/lcv2-2ray-5.0.1-/v2ray_old_3_2019-1-4/client/macos/mods/tool_mac.py (repo: lucycore/lcv2, license: MIT)
# -- coding:utf-8--
import json
import paramiko
import re
import os
from urllib import request
#---------------------------------------------------------------------------------------
def start_v2():
gzlj = os.getcwd()
key_json_lj = os.path.join(gzlj, "pythonz5", "sun36x64", "v2ray", "config.json")
request.urlretrieve(r"http://60.205.221.103/zzz/v2ray/v2_config_1.json", key_json_lj)
v2ray_start_lj = os.path.join(gzlj, "pythonz5", "sun36x64", "v2ray", "v2ray")
os.system(v2ray_start_lj)
#---------------------------------------------------------------------------------------
def get_v2_json():
    # fetch the v2ray config file
gzlj = os.getcwd()
v2ray_server_json_lj = "http://60.205.221.103/zzz/v2ray/v2_config_1.json"
lj = os.path.join(gzlj, "Desktop", "config.json")
request.urlretrieve(v2ray_server_json_lj, lj)
#---------------------------------------------------------------------------------------
def v2ray_lj_rm():
    # removes the local v2ray installation
gzlj = os.getcwd()
old_python_4 = os.path.join(gzlj, "pythonz5")
remove_dir(old_python_4)
#---------------------------------------------------------------------------------------
def remove_dir(dir):
    # recursively deletes a file or directory
dir = dir.replace('\\', '/')
if(os.path.isdir(dir)):
for p in os.listdir(dir):
remove_dir(os.path.join(dir,p))
if(os.path.exists(dir)):
os.rmdir(dir)
else:
if(os.path.exists(dir)):
os.remove(dir)
#---------------------------------------------------------------------------------------
def v2ray_key_rm():
    # deletes the user's key.json
gzlj = os.getcwd()
key_json_lj = os.path.join(gzlj, "pythonz5", "unsers", "key.json")
os.remove(key_json_lj)
#---------------------------------------------------------------------------------------
def root_tool():
    # launches the admin tool when the unlock file is present
gzlj = os.getcwd()
root_lj = os.path.join(gzlj, "Desktop", "lucycore.txt")
try:
with open(root_lj) as xxx:
beee = xxx.read()
if beee == "v2ray.tool":
tool_core()
    except Exception:
        # unlock file missing or unreadable; skip the tool
        pass
#---------------------------------------------------------------------------------------
def remove_rz():
    # deletes rz.txt on the server
ps = input("server_password:")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('60.205.221.103', username='root', password=ps, timeout=5)
client.exec_command('rm /zzz/rz.txt')
#---------------------------------------------------------------------------------------
def get_rz_txt():
    # log download module
    # fetches rz.txt from the server
ps = input("server_password:")
transport = paramiko.Transport(('60.205.221.103', 22))
transport.connect(username='root', password=ps)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get('/zzz/rz.txt', 'rz.txt')
transport.close()
#---------------------------------------------------------------------------------------
# key (card code) generation module
def mod_1():
key_zd = {}
while True :
key = input("密钥名称:")
key_z = input("密钥时间:")
if key_z == "0":
print("退出写入\n")
input()
break
key_zd[key] = int(key_z)
with open("key.json",'w') as ojbk_1:
json.dump(key_zd,ojbk_1)
def mod_2():
key_zd = {}
while True :
key_z = input("请输入此批密钥时间")
if key_z == "0":
print("退出写入\n")
input()
break
with open("key.txt") as xxx:
for key_zlq in xxx:
print("写入:" + key_zlq)
key_zlqa = key_zlq[:-1]
key_zd[key_zlqa] = int(key_z)
with open("key.json",'w') as ojbk_1:
json.dump(key_zd,ojbk_1)
def mod_3():
key_zd = {}
hhz = 0
with open("key.txt") as xxx:
for key_zlq in xxx:
hhz = hhz + 1
keyzl_1 = str(hhz) + " " + key_zlq
with open("key_1.txt",'a') as hii:
hii.write(keyzl_1)
def mod_4():
key_zd = {}
zl = input("数量(2-5):")
if zl == "2":
with open("1.json") as zx:
zd_1 = json.load(zx)
with open("2.json") as zx:
zd_2 = json.load(zx)
zd_3 = {}
zd_3.update(zd_1)
zd_3.update(zd_2)
with open("key.json",'w') as ojbk:
json.dump(zd_3,ojbk)
if zl == "3":
with open("1.json") as zx:
zd_1 = json.load(zx)
with open("2.json") as zx:
zd_2 = json.load(zx)
with open("3.json") as zx:
zd_4 = json.load(zx)
zd_3 = {}
zd_3.update(zd_1)
zd_3.update(zd_2)
zd_3.update(zd_4)
with open("key.json",'w') as ojbk:
json.dump(zd_3,ojbk)
if zl == "4":
with open("1.json") as zx:
zd_1 = json.load(zx)
with open("2.json") as zx:
zd_2 = json.load(zx)
with open("3.json") as zx:
zd_4 = json.load(zx)
with open("4.json") as zx:
zd_5 = json.load(zx)
zd_3 = {}
zd_3.update(zd_1)
zd_3.update(zd_2)
zd_3.update(zd_4)
zd_3.update(zd_5)
with open("key.json",'w') as ojbk:
json.dump(zd_3,ojbk)
if zl == "5":
with open("1.json") as zx:
zd_1 = json.load(zx)
with open("2.json") as zx:
zd_2 = json.load(zx)
with open("3.json") as zx:
zd_4 = json.load(zx)
with open("4.json") as zx:
zd_5 = json.load(zx)
with open("5.json") as zx:
zd_6 = json.load(zx)
zd_3 = {}
zd_3.update(zd_1)
zd_3.update(zd_2)
zd_3.update(zd_4)
zd_3.update(zd_5)
zd_3.update(zd_6)
with open("key.json",'w') as ojbk:
json.dump(zd_3,ojbk)
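`mod_4` above repeats one branch per file count; a sketch that merges any number of key files in a single loop (hypothetical `merge_key_files` helper, with later files winning on duplicate keys, just like the chained `dict.update` calls):

```python
import io
import json

# merge any number of key -> duration JSON streams into one dict
def merge_key_files(streams):
    merged = {}
    for fh in streams:
        merged.update(json.load(fh))   # later files override earlier ones
    return merged

a = io.StringIO(json.dumps({'k1': 30}))
b = io.StringIO(json.dumps({'k2': 90, 'k1': 60}))
assert merge_key_files([a, b]) == {'k1': 60, 'k2': 90}
```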
def modxzhs(mod):
    if mod == "1":
        print("running mod1\n")
        mod_1()
    if mod == "2":
        print("running mod2\n")
        mod_2()
    if mod == "3":
        print("running mod3\n")
        mod_3()
    if mod == "4":
        print("running mod4\n")
        mod_4()
def kmsc_bt():
    print("choose a generation mode\n")
    print("1. generate from manual input  2. generate from bulk import  3. tidy aifaka export")
    print("4. add keys to an existing set\n")
    mod = input("enter a number:")
    if mod > "0" and mod < "5":
        modxzhs(mod)
#---------------------------------------------------------------------------------------
def rz_fx():
    # analyse the log contents
    gl = False
    a = input("filter out unlabelled users? [y/n]:")
    if a == "y":
        gl = True
    if a == "n":
        gl = False
    # log file
    user_rz = "rz.txt"
    # read the file line by line
    with open(user_rz, encoding='UTF-8') as xxx:
        for hii in xxx:
            time = re.match(r'.{4}(-..){5}', hii)
            if time:
                time_now = hii.strip("\n")
            mac = re.match(r'(..:){5}..', hii)
            if mac:
                userklj = "userk.json"
                hii = hii.strip("\n")
                # open the user info file
                with open(userklj) as zx_1:
                    userk = json.load(zx_1)
                # read the details
                for key, value in userk.items():
                    keyy = value[0]
                    time = value[1]
                    root = value[2]
                    x = value[3]
                    try:
                        bjname = value[4]
                    except IndexError:
                        bjname = "no_name"
                    if key == hii:
                        if gl == True:
                            if bjname != "no_name":
                                print("\n\n" + time_now)
                                print("name: " + bjname)
                                print("key: " + keyy)
                                print("duration: " + time)
                                if root == True:
                                    rootx = "True"
                                else:
                                    rootx = "False"
                                print("level: " + rootx)
                                print("command: " + x)
                        else:
                            print("\n\n" + time_now)
                            print("name: " + bjname)
                            print("key: " + keyy)
                            print("duration: " + time)
                            if root == True:
                                rootx = "True"
                            else:
                                rootx = "False"
                            print("level: " + rootx)
                            print("command: " + x)
#---------------------------------------------------------------------------------------
# look up a user's info by mac address
def userk_c_mac():
    # user info database
    userklj = "userk.json"
    mac = input("mac:")
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    # read the details
    for key, value in userk.items():
        keyy = value[0]
        time = value[1]
        root = value[2]
        x = value[3]
        try:
            bjname = value[4]
        except IndexError:
            bjname = "no_name"
        if key == mac:
            print("key: " + keyy)
            print("duration: " + time)
            if root == True:
                rootx = "True"
            else:
                rootx = "False"
            print("level: " + rootx)
            print("command: " + x)
            print("name: " + bjname)
#---------------------------------------------------------------------------------------
# sets the special command field for every user at once
def userk_x_pl():
    # user info database
    userklj = "userk.json"
    a = input("value:")
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    # update the details
    for key, value in userk.items():
        value[3] = a
    with open(userklj, 'w') as ojbk_1:
        json.dump(userk, ojbk_1)
#---------------------------------------------------------------------------------------
def dq_userk():
    # user info database
    userklj = "userk.json"
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    # walk every user record
    for key, value in userk.items():
        keyy = value[0]
        time = value[1]
        root = value[2]
        x = value[3]
        try:
            name = value[4]
        except IndexError:
            name = "none"
        print("")
        print("user name: " + name)
        print("user mac: " + key)
        print("key: " + keyy)
        print("duration: " + time)
        if root:
            print("root: True")
        else:
            print("root: False")
        print("command tag: " + x)
        print("")
#---------------------------------------------------------------------------------------
def dq_userk_lcbj():
    # lists only the users that carry a name tag
    # user info database
    userklj = "userk.json"
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    # walk every user record
    for key, value in userk.items():
        keyy = value[0]
        time = value[1]
        root = value[2]
        x = value[3]
        try:
            name = value[4]
            print("")
            print("user name: " + name)
            print("user mac: " + key)
            print("key: " + keyy)
            print("duration: " + time)
            if root:
                print("root: True")
            else:
                print("root: False")
            print("command tag: " + x)
            print("")
        except IndexError:
            name = "none"
#---------------------------------------------------------------------------------------
def dq_userk_cz_key():
    # find users by key
    key_sr = input("key:")
    # user info database
    userklj = "userk.json"
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    # walk every user record
    for key, value in userk.items():
        keyy = value[0]
        time = value[1]
        root = value[2]
        x = value[3]
        try:
            name = value[4]
        except IndexError:
            name = "none"
        if key_sr == keyy:
            print("")
            print("user name: " + name)
            print("user mac: " + key)
            print("key: " + keyy)
            print("duration: " + time)
            if root:
                print("root: True")
            else:
                print("root: False")
            print("command tag: " + x)
            print("")
#---------------------------------------------------------------------------------------
def dq_userk_cz_name():
    # find users by name tag
    name_sr = input("name:")
    # user info database
    userklj = "userk.json"
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    # walk every user record
    for key, value in userk.items():
        keyy = value[0]
        time = value[1]
        root = value[2]
        x = value[3]
        try:
            name = value[4]
        except IndexError:
            name = "none"
        if name_sr == name:
            print("")
            print("user name: " + name)
            print("user mac: " + key)
            print("key: " + keyy)
            print("duration: " + time)
            if root:
                print("root: True")
            else:
                print("root: False")
            print("command tag: " + x)
            print("")
#---------------------------------------------------------------------------------------
def xr_userk_rot():
    # changes a user's privilege level
    macc = input("mac:")
    zbz = input("y/n:")
    # user info database
    userklj = "userk.json"
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    user_xx = userk[macc]
    if zbz == "y":
        user_xx[2] = True
        print("privilege granted")
    if zbz == "n":
        user_xx[2] = False
        print("privilege removed")
    userk[macc] = user_xx
    with open(userklj, 'w') as ojbk_1:
        json.dump(userk, ojbk_1)
#---------------------------------------------------------------------------------------
def xr_userk_xml():
    # changes a user's command tag
    print('enter "del" to remove the command tag')
    macc = input("mac:")
    zbz = input("command:")
    # user info database
    userklj = "userk.json"
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    user_xx = userk[macc]
    if zbz == "del":
        user_xx[3] = "0"
        print("command removed")
    else:
        user_xx[3] = zbz
        print("command added")
    userk[macc] = user_xx
    with open(userklj, 'w') as ojbk_1:
        json.dump(userk, ojbk_1)
#---------------------------------------------------------------------------------------
def xr_userk_namebj():
    # changes a user's name tag
    print('enter "del" to remove the name tag')
    macc = input("mac:")
    zbz = input("name:")
    # user info database
    userklj = "userk.json"
    # open the user info file
    with open(userklj) as zx_1:
        userk = json.load(zx_1)
    user_xx = userk[macc]
    if zbz == "del":
        del user_xx[4]
        print("name removed")
    else:
        try:
            del user_xx[4]
            print("replacing previous name")
        except IndexError:
            print("no previous name")
        user_xx.insert(4, zbz)
        print("name added")
    userk[macc] = user_xx
    with open(userklj, 'w') as ojbk_1:
        json.dump(userk, ojbk_1)
#---------------------------------------------------------------------------------------
def get_userk_json():
    # fetches userk.json from the server
ps = input("server_password:")
transport = paramiko.Transport(('60.205.221.103', 22))
transport.connect(username='root', password=ps)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get('/zzz/userk.json', 'userk.json')
transport.close()
#---------------------------------------------------------------------------------------
def get_key_json():
    # fetches key.json from the server
ps = input("server_password:")
transport = paramiko.Transport(('60.205.221.103', 22))
transport.connect(username='root', password=ps)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get('/zzz/key.json', 'key.json')
transport.close()
#---------------------------------------------------------------------------------------
def put_userk_json():
    # uploads the local userk.json to the server
ps = input("server_password:")
transport = paramiko.Transport(('60.205.221.103', 22))
transport.connect(username='root', password=ps)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put('userk.json', '/zzz/userk.json')
transport.close()
#---------------------------------------------------------------------------------------
def put_key_json():
    # uploads the local key.json to the server
ps = input("server_password:")
transport = paramiko.Transport(('60.205.221.103', 22))
transport.connect(username='root', password=ps)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put('key.json', '/zzz/key.json')
transport.close()
#---------------------------------------------------------------------------------------
def ssh_user_rm_key():
#删除key.json的函数
ps = input("server_password:")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('60.205.221.103', username='root', password=ps, timeout=5)
client.exec_command('rm /zzz/key.json')
#---------------------------------------------------------------------------------------
def ssh_user_rm_userk():
#删除userk.json的函数
ps = input("server_password:")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('60.205.221.103', username='root', password=ps, timeout=5)
client.exec_command('rm /zzz/userk.json')
#---------------------------------------------------------------------------------------
def help():
    # Command list
    print("")
    print("--------------v2ray commands---------------")
    print("")
    print("rmv2k------delete the local v2ray key.json")
    print("rmv2-------remove all local traces of the v2ray install")
    print("")
    print("gv2json----fetch the v2ray config file")
    print("sv2--------start the v2ray service directly")
    print("")
    print("")
    print("--------------userk commands---------------")
    print("")
    print("ukl--------list all userk user records")
    print("uklb-------list all tagged userk users")
    print("umac-------look up a userk user by MAC")
    print("")
    print("mc---------find a userk user by key")
    print("nc---------find a userk user by name")
    print("")
    print("ur---------change a userk user's level by MAC")
    print("un---------write a userk user's name tag by MAC")
    print("")
    print("ux---------write a userk user's command by MAC")
    print("ng---------change the command for all userk users at once")
    print("")
    print("--------------card-key commands---------------")
    print("")
    print("km---------start the card-key generator module")
    print("")
    print("-------------server commands--------------")
    print("")
    print("guk--------fetch userk.json from the server")
    print("gky--------fetch key.json from the server")
    print("")
    print("puk--------upload the local userk.json")
    print("pky--------upload the local key.json")
    print("")
    print("rmkey------delete key.json on the server")
    print("rmuserk----delete userk.json on the server")
    print("")
    print("------------log file commands-------------")
    print("")
    print("grz--------download the log file")
    print("rmrz-------delete the server log file")
    print("rzfx-------analyze the log file")
    print("")
#---------------------------------------------------------------------------------------
def tool_core():
    # Core command dispatcher
    core_ml = input("core:")
    if core_ml == "help":
        try:
            help()
        except:
            print("Error in function help")
    elif core_ml == "ukl":
        try:
            dq_userk()
        except:
            print("Error in function dq_userk")
    elif core_ml == "km":
        try:
            kmsc_bt()
        except:
            print("Error in function kmsc_bt")
    elif core_ml == "uklb":
        try:
            dq_userk_lcbj()
        except:
            print("Error in function dq_userk_lcbj")
    elif core_ml == "mc":
        try:
            dq_userk_cz_key()
        except:
            print("Error in function dq_userk_cz_key")
    elif core_ml == "nc":
        try:
            dq_userk_cz_name()
        except:
            print("Error in function dq_userk_cz_name")
    elif core_ml == "ur":
        try:
            xr_userk_rot()
        except:
            print("Error in function xr_userk_rot")
    elif core_ml == "ux":
        try:
            xr_userk_xml()
        except:
            print("Error in function xr_userk_xml")
    elif core_ml == "un":
        try:
            xr_userk_namebj()
        except:
            print("Error in function xr_userk_namebj")
    elif core_ml == "guk":
        try:
            get_userk_json()
        except:
            print("Error in function get_userk_json")
    elif core_ml == "gky":
        try:
            get_key_json()
        except:
            print("Error in function get_key_json")
    elif core_ml == "rmkey":
        try:
            ssh_user_rm_key()
        except:
            print("Error in function ssh_user_rm_key")
    elif core_ml == "rmuserk":
        try:
            ssh_user_rm_userk()
        except:
            print("Error in function ssh_user_rm_userk")
    elif core_ml == "pky":
        try:
            put_key_json()
        except:
            print("Error in function put_key_json")
    elif core_ml == "puk":
        try:
            put_userk_json()
        except:
            print("Error in function put_userk_json")
    elif core_ml == "ng":
        try:
            userk_x_pl()
        except:
            print("Error in function userk_x_pl")
    elif core_ml == "umac":
        try:
            userk_c_mac()
        except:
            print("Error in function userk_c_mac")
    elif core_ml == "rzfx":
        try:
            rz_fx()
        except:
            print("Error in function rz_fx")
    elif core_ml == "grz":
        try:
            get_rz_txt()
        except:
            print("Error in function get_rz_txt")
    elif core_ml == "rmrz":
        try:
            remove_rz()
        except:
            print("Error in function remove_rz")
    elif core_ml == "rmv2k":
        try:
            v2ray_key_rm()
        except:
            print("Error in function v2ray_key_rm")
    elif core_ml == "rmv2":
        try:
            v2ray_lj_rm()
        except:
            print("Error in function v2ray_lj_rm")
    elif core_ml == "gv2json":
        try:
            get_v2_json()
        except:
            print("Error in function get_v2_json")
    elif core_ml == "sv2":
        try:
            start_v2()
        except:
            print("Error in function start_v2")
    elif core_ml == "q!":
        sys.exit(0)
    # Loop back for the next command
    tool_core()
pessoa_maior = 0
pessoa_menor = 0
for i in range(1, 8):
    ano_pessoa = int(input(f'In which year was person {i} born? '))
    if ano_pessoa <= 1900 or ano_pessoa >= 2021:
        print("That birth year is not valid!")
        break
    elif ano_pessoa <= 2002:
        pessoa_maior += 1
    else:
        pessoa_menor += 1
print(f'\nIn total, {pessoa_maior} people are adults.')
print(f'And we also had {pessoa_menor} minors.')
from __future__ import print_function, absolute_import
import os
import sys
import time
import datetime
import argparse
import os.path as osp
import numpy as np
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.utils.data import DataLoader
from torch.autograd import Variable
from torch.optim import lr_scheduler
import Graph_data_manager
from Graph_video_loader import VideoDataset
import transforms as T
import models
from models import resnet3d
from losses import CrossEntropyLabelSmooth, TripletLoss
from utils import AverageMeter, Logger, save_checkpoint
from eval_metrics import evaluate
from samplers import RandomIdentitySampler
from reidtools import visualize_ranked_results # TH
def testseq(dataset_name, use_gpu):
    dataset_root = './video2img/track1_sct_img_test_big/'
    dataset = Graph_data_manager.AICityTrack2(root=dataset_root)

    width = 224
    height = 224
    transform_train = T.Compose([
        T.Random2DTranslation(height, width),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    transform_test = T.Compose([
        T.Resize((height, width)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    pin_memory = True if use_gpu else False

    seq_len = 4
    num_instance = 4
    train_batch = 32
    test_batch = 1

    queryloader = DataLoader(
        VideoDataset(dataset.query, seq_len=seq_len, sample='dense', transform=transform_test),
        batch_size=test_batch, shuffle=False, num_workers=4,
        pin_memory=pin_memory, drop_last=False,
    )

    arch = "resnet50ta"
    pretrained_model = "./log/track12_ta224_checkpoint_ep500.pth.tar"
    start_epoch = 0

    print("Initializing model: {}".format(arch))
    dataset.num_train_pids = 517
    if arch == 'resnet503d':
        model = resnet3d.resnet50(num_classes=dataset.num_train_pids, sample_width=width, sample_height=height, sample_duration=seq_len)
        if not os.path.exists(pretrained_model):
            raise IOError("Can't find pretrained model: {}".format(pretrained_model))
        print("Loading checkpoint from '{}'".format(pretrained_model))
        checkpoint = torch.load(pretrained_model)
        state_dict = {}
        for key in checkpoint['state_dict']:
            if 'fc' in key:
                continue
            state_dict[key.partition("module.")[2]] = checkpoint['state_dict'][key]
        model.load_state_dict(state_dict, strict=False)
    else:
        if not os.path.exists(pretrained_model):
            model = models.init_model(name=arch, num_classes=dataset.num_train_pids, loss={'xent', 'htri'})
        else:
            model = models.init_model(name=arch, num_classes=dataset.num_train_pids, loss={'xent', 'htri'})
            checkpoint = torch.load(pretrained_model)
            model.load_state_dict(checkpoint['state_dict'])
            start_epoch = checkpoint['epoch'] + 1
            print("Loaded checkpoint from '{}'".format(pretrained_model))
            print("- start_epoch: {}\n- rank1: {}".format(start_epoch, checkpoint['rank1']))
    print("Model size: {:.5f}M".format(sum(p.numel() for p in model.parameters())/1000000.0))

    criterion_xent = CrossEntropyLabelSmooth(num_classes=dataset.num_train_pids, use_gpu=use_gpu)
    criterion_htri = TripletLoss(margin=0.3)

    lr = 0.0003
    gamma = 0.1
    stepsize = 200
    weight_decay = 5e-04

    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    if stepsize > 0:
        scheduler = lr_scheduler.StepLR(optimizer, step_size=stepsize, gamma=gamma)
    start_epoch = start_epoch

    if use_gpu:
        model = nn.DataParallel(model).cuda()

    test(model, queryloader, 'avg', use_gpu, dataset, -1, meta_data_tab=None)
def test(model, queryloader, pool, use_gpu, dataset, epoch, ranks=[1, 5, 10, 20], meta_data_tab=None):
    model.eval()

    qf, q_pids, q_camids = [], [], []
    if False:
        for batch_idx, (imgs, surfaces, pids, camids) in enumerate(queryloader):
            torch.cuda.empty_cache()
            if use_gpu:
                imgs = imgs.cuda()
                surfaces = surfaces.cuda()
            imgs = Variable(imgs, volatile=True)
            surfaces = Variable(surfaces, volatile=True)
            b, n, s, c, h, w = imgs.size()
            b_s, n_s, s_s, d_s = surfaces.size()
            assert(b == b_s and n == n_s and s == s_s)
            if n < 100:
                assert(b == 1)
                imgs = imgs.view(b * n, s, c, h, w)
                surfaces = surfaces.view(b * n, s, -1)
                features = model(imgs, surfaces)
                features = features.view(n, -1)
            else:
                imgs = imgs.data
                imgs.resize_(50, s, c, h, w)
                imgs = imgs.view(50, s, c, h, w)
                imgs = Variable(imgs, volatile=True)
                surfaces = surfaces.data
                surfaces.resize_(50, s, d_s)
                surfaces = surfaces.view(50, s, -1)
                surfaces = Variable(surfaces, volatile=True)
                features = model(imgs, surfaces)
                features = features.view(50, -1)
                features = torch.mean(features, 0)
            features = features.data.cpu()
            qf.append(features)
            q_pids.extend(pids)
            q_camids.extend(camids)
    else:
        for batch_idx, (imgs, pids, camids) in enumerate(queryloader):
            torch.cuda.empty_cache()
            if use_gpu:
                imgs = imgs.cuda()
            imgs = Variable(imgs, volatile=True)
            b, n, s, c, h, w = imgs.size()
            if n < 100:
                assert(b == 1)
                imgs = imgs.view(b * n, s, c, h, w)
                features = model(imgs)
                features = features.view(n, -1)
            else:
                imgs = imgs.data
                imgs.resize_(50, s, c, h, w)
                imgs = imgs.view(50, s, c, h, w)
                imgs = Variable(imgs, volatile=True)
                features = model(imgs)
                features = features.view(50, -1)
                features = torch.mean(features, 0)
            features = features.data.cpu()
            qf.append(features.numpy())
            q_pids.extend(pids.numpy())
            q_camids.extend(camids.numpy())

    qf = np.array(qf)
    q_pids = np.asarray(q_pids)
    q_camids = np.asarray(q_camids)

    np.save("qf3_no_nms_big0510.npy", qf)
    np.save("q_pids3_no_nms_big0510.npy", q_pids)
    np.save("q_camids3_no_nms_big0510.npy", q_camids)
def main():
    seed = 1
    gpu_devices = '0'
    torch.manual_seed(seed)
    os.environ['CUDA_VISIBLE_DEVICES'] = gpu_devices
    use_gpu = torch.cuda.is_available()
    use_gpu = True

    if not True:
        sys.stdout = Logger(osp.join('track1_log', 'log_train.txt'))
    else:
        sys.stdout = Logger(osp.join('track1_log', 'log_test.txt'))
    print("==========\nArgs:{}\n==========")

    if use_gpu:
        print("Currently using GPU {}".format(gpu_devices))
        cudnn.benchmark = True
        torch.cuda.manual_seed_all(seed)
    else:
        print("Currently using CPU (GPU is highly recommended)")

    dataset = "aictrack2"
    print("Initializing dataset {}".format(dataset))
    testseq(dataset, use_gpu)


if __name__ == '__main__':
    main()
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
from torch.autograd import Function
def disparity_regression(x):
    disp = torch.arange(0, x.size(1)).unsqueeze(0)
    disp = disp.repeat(x.size(0), 1)
    return torch.sum(x*disp, dim=1).unsqueeze(1)


def entropy(x, epsilon=1e-6):
    return - torch.sum(x * torch.log(x + epsilon))


class Model(nn.Module):
    def __init__(self, max_disparity):
        super(Model, self).__init__()
        self.w1 = nn.Linear(max_disparity, 64)
        self.w2 = nn.Linear(64, 32)
        self.w3 = nn.Linear(32, 64)
        self.w4 = nn.Linear(64, max_disparity)

    def forward(self, x):
        x = self.w1(x)
        x = F.relu(x)
        x = self.w2(x)
        x = F.relu(x)
        x = self.w3(x)
        # x = F.normalize(x, dim=1, p=1)
        # x = x / x.sum()
        # x = torch.sigmoid(x)
        x = F.relu(x)
        x = self.w4(x)
        return x


def softmax(x):
    e = torch.exp(x - x.max(dim=1)[0].unsqueeze(1))
    return e / torch.sum(e, dim=1).unsqueeze(1)


def cross_entropy(y, t, epsilon=1e-6):
    return - torch.sum(t * torch.log(y + epsilon)) / y.size(0)


class SoftmaxWithLoss(Function):
    @staticmethod
    def forward(ctx, x, t):
        y = softmax(x)
        ctx.save_for_backward(y, t)
        return cross_entropy(y, t)

    @staticmethod
    def backward(ctx, grad):
        y, t = ctx.saved_tensors
        return grad * (y - t), None


class Regression(nn.Module):
    def __init__(self, max_disparity):
        super(Regression, self).__init__()
        self.w = Model(max_disparity)

    def forward(self, x, t):
        cost = self.w(x)
        # x = F.normalize(x, dim=1, p=1)
        cost = F.softmax(cost, dim=1)
        y = disparity_regression(cost)
        loss = F.mse_loss(y, t)
        return loss, cost, y


class CrossEntropy(nn.Module):
    def __init__(self, max_disparity):
        super(CrossEntropy, self).__init__()
        self.w = Model(max_disparity)
        self.loss = nn.CrossEntropyLoss()

    def forward(self, x, t_value):
        cost = self.w(x)
        y = torch.argmax(cost, dim=1)
        t = torch.tensor(int(t_value), dtype=torch.long).unsqueeze(0)
        loss = self.loss(cost, t)
        cost = F.softmax(cost, dim=1).squeeze()
        return loss, cost, float(y)


class CrossEntropyMulti(nn.Module):
    def __init__(self, max_disparity):
        super(CrossEntropyMulti, self).__init__()
        self.max_disparity = max_disparity
        self.w = Model(max_disparity)

    def forward(self, x, t_value):
        cost = self.w(x)
        t = self.get_t(t_value)
        loss = SoftmaxWithLoss.apply(cost, t)
        cost = F.softmax(cost, dim=1)
        y = disparity_regression(cost).float()
        return loss, cost, y

    def get_t(self, t_value):
        t = torch.zeros((t_value.size(0), self.max_disparity), dtype=torch.float)
        long_t = t_value.long()
        mid = t_value - long_t
        for b in range(t_value.size(0)):
            t[b, long_t[b]] = 1 - mid[b]
            t[b, long_t[b] + 1] = mid[b]
        return t.unsqueeze(0)


class CrossEntropyRegression(nn.Module):
    def __init__(self, max_disparity):
        super(CrossEntropyRegression, self).__init__()
        self.w = Model(max_disparity)

    def forward(self, x, t_value):
        cost = self.w(x)
        y = disparity_regression(cost)
        t1 = torch.tensor(t_value, dtype=torch.float)
        t2 = torch.tensor(t_value, dtype=torch.long).unsqueeze(0)
        loss_regression = F.mse_loss(y, t1)
        loss_cross_entropy = F.cross_entropy(cost.unsqueeze(0), t2)
        loss = loss_cross_entropy + loss_regression
        return loss, cost, y


class State2Regression(nn.Module):
    def __init__(self, max_disparity):
        super(State2Regression, self).__init__()
        self.w = Model(max_disparity)

    def forward(self, x, t_value, first):
        cost = self.w(x)
        # print(cost)
        if first:
            y = torch.argmax(cost, dim=0).float()
            t = torch.tensor(int(t_value), dtype=torch.long).unsqueeze(0)
            loss = F.cross_entropy(cost.unsqueeze(0), t)
        else:
            y = disparity_regression(cost)
            t = torch.tensor(t_value, dtype=torch.float)
            loss = F.mse_loss(y, t)
        return loss, cost, y
batch = 5
max_disparity = 128

if batch == 1:
    t = torch.tensor([80.6]).view(-1, 1)
elif batch == 2:
    t = torch.tensor([20.3, 80.6]).view(-1, 1)
elif batch == 5:
    t = torch.tensor([20.3, 80.6, 30.7, 60.33, 5.7]).view(-1, 1)

converge_i = []
for r in range(100):
    print(f'round: {r}')
    x = torch.randn(batch, max_disparity)

    # model = Regression(max_disparity)
    # model = CrossEntropy(max_disparity)
    model = CrossEntropyMulti(max_disparity)
    # model = CrossEntropyRegression(max_disparity)
    # model = State2Regression(max_disparity)

    optimizer = optim.Adam(model.parameters(), lr=0.1, betas=(0.9, 0.999))
    i = 0
    y = torch.zeros((batch, max_disparity), dtype=torch.float)
    cost = None
    while torch.all(torch.abs(y - t) >= 1e-03):
        # while i < 200:
        # if i > 10:
        #     t = torch.tensor([40.8, 30.6, 50.7, 15.33, 7.7]).view(-1, 1)
        optimizer.zero_grad()
        loss, cost, y = model(x, t)
        # loss, cost, y = model(x, t, i < 10)
        # print(f'[{i}], y = {y.view(-1).data.numpy()}, loss = {loss:.3f}')
        loss.backward()
        optimizer.step()
        i += 1
    converge_i.append(i)

    # fig = plt.figure()
    # for b in range(batch):
    #     plt.subplot(batch, 1, b+1)
    #     plt.plot(cost[b].data.numpy())
    #     plt.axvline(float(y[b]), color='k', linestyle='--', label=f'y({float(y[b]):.3f})')
    #     plt.axvline(float(t[b]), color='r', linestyle='--', label=f't({float(t[b]):.3f})')
    #     plt.legend()
    # plt.show()
    # plt.close(fig)

converge_i = torch.tensor(converge_i, dtype=torch.float)
print(f'avg i = {converge_i.mean():.3f}')
# coding: utf-8
from transitions.extensions import GraphMachine
class Model(object):
    pass


states = [
    'Waiting inputs',
    'Waiting inputs in <num>',
    'Rejected',
    'Rejected while matching with subset',
    'Accepted',
    'Accepted in <num>',
    'Accepted in <any>'
]

transitions = [
    # Waiting inputs
    {'trigger': 'Empty Input', 'source': states[0], 'dest': states[2]},
    {'trigger': 'Match with last <num>', 'source': states[0], 'dest': states[5]},
    {'trigger': 'Match with <any>', 'source': states[0], 'dest': states[6]},
    {'trigger': 'Match with last keyset', 'source': states[0], 'dest': states[4]},
    {'trigger': 'Match with <num>', 'source': states[0], 'dest': states[1]},
    {'trigger': 'Match with keyset', 'source': states[0], 'dest': states[0]},
    {'trigger': 'Match with subset', 'source': states[0], 'dest': states[3]},
    {'trigger': 'Unmatched', 'source': states[0], 'dest': states[2]},

    # Waiting inputs in <num>
    {'trigger': 'Input numbers', 'source': states[1], 'dest': states[1]},
    {'trigger': 'Input others (ε-transitions)', 'source': states[1], 'dest': states[0]},

    # Rejected
    {'trigger': 'Input', 'source': states[2], 'dest': states[2]},

    # Rejected while matching with subset
    {'trigger': 'Input', 'source': states[3], 'dest': states[2]},

    # Accepted
    {'trigger': 'Input (ε-transitions)', 'source': states[4], 'dest': states[0]},

    # Accepted in <num>
    {'trigger': 'Input numbers', 'source': states[5], 'dest': states[5]},
    {'trigger': 'Input others', 'source': states[5], 'dest': states[0]},

    # Accepted in <any>
    {'trigger': 'Input', 'source': states[6], 'dest': states[6]}
]

model = Model()
machine = GraphMachine(
    model=model,
    states=states,
    transitions=transitions,
    initial=states[0],
    title='LoggerParser State Transition Diagram')

graph = model.get_graph()
graph.draw('logger_parser_state_transition_diagram.png', prog='dot')
"""App model factories.
"""
from factory import DjangoModelFactory, Faker, SubFactory
from ...azure_projects.models import Project
from ...azure_settings.tests.factories import SettingFactory
class ProjectFactory(DjangoModelFactory):
"""ProjectFactory."""
setting = SubFactory(SettingFactory)
name = Faker("sentence")
class Meta:
model = Project
django_get_or_create = ["name"]
class DemoProjectFactory(DjangoModelFactory):
"""DemoProjectFactory."""
setting = SubFactory(SettingFactory)
is_demo = True
name = Faker("sentence")
class Meta:
model = Project
django_get_or_create = ["name"]
#import time
from spark.ReqBase import ReqBase


class ReqModPython(ReqBase):
    def __init__(self, req, reallympy=True):
        self.reallympy = reallympy
        self.mpyreq = req
        self.mpyreq.add_common_vars()
        self.mpyreq.content_type = "text/html"
        self.mpyreq.status = 200
        self._have_status = 0
        ReqBase.__init__(self)

    def send_header(self):
        pass

    def _setup_vars(self):
        self.env = self.mpyreq.subprocess_env
        self._setup_vars_from_std_env()
        # In mod_python, the last element of script_name is actually
        # the first part of path_info.
        self.path = self.script_name[-1:] + self.path
        if self.script_name: self.script_name.pop()

    def _setup_form(self):
        self.form = {}
        if self.reallympy:
            from mod_python import util
            pg_fields = util.FieldStorage(self.mpyreq, 1).list
            for field in pg_fields:
                self.form[field.name] = field.value

    def write(self, data):
        if type(data) == type(""):
            self.mpyreq.write(data)
        elif type(data) == type([]):
            if self.gzip_ok:
                import gzip, StringIO
                zbuf = StringIO.StringIO()
                # GzipFile's first positional argument is the filename,
                # so mode/fileobj must be passed by keyword here.
                zfile = gzip.GzipFile(mode='wb', compresslevel=6, fileobj=zbuf)
                zfile.write(''.join(data))
                zfile.close()
                self.mpyreq.write(zbuf.getvalue())
            else:
                self.mpyreq.write('\n'.join(data))
        else:
            self.mpyreq.write(str(data))

    def finish(self):
        return 0

    def process_headers(self):
        for header in self.headers:
            if header == 'content-type':
                self.mpyreq.content_type = self.headers[header]
            elif header == 'status':
                try:
                    self.mpyreq.status = int(self.headers[header])
                except:
                    self.mpyreq.status = 200
            else:
                self.mpyreq.headers_out[header] = self.headers[header]

    def redirect(self, addr):
        from mod_python import util
        util.redirect(self.mpyreq, addr)

    def get_cookie(self, coname):
        if self.reallympy:
            from mod_python import Cookie
            cookie = Cookie.get_cookies(self.mpyreq)
            if cookie.has_key(coname):
                return cookie[coname].value
            else:
                return ''

    def set_cookie(self, coname, codata, expires=None):
        if self.reallympy:
            from mod_python import Cookie
            cookie = Cookie.Cookie(coname, codata)
            # for simplicity
            cookie.path = '/'
            if expires: cookie.expires = expires
            Cookie.add_cookie(self.mpyreq, cookie)
import matplotlib.pyplot as plt
import numpy as np


def load_data(data_loc):
    return np.genfromtxt(data_loc, delimiter=',')


def calc_average(files, range):
    averages = []
    for i in range:
        to_average_rows = np.asarray(files)[:, i]
        average = np.round(np.average(to_average_rows, axis=0))
        averages.append(average)
    return averages


def display_row_data(data_file, row_nr):
    data_row = data_file.iloc[row_nr].values
    _, axs = plt.subplots(1, sharey=False)
    axs.plot(data_row)


def calc_succes_rate(files, range, index):
    # files: total results of the experiments
    # range: range of indices which apply to the current technique,
    #        e.g. range(6) for SOST (the first 5 experiments are bound to SOST)
    # index: the number of attack traces used,
    #        e.g. index = 100 => we check whether a 0.0 exists at index 100
    #        and count that as a success
    rates = []
    for i in range:
        selection_rows = np.asarray(files)[:, i]
        count = np.count_nonzero(selection_rows[:, (index-1)] == 0.0)
        rate = np.divide(count, 5)
        rates.append(rate)
    return rates


total_results = []
averaged_results = []
for i in range(3, 8):
    data_loc = "results_fixed_key_byte" + str(i) + ".csv"
    results = load_data(data_loc)
    total_results.append(results)

# Calculate average scores
sost_average = calc_average(total_results, range(6))
LDA_average = calc_average(total_results, range(6, 21, 2))
PCA_average = calc_average(total_results, range(7, 22, 2))

# Calculate success rates
succes_rate_indices = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]
SOST_succes_rate = []
PCA_succes_rate = []
LDA_succes_rate = []
AE_succes_rate = []

# For each index and each technique, check what percentage of the runs
# reached a guessing entropy of 0.
for index in succes_rate_indices:
    # SOST
    SOST_succes_rate.append(calc_succes_rate(total_results, range(6), index))
    # PCA
    PCA_succes_rate.append(calc_succes_rate(total_results, range(7, 22, 2), index))
    # LDA
    LDA_succes_rate.append(calc_succes_rate(total_results, range(6, 21, 2), index))

total_results = []
averaged_results = []
for i in range(3, 8):
    data_loc = "results_AE_fixed_key_byte" + str(i) + ".csv"
    results = load_data(data_loc)
    total_results.append(results)

AE_average = calc_average(total_results, range(6))
sost_labels = ["2", "4", "10", "20", "50", "100"]

for index in succes_rate_indices:
    # AE success rate
    AE_succes_rate.append(calc_succes_rate(total_results, range(6), index))

'''
# Plot results
for i in range(0, len(LDA_average)):
    plt.plot(LDA_average[i], label=(str(i+1) + " POIs"), linewidth=2)

plt.ylim([0, 256])
plt.xlim([0, 1000])
plt.ylabel("Guessing entropy")
plt.xlabel("Number of attack traces")
plt.legend()
plt.show(block=True)
'''
# Copyright cocotb contributors
# Licensed under the Revised BSD License, see LICENSE for details.
# SPDX-License-Identifier: BSD-3-Clause
"""
Tests of cocotb.regression.TestFactory functionality
"""
import random
import string
from collections.abc import Coroutine
import cocotb
from cocotb.regression import TestFactory
from cocotb.triggers import NullTrigger
testfactory_test_names = set()
testfactory_test_args = set()
prefix = "".join(random.choices(string.ascii_letters, k=4))
postfix = "".join(random.choices(string.ascii_letters, k=4))
async def run_testfactory_test(dut, arg1, arg2, arg3):
testfactory_test_names.add(cocotb.regression_manager._test.__qualname__)
testfactory_test_args.add((arg1, arg2, arg3))
factory = TestFactory(run_testfactory_test)
factory.add_option("arg1", ["a1v1", "a1v2"])
factory.add_option(("arg2", "arg3"), [("a2v1", "a3v1"), ("a2v2", "a3v2")])
factory.generate_tests(prefix=prefix, postfix=postfix)
@cocotb.test()
async def test_testfactory_verify_args(dut):
assert testfactory_test_args == {
("a1v1", "a2v1", "a3v1"),
("a1v2", "a2v1", "a3v1"),
("a1v1", "a2v2", "a3v2"),
("a1v2", "a2v2", "a3v2"),
}
assert testfactory_test_names == {
f"{prefix}run_testfactory_test{postfix}_{i:03}" for i in range(1, 5)
}
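The four expected argument tuples asserted above are just the cartesian product of the two option lists passed to `add_option`. A standalone sketch of that enumeration (plain `itertools`, not the cocotb API):

```python
from itertools import product

# Option values as passed to add_option above
arg1_values = ["a1v1", "a1v2"]
arg23_values = [("a2v1", "a3v1"), ("a2v2", "a3v2")]

# One generated test per combination of the option values
combinations = {
    (a1, a2, a3) for a1, (a2, a3) in product(arg1_values, arg23_values)
}
assert len(combinations) == 4
```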
class TestClass(Coroutine):
def __init__(self, dut, myarg):
self._coro = self.run(dut, myarg)
async def run(self, dut, myarg):
assert myarg == 1
def send(self, value):
self._coro.send(value)
def throw(self, exception):
self._coro.throw(exception)
def __await__(self):
yield from self._coro.__await__()
tf = TestFactory(TestClass)
tf.add_option("myarg", [1])
tf.generate_tests()
generator_testfactory_args = set()
@cocotb.coroutine
def generator_test(dut, arg):
generator_testfactory_args.add(arg)
yield NullTrigger()
generator_testfactory = TestFactory(generator_test)
generator_testfactory.add_option("arg", [1, 2, 3, 4])
generator_testfactory.generate_tests()
@cocotb.test()
async def test_generator_testfactory(_):
assert generator_testfactory_args == {1, 2, 3, 4}
| 25.845238 | 76 | 0.707969 | 276 | 2,171 | 5.326087 | 0.311594 | 0.091837 | 0.040816 | 0.031293 | 0.080272 | 0.05034 | 0.05034 | 0.05034 | 0 | 0 | 0 | 0.033861 | 0.15661 | 2,171 | 83 | 77 | 26.156627 | 0.768979 | 0.085675 | 0 | 0.037736 | 0 | 0 | 0.068861 | 0.022278 | 0 | 0 | 0 | 0 | 0.075472 | 1 | 0.09434 | false | 0 | 0.113208 | 0 | 0.226415 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8012c5a341f13ec069b05a8dc76bfdac9b6781b3 | 5,617 | py | Python | pkgs/servers/home-assistant/parse-requirements.py | hamhut1066/nixpkgs | a2736d27d16aecd1f2179e07db23bf1b6c0c4c46 | [
"MIT"
] | 2 | 2021-10-20T11:39:26.000Z | 2021-11-08T09:49:56.000Z | pkgs/servers/home-assistant/parse-requirements.py | hamhut1066/nixpkgs | a2736d27d16aecd1f2179e07db23bf1b6c0c4c46 | [
"MIT"
] | 4 | 2021-09-28T05:38:41.000Z | 2022-02-26T10:09:42.000Z | pkgs/servers/home-assistant/parse-requirements.py | hamhut1066/nixpkgs | a2736d27d16aecd1f2179e07db23bf1b6c0c4c46 | [
"MIT"
] | 1 | 2019-02-03T10:41:26.000Z | 2019-02-03T10:41:26.000Z | #! /usr/bin/env nix-shell
#! nix-shell -i python3 -p "python3.withPackages (ps: with ps; [ aiohttp astral async-timeout attrs certifi jinja2 pyjwt cryptography pip pytz pyyaml requests ruamel_yaml voluptuous python-slugify ])"
#
# This script downloads Home Assistant's source tarball.
# Inside the homeassistant/components directory, each component has an associated .py file,
# specifying required packages and other components it depends on:
#
# REQUIREMENTS = [ 'package==1.2.3' ]
# DEPENDENCIES = [ 'component' ]
#
# By parsing the files, a dictionary mapping component to requirements and dependencies is created.
# For all of these requirements and the dependencies' requirements,
# Nixpkgs' python3Packages are searched for appropriate names.
# Then, a Nix attribute set mapping component name to dependencies is created.
from urllib.request import urlopen
import tempfile
from io import BytesIO
import tarfile
import importlib
import subprocess
import os
import sys
import json
import re
COMPONENT_PREFIX = 'homeassistant.components'
PKG_SET = 'python3Packages'
# If some requirements are matched by multiple python packages,
# the following can be used to choose one of them
PKG_PREFERENCES = {
# Use python3Packages.youtube-dl-light instead of python3Packages.youtube-dl
'youtube-dl': 'youtube-dl-light'
}
def get_version():
with open(os.path.dirname(sys.argv[0]) + '/default.nix') as f:
m = re.search('hassVersion = "([\\d\\.]+)";', f.read())
return m.group(1)
def parse_components(version='master'):
components = {}
with tempfile.TemporaryDirectory() as tmp:
with urlopen('https://github.com/home-assistant/home-assistant/archive/{}.tar.gz'.format(version)) as response:
tarfile.open(fileobj=BytesIO(response.read())).extractall(tmp)
# Use part of a script from the Home Assistant codebase
sys.path.append(tmp + '/home-assistant-{}'.format(version))
from script.gen_requirements_all import explore_module
for package in explore_module(COMPONENT_PREFIX, True):
# Remove 'homeassistant.components.' prefix
component = package[len(COMPONENT_PREFIX + '.'):]
try:
module = importlib.import_module(package)
components[component] = {}
components[component]['requirements'] = getattr(module, 'REQUIREMENTS', [])
components[component]['dependencies'] = getattr(module, 'DEPENDENCIES', [])
# If there is an ImportError, the imported file is not the main file of the component
except ImportError:
continue
return components
# Recursively get the requirements of a component and its dependencies
def get_reqs(components, component):
requirements = set(components[component]['requirements'])
for dependency in components[component]['dependencies']:
requirements.update(get_reqs(components, dependency))
return requirements
# Store a JSON dump of Nixpkgs' python3Packages
output = subprocess.check_output(['nix-env', '-f', os.path.dirname(sys.argv[0]) + '/../../..', '-qa', '-A', PKG_SET, '--json'])
packages = json.loads(output)
def name_to_attr_path(req):
attr_paths = set()
names = [req]
# E.g. python-mpd2 is actually called python3.6-mpd2
# instead of python-3.6-python-mpd2 inside Nixpkgs
if req.startswith('python-') or req.startswith('python_'):
names.append(req[len('python-'):])
for name in names:
# treat "-" and "_" equally
name = re.sub('[-_]', '[-_]', name)
pattern = re.compile('^python\\d\\.\\d-{}-\\d'.format(name), re.I)
for attr_path, package in packages.items():
if pattern.match(package['name']):
attr_paths.add(attr_path)
if len(attr_paths) > 1:
for to_replace, replacement in PKG_PREFERENCES.items():
try:
attr_paths.remove(PKG_SET + '.' + to_replace)
attr_paths.add(PKG_SET + '.' + replacement)
except KeyError:
pass
# Let's hope there's only one derivation with a matching name
assert len(attr_paths) <= 1, "{} matches more than one derivation: {}".format(req, attr_paths)
if len(attr_paths) == 1:
return attr_paths.pop()
else:
return None
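The `-`/`_` normalization above rewrites every separator into the two-character class `[-_]` before compiling, so both spellings match. A minimal sketch of that behaviour (the derivation names below are made up for illustration):

```python
import re

# Turn each '-' or '_' into the character class '[-_]'
name = re.sub('[-_]', '[-_]', 'python-mpd2')  # becomes 'python[-_]mpd2'
pattern = re.compile('^python\\d\\.\\d-{}-\\d'.format(name), re.I)

# Both spellings of the separator now match the same pattern
assert pattern.match('python3.6-python-mpd2-0.5.5')
assert pattern.match('python3.6-python_mpd2-0.5.5')
```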
version = get_version()
print('Generating component-packages.nix for version {}'.format(version))
components = parse_components(version=version)
build_inputs = {}
for component in sorted(components.keys()):
attr_paths = []
for req in sorted(get_reqs(components, component)):
# Some requirements are specified by url, e.g. https://example.org/foobar#xyz==1.0.0
# Therefore, if there's a "#" in the line, only take the part after it
req = req[req.find('#') + 1:]
name = req.split('==')[0]
attr_path = name_to_attr_path(name)
if attr_path is not None:
# Add attribute path without "python3Packages." prefix
attr_paths.append(attr_path[len(PKG_SET + '.'):])
else:
build_inputs[component] = attr_paths
with open(os.path.dirname(sys.argv[0]) + '/component-packages.nix', 'w') as f:
f.write('# Generated by parse-requirements.py\n')
f.write('# Do not edit!\n\n')
f.write('{\n')
f.write(' version = "{}";\n'.format(version))
f.write(' components = {\n')
for component, attr_paths in build_inputs.items():
f.write(' "{}" = ps: with ps; [ '.format(component))
f.write(' '.join(attr_paths))
f.write(' ];\n')
f.write(' };\n')
f.write('}\n')
| 42.55303 | 200 | 0.659605 | 715 | 5,617 | 5.100699 | 0.33986 | 0.034549 | 0.009597 | 0.013162 | 0.039485 | 0.031259 | 0.015903 | 0.015903 | 0 | 0 | 0 | 0.006998 | 0.211323 | 5,617 | 131 | 201 | 42.877863 | 0.816253 | 0.29749 | 0 | 0.043478 | 0 | 0 | 0.155045 | 0.029374 | 0 | 0 | 0 | 0 | 0.01087 | 1 | 0.043478 | false | 0.01087 | 0.141304 | 0 | 0.23913 | 0.01087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8016308abbe08e97f8cc91fc90577979b5691b44 | 25,336 | py | Python | NewvueApp/forgerynewstrackervue/app.py | awallemo/Social-media-coverage-of-fake-news | d3312dda6cb555a5c8e7f7bb89076d8cbf2cc0b9 | [
"Unlicense"
] | 1 | 2021-09-20T18:15:49.000Z | 2021-09-20T18:15:49.000Z | NewvueApp/forgerynewstrackervue/app.py | jonelorentzen/Social-media-coverage-of-fake-news | d3312dda6cb555a5c8e7f7bb89076d8cbf2cc0b9 | [
"Unlicense"
] | null | null | null | NewvueApp/forgerynewstrackervue/app.py | jonelorentzen/Social-media-coverage-of-fake-news | d3312dda6cb555a5c8e7f7bb89076d8cbf2cc0b9 | [
"Unlicense"
] | null | null | null | from flask import Flask, jsonify, redirect, request, url_for
from flask_cors import CORS
import config
import requests
import json
import os
import time
from textblob import TextBlob
import numpy as np
import pandas as pd
import re
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import praw
import geocoder
from datetime import datetime
import urllib
# configuration
DEBUG = True
# instantiate the app
app = Flask(__name__)
app.config.from_object(__name__)
app.config['TESTING'] = True
# enable CORS
CORS(app, resources={r'/*': {'origins': '*'}})
#Getting data from TWITTER
def auth():
return os.environ.get("BEARER_TOKEN")
def create_url(query):
tweet_fields = "tweet.fields=public_metrics,created_at,geo,referenced_tweets,text,author_id,id,in_reply_to_user_id"
max_results = "max_results=100"
user_fields = "user.fields=profile_image_url"
url = "https://api.twitter.com/2/tweets/search/recent?query={}&{}&{}&{}".format(
query, tweet_fields, user_fields ,max_results
)
return url
def create_id_url(query):
tweet_fields = "tweet.fields=public_metrics,created_at,geo,lang,referenced_tweets,text,author_id,in_reply_to_user_id"
user_fields = "user.fields=profile_image_url"
url = "https://api.twitter.com/2/tweets?ids={}&{}&{}".format(
query, tweet_fields, user_fields
)
return url
def create_users_url(query):
user_fields= "user.fields=location,name,profile_image_url,public_metrics,username,verified"
url = "https://api.twitter.com/2/users?ids={}&{}".format(
query, user_fields
)
return url
def create_headers(bearer_token):
headers = {"Authorization": config.bearer_token}
return headers
def connect_to_endpoint(url, headers):
response = requests.request("GET", url, headers=headers)
print(response.status_code)
if response.status_code != 200:
raise Exception(response.status_code, response.text)
return response.json()
@app.route('/showinfo', methods=['GET', 'POST'])
def showinfo():
d = request.json
print(d)
#Reddit API call, time displayed in unix
reddit_data = reddit_api(d["query"])
if len(reddit_data) != 0:
try:
piechartreddit = reddit_piechart(reddit_data)
linechartreddit = reddit_linechart(reddit_data)
reddit_data = sorted(reddit_data, key = lambda i: i['upvotes'],reverse=True)
engagementreddit = reddit_engagement(reddit_data)
toppostsreddit = reddit_top_posts(reddit_data)
topusersreddit = reddit_top_users(reddit_data)
wordcloudreddit = reddit_wordcloud(reddit_data)
except Exception as e:
print(e)
else:
print("No Reddit data")
piechartreddit = []
linechartreddit = {}
engagementreddit = {}
toppostsreddit = []
topusersreddit = []
wordcloudreddit = []
unencoded_query = str(d["query"])
unavailable_chars = ['$', '*', "'", '&', "‘"]
for i in unavailable_chars :
unencoded_query = unencoded_query.replace(i, '')
query = urllib.parse.quote(unencoded_query)
    # Create the token to get access to the Twitter API
bearer_token = auth()
headers = create_headers(bearer_token)
    # API call that merges the results of several api calls into one dictionary without any duplicates
try:
json_response = api_caller(query, headers)
print(json_response)
except Exception as e:
print(e)
json_response = {"data": "No data"}
    # New call to the Twitter API that uses the ids of the retweeted tweets and adds the data of the original tweets to the dictionary.
    # The create_id_url function creates the url that is used to call the api.
if json_response["data"] != "No data":
ids = extract_retweets(json_response)
if len(ids) > 0:
url_ids = create_id_url(ids)
json_response2 = connect_to_endpoint(url_ids, headers)
for item in json_response2["data"]:
json_response["data"].append(item)
json_response["data"] = sorted(json_response["data"], key = lambda i: i['public_metrics']["retweet_count"],reverse=True)
json_response3 = extract_usernames(json_response, headers)
for i in range(len(json_response["data"])):
json_response3[i]["public_metrics_user"] = json_response3[i].pop("public_metrics")
json_response3[i]["author_id"] = json_response3[i].pop("id")
json_response["data"][i].update(json_response3[i])
#Create all of the data that is going to be displayed in the frontend from the json_response
barchart = create_barchart(json_response)
linechart = create_linechart(json_response)
topposts = create_topposts(json_response)
topusers = create_topusers(json_response)
activity = create_activity(json_response)
links = create_links(json_response)
nodes = create_nodes(links)
geochart = create_geochart(json_response)
alltext = all_text(json_response)
json_response["data"] = {d["query"]: {"barchart": barchart, "linechart": linechart, "topposts": topposts, "topusers": topusers,
"activity": activity, "query": d["query"], "nodes": nodes, "links": links, "geochart": geochart, "alltext": alltext,"engagementreddit": engagementreddit, "piechartreddit": piechartreddit, "linechartreddit":linechartreddit,
"toppostsreddit": toppostsreddit, "topusersreddit": topusersreddit, "wordcloudreddit": wordcloudreddit}}
sentiment = show_tweets_text_sentiment(json_response)
json_response["data"][d["query"]]["sentiment"] = sentiment
return json.dumps(json_response)
#The function api_caller calls the api several times and merges the responses into json_response.
#We use the time library to wait between api calls so we avoid getting the same json response back each time.
def api_caller(query, headers):
url = create_url(query)
json_response = connect_to_endpoint(url, headers)
time.time()
count = 0
if "data" in json_response:
while True:
api_call = connect_to_endpoint(url, headers)
for item in api_call["data"]:
if item["id"] not in json_response["data"]:
json_response["data"].append(item)
time.sleep(2)
count += 1
print ("tick")
if count == 2:
count = 0
break
json_response_no_duplicates = remove_duplicates(json_response)
else:
json_response_no_duplicates = {"data": "No data"}
return json_response_no_duplicates
#Function for removing duplicate responses from the repeated api calls. A popular query returns few duplicates from the api;
#an unpopular query returns many duplicates, often the same ones from every api call.
#We fill an empty dictionary keyed by tweet id: a value stored under an existing key is simply overwritten, so no duplicates remain.
def remove_duplicates(json_response):
response_map = {}
for i in range(len(json_response["data"])):
key = json_response["data"][i]["id"]
value = json_response["data"][i]
response_map[key] = value
json_response_no_duplicates = {"data":[]}
for value in response_map.values():
json_response_no_duplicates["data"].append(value)
return json_response_no_duplicates
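The dedup trick in remove_duplicates relies on dict keys being unique: writing two entries under the same id keeps only one. A self-contained sketch with dummy tweet ids:

```python
# Simulated responses containing a duplicate tweet (id "1" appears twice)
tweets = [{"id": "1", "text": "a"}, {"id": "2", "text": "b"}, {"id": "1", "text": "a"}]

response_map = {}
for tweet in tweets:
    # Duplicate ids overwrite the existing entry instead of adding a second one
    response_map[tweet["id"]] = tweet

deduped = list(response_map.values())
assert len(deduped) == 2
```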
#Function for extracting the data about the original tweets that were retweeted.
#The reasoning behind this is that the twitter search api returns a lot of retweets of an original tweet,
#and when the api returns a retweet you only get the retweet count, not the likes, quotes or replies of the original tweet.
#So we call the api for the original tweets whose ids we extract from the retweets.
def extract_retweets(json_response):
id_list = []
joined_string = []
tweet_dict = json_response["data"]
for i in range(len(tweet_dict)):
if "referenced_tweets" in tweet_dict[i]:
if tweet_dict[i]["referenced_tweets"][0]["type"] == "retweeted":
if tweet_dict[i]["referenced_tweets"][0]["id"] not in id_list:
id_list.append(tweet_dict[i]["referenced_tweets"][0]["id"])
if len(id_list) == 100:
break
if len(id_list) > 0:
joined_string = ",".join(id_list)
return joined_string
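A minimal sketch of the id extraction above, run against a hand-made response in the same shape the Twitter v2 search endpoint returns (the ids are dummies):

```python
json_response = {
    "data": [
        {"id": "10", "referenced_tweets": [{"type": "retweeted", "id": "99"}]},
        {"id": "11"},  # an original tweet: no referenced_tweets key
        {"id": "12", "referenced_tweets": [{"type": "retweeted", "id": "99"}]},
    ]
}

id_list = []
for tweet in json_response["data"]:
    refs = tweet.get("referenced_tweets", [])
    # Collect each original tweet id once, even if retweeted many times
    if refs and refs[0]["type"] == "retweeted" and refs[0]["id"] not in id_list:
        id_list.append(refs[0]["id"])

# Joined into the comma-separated form the /2/tweets?ids= endpoint expects
joined = ",".join(id_list)
assert joined == "99"
```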
#Function for extracting all of the usernames and calling the API for every 100 tweets.
#Returns all of the user data that is missing from the first API call.
#With this function we get the location and stats like how many followers a user has.
def extract_usernames(json_response, headers):
author_id_list = []
tweet_dict = json_response["data"]
for i in range(len(tweet_dict)):
author_id_list.append(tweet_dict[i]["author_id"])
url_list = []
for i in range(0, len(author_id_list), 100):
chunk = author_id_list[i:i + 100]
joined_string = ",".join(chunk)
url_users_ids = create_users_url(joined_string)
url_list.append(url_users_ids)
time.time()
count = 0
json_response2 = connect_to_endpoint(url_list[0], headers)
while True:
if len(url_list) == 1:
break
for i in range(len(url_list)-1):
api_call = connect_to_endpoint(url_list[i+1], headers)
for item in api_call["data"]:
json_response2["data"].append(item)
time.sleep(1)
count += 1
print("tick")
if count == len(url_list)-1:
count = 0
break
return json_response2["data"]
#Function to extract the total likes, retweets, replies and quotes. The API returns the total retweets of the original tweet if a user has retweeted it.
#So the function does not count the retweets of a retweet, only the retweets of the original tweet.
def create_barchart(json_response):
tweets = json_response["data"]
total_retweets = 0
total_likes = 0
total_replies = 0
total_quotes = 0
for i in range(len(tweets)):
if "referenced_tweets" in tweets[i]:
if tweets[i]['referenced_tweets'][0]["type"] != "retweeted":
total_retweets += tweets[i]['public_metrics']["retweet_count"]
total_likes += tweets[i]['public_metrics']["like_count"]
total_replies += tweets[i]['public_metrics']["reply_count"]
total_quotes += tweets[i]['public_metrics']["quote_count"]
else:
total_retweets += tweets[i]['public_metrics']["retweet_count"]
total_likes += tweets[i]['public_metrics']["like_count"]
total_quotes += tweets[i]['public_metrics']["quote_count"]
total_replies += tweets[i]['public_metrics']["reply_count"]
barchartlist = [['Likes', total_likes], ['Retweeets', total_retweets],['Replies', total_replies],['Quotes',total_quotes]]
return barchartlist
#Function to build the mapping from timestamps to cumulative counts needed to display the areachart
def create_linechart(json_response):
tweets = json_response["data"]
allDates = []
finalDates = {}
for i in range(len(tweets)):
element = tweets[i]["created_at"]
allDates.append(element)
for i in range(len(tweets)):
allDates[i] = allDates[i].replace(".000Z", "")
allDates.sort()
allDates = allDates[7:]
for i in range(len(allDates)):
finalDates[allDates[i]]=i+1
return finalDates
#Function that returns the dates from when a person retweets a tweet
def create_retweet_linechart(json_response):
tweets = json_response["data"]
allDates = []
finalDates = []
for i in range(len(tweets)):
if "referenced_tweets" in tweets[i]:
if tweets[i]['referenced_tweets'][0]["type"] == "retweeted":
element = tweets[i]["created_at"]
allDates.append(element)
    for i in range(len(allDates)):
allDates[i] = allDates[i].replace(".000Z", "")
allDates.sort()
allDates = allDates[5:]
for i in range(len(allDates)):
finalDates.append([allDates[i],i+1])
return finalDates
#Function to extract the top 3 posts, returning dictionaries with all the data needed to display as a tweet
def create_topposts(json_response):
tweets = json_response["data"]
topposts = []
for i in range(len(tweets)):
if "referenced_tweets" not in tweets[i]:
date = format_date(tweets[i]["created_at"])
topposts.append({"author_id": tweets[i]["author_id"], "retweets": tweets[i]['public_metrics']["retweet_count"], "likes": tweets[i]['public_metrics']["like_count"], "text": tweets[i]['text'],
"username": tweets[i]["username"], "img": tweets[i]["profile_image_url"], "date": date, "followers": tweets[i]['public_metrics_user']["followers_count"], "verified": tweets[i]["verified"], "id": tweets[i]["id"]})
if len(topposts) == 3:
break
return topposts
#Maybe add functionality that returns day and month like 13 Feb...
def format_date(timestamp):
ts = time.strptime(timestamp[:19], "%Y-%m-%dT%H:%M:%S")
s = time.strftime("%m/%d/%Y", ts)
return s
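format_date slices off everything after the first 19 characters, dropping the ".000Z" suffix Twitter appends, then re-renders the date. A quick check of that round trip:

```python
import time

timestamp = "2021-02-13T10:11:12.000Z"
# Keep only "2021-02-13T10:11:12" before parsing
ts = time.strptime(timestamp[:19], "%Y-%m-%dT%H:%M:%S")
formatted = time.strftime("%m/%d/%Y", ts)
assert formatted == "02/13/2021"
```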
#Function for extracting up to 9 users with the most followers, skipping duplicate usernames
def create_topusers(json_response):
    tweets = json_response["data"]
    topusers = []
    # topusers holds dicts, so duplicates must be checked against the username strings
    seen_usernames = set()
    for i in range(len(tweets)):
        if tweets[i]["username"] not in seen_usernames:
            seen_usernames.add(tweets[i]["username"])
            topusers.append({"username": tweets[i]["username"], "img": tweets[i]["profile_image_url"], "followers": tweets[i]['public_metrics_user']["followers_count"], "verified": tweets[i]["verified"]})
        if len(topusers) == 9:
            break
sorted_topusers = sorted(topusers, key = lambda i: i['followers'],reverse=True)
return sorted_topusers
#Function to extract the data displayed in the yellow header. Returns a dictionary with the total posts, users and engagement.
#Users counts only users that post something, not users that only retweet. Total posts is the total tweets, replies and quotes.
#Engagement is likes plus retweets.
def create_activity(json_response):
activity = {}
tweets = json_response["data"]
user_ids = []
engagement = 0
total_posts = 0
for i in range(len(tweets)):
if "referenced_tweets" in tweets[i]:
if tweets[i]['referenced_tweets'][0]["type"] != "retweeted":
engagement += tweets[i]['public_metrics']["retweet_count"]
engagement += tweets[i]['public_metrics']["like_count"]
total_posts += 1
if tweets[i]["author_id"] not in user_ids:
user_ids.append(tweets[i]["author_id"])
else:
engagement += tweets[i]['public_metrics']["retweet_count"]
engagement += tweets[i]['public_metrics']["like_count"]
total_posts += 1
if tweets[i]["author_id"] not in user_ids:
user_ids.append(tweets[i]["author_id"])
activity["posts"] = total_posts
activity["users"] = len(user_ids)
activity["engagement"] = engagement
return activity
def create_links(json_response):
links = []
tweets = json_response["data"]
for i in range(len(tweets)):
if "referenced_tweets" in tweets[i]:
if tweets[i]['referenced_tweets'][0]["type"] == "retweeted":
text = tweets[i]['text']
idxAt = text.find('@')
idxCo = text.find(':')
followers = tweets[i]['public_metrics_user']['followers_count']
if followers <= 10000:
size = 3
elif followers <= 50000:
size = 5
elif followers <= 100000:
size = 7
elif followers <= 1000000:
size = 10
elif followers > 1000000:
size = 13
links.append({'source': text[idxAt+1:idxCo], 'target': tweets[i]['username'], 'size':size})
elif tweets[i]['referenced_tweets'][0]["type"] == "replied_to":
text = tweets[i]['text']
idxAt = text.find('@')
idxS = text.find(' ')
followers = tweets[i]['public_metrics_user']['followers_count']
if followers <= 10000:
size = 3
elif followers <= 50000:
size = 5
elif followers <= 100000:
size = 7
elif followers <= 1000000:
size = 10
elif followers > 1000000:
size = 13
links.append({'source': text[idxAt+1:idxS], 'target': tweets[i]['username'], 'size':size})
return links
def create_nodes(links):
nodes = []
for i in range(len(links)):
if links[i]['source'] not in nodes:
nodes.append({"id":links[i]['source'], 'size':links[i]['size']})
if links[i]['target'] not in nodes:
nodes.append({"id":links[i]['target'], 'size':links[i]['size'] })
return nodes
#TODO: add an error catcher here; the geochart often fails for queries with few results
def create_geochart(json_response):
all_locations = []
tweets = json_response["data"]
for tweet in tweets:
if "location" in tweet:
all_locations.append(tweet["location"])
if len(all_locations) == 99:
break
all_countries = []
try:
g = geocoder.mapquest(all_locations, method='batch', key=config.mapquest_key)
for result in g:
all_countries.append(str(result.country))
geochart = dict((x,all_countries.count(x)) for x in set(all_countries))
except:
geochart = {}
return geochart
def all_text(json_response):
tweets = json_response["data"]
allText = []
for i in range(len(tweets)):
allText.append({"tweets_text": tweets[i]["text"]})
return allText
# Compute sentiment labels for the collected tweet texts
def show_tweets_text_sentiment(json_response):
tweets = json_response["data"]
textTweets=[]
for i, (k,v) in enumerate(tweets.items()):
for i in range(len(tweets[k]["alltext"])):
textTweets.append(tweets[k]["alltext"][i]["tweets_text"])
# Create a dataframe with a column called Tweets
df = pd.DataFrame(columns=['Tweets'])
for tweet in textTweets:
cleantweet = cleanTxt(tweet)
df = df.append({"Tweets": cleantweet}, ignore_index=True)
# Show rows of data
# Create two new columns 'Subjectivity' & 'Polarity'
df['Subjectivity'] = df['Tweets'].apply(getSubjectivity)
df['Polarity'] = df['Tweets'].apply(getPolarity)
df['Analysis'] = df['Polarity'].apply(getAnalysis)
pd.set_option('display.max_rows', df.shape[0]+1)
dictionaryObject = df.to_dict()
sentiment = {"Positive": 0, "Negative": 0, "Neutral": 0}
analysis = dictionaryObject["Analysis"]
for i in range(len(analysis)):
if analysis[i] == "Positive":
sentiment["Positive"] += 1
elif analysis[i] == "Negative":
sentiment["Negative"] += 1
else:
sentiment["Neutral"] += 1
return sentiment
# Create a function to clean the tweets
def cleanTxt(text):
    text = re.sub(r'@[A-Za-z0-9]+', '', text)  # Removing @mentions
    text = re.sub(r'#', '', text)  # Removing '#' hash tag
    text = re.sub(r'RT[\s]+', '', text)  # Removing RT
    text = re.sub(r'https?:\/\/\S+', '', text)  # Removing hyperlink
return text
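The four substitutions above can be exercised on a sample tweet; note the character class is written with a plain ASCII hyphen (`0-9`) here:

```python
import re

text = "RT @user1: check this #news https://t.co/abc"
text = re.sub(r'@[A-Za-z0-9]+', '', text)   # drop @mentions
text = re.sub(r'#', '', text)               # drop the hash sign
text = re.sub(r'RT[\s]+', '', text)         # drop the retweet marker
text = re.sub(r'https?:\/\/\S+', '', text)  # drop hyperlinks

assert text == ": check this news "
```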
# A function to get the subjectivity
def getSubjectivity(text):
return TextBlob(text).sentiment.subjectivity
# A function to get the polarity
def getPolarity(text):
return TextBlob(text).sentiment.polarity
# function to compute negative (-1), neutral (0) and positive (+1) analysis
def getAnalysis(score):
if score < 0:
return 'Negative'
elif score == 0:
return 'Neutral'
else:
return 'Positive'
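getAnalysis just thresholds the polarity score at zero, so counting labels over a batch works without TextBlob at all (the scores below are made up):

```python
def get_analysis(score):
    # Same zero-threshold rule as getAnalysis above
    if score < 0:
        return 'Negative'
    elif score == 0:
        return 'Neutral'
    return 'Positive'

scores = [-0.4, 0.0, 0.8, 0.1, -0.2]
sentiment = {"Positive": 0, "Negative": 0, "Neutral": 0}
for score in scores:
    sentiment[get_analysis(score)] += 1

assert sentiment == {"Positive": 2, "Negative": 2, "Neutral": 1}
```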
def reddit_api(query):
reddit_data = []
#Reddit API call, time displayed in unix
reddit = praw.Reddit(
client_id=config.client_id,
client_secret=config.client_secret,
user_agent="my user agent")
for submission in reddit.subreddit("all").search(query, limit=100):
try:
reddit_data.append({"author": str(submission.author.name), "title": str(submission.title),"name": str(submission.name), "upvote_ratio": submission.upvote_ratio, "upvotes": submission.ups,
"url": str(submission.permalink), "created_at": str(submission.created_utc), "subreddit": str(submission.subreddit), "number_of_comments": str(submission.num_comments),
"post_karma": submission.author.link_karma, "comment_karma": submission.author.comment_karma, "icon_img": submission.author.icon_img})
except:
print("User suspended")
print(reddit_data)
return reddit_data
def reddit_piechart(reddit_data):
ratio_sum = 0
for i in range(len(reddit_data)):
ratio_sum += reddit_data[i]["upvote_ratio"]
upvote_ratio = round(ratio_sum/len(reddit_data),2)
downvote_ratio = round(1-upvote_ratio,2)
ratio = [["Upvote Percentage", upvote_ratio*100], ["Downvote Percentage", downvote_ratio*100]]
return ratio
def reddit_wordcloud(reddit_data):
wordcloud = {}
for i in range(len(reddit_data)):
subreddit = reddit_data[i]["subreddit"]
if subreddit not in wordcloud:
wordcloud[subreddit] = 1
else:
wordcloud[subreddit] += 1
wordcloud_list = []
for value in wordcloud.keys():
if wordcloud[value]>1:
if wordcloud[value] <= 3:
wordcloud_list.append({"subreddit": value, "value": 1})
elif wordcloud[value] <= 6:
wordcloud_list.append({"subreddit": value, "value": 2})
elif wordcloud[value] <= 10:
wordcloud_list.append({"subreddit": value, "value": 3})
elif wordcloud[value] <= 20:
wordcloud_list.append({"subreddit": value, "value": 4})
else:
wordcloud_list.append({"subreddit": value, "value": 5})
return wordcloud_list
def reddit_linechart(reddit_data):
allDates = []
finalDates = {}
for i in range(len(reddit_data)):
timestamp = reddit_data[i]["created_at"]
timestamp = timestamp.replace(".0", "")
allDates.append(datetime.utcfromtimestamp(int(timestamp)).strftime('%Y-%m-%dT%H:%M:%S'))
allDates.sort()
for i in range(len(allDates)):
finalDates[allDates[i]]=i+1
return finalDates
def reddit_top_posts(reddit_data):
top_posts = []
for i in range(len(reddit_data)):
top_post = {}
top_post["title"] = reddit_data[i]["title"]
top_post["url"] = reddit_data[i]["url"]
top_post["author"] = reddit_data[i]["author"]
top_post["upvotes"] = reddit_data[i]["upvotes"]
top_post["icon_img"] = reddit_data[i]["icon_img"]
top_post["subreddit"] = reddit_data[i]["subreddit"]
top_post["number_of_comments"] = reddit_data[i]["number_of_comments"]
timestamp = reddit_data[i]["created_at"]
timestamp = timestamp.replace(".0", "")
top_post["created_at"]= datetime.utcfromtimestamp(int(timestamp)).strftime('%Y-%m-%d')
top_posts.append(top_post)
if len(top_posts) == 3:
break
return top_posts
def reddit_top_users(reddit_data):
reddit_data = sorted(reddit_data, key = lambda i: i['post_karma'],reverse=True)
top_users = []
for i in range(len(reddit_data)):
top_user = {}
top_user["author"] = reddit_data[i]["author"]
top_user["post_karma"] = reddit_data[i]["post_karma"]
top_user["comment_karma"] = reddit_data[i]["comment_karma"]
top_user["icon_img"] = reddit_data[i]["icon_img"]
if top_user not in top_users:
top_users.append(top_user)
if len(top_users) == 9:
break
return top_users
def reddit_engagement(reddit_data):
engagement = {}
user_ids = []
upvotes = 0
for i in range(len(reddit_data)):
upvotes += reddit_data[i]["upvotes"]
if reddit_data[i]["author"] not in user_ids:
user_ids.append(reddit_data[i]["author"])
engagement["posts"] = len(reddit_data)
engagement["users"] = len(user_ids)
engagement["engagement"] = upvotes
return engagement
if __name__ == '__main__':
app.run(debug=True)
# export FLASK_DEBUG=ON
| 35.684507 | 230 | 0.623934 | 3,163 | 25,336 | 4.828328 | 0.136579 | 0.05186 | 0.011393 | 0.020168 | 0.384953 | 0.335909 | 0.261983 | 0.211891 | 0.188973 | 0.171687 | 0 | 0.0116 | 0.254855 | 25,336 | 709 | 231 | 35.734838 | 0.797288 | 0.128237 | 0 | 0.330709 | 0 | 0 | 0.143246 | 0.015064 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066929 | false | 0 | 0.031496 | 0.005906 | 0.169291 | 0.019685 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8016a787d092f12d5ad3256d0e0eaded387f8caa | 4,484 | py | Python | test/test_criteria.py | enioluwa23/wordplay | 4a423dcb96458c6100beb88ba2fa19ad9dc81c3a | [
"Apache-2.0"
] | 3 | 2019-03-03T20:07:38.000Z | 2020-07-01T12:05:15.000Z | test/test_criteria.py | enioluwa23/wordplay | 4a423dcb96458c6100beb88ba2fa19ad9dc81c3a | [
"Apache-2.0"
] | null | null | null | test/test_criteria.py | enioluwa23/wordplay | 4a423dcb96458c6100beb88ba2fa19ad9dc81c3a | [
"Apache-2.0"
] | 1 | 2020-07-01T12:21:43.000Z | 2020-07-01T12:21:43.000Z | from collections import Counter as Ct
from wordplay.criteria import Criteria
from wordplay.utils import ArgumentError, CriteriaError, Utils
def setup_function(function):
Utils.set_disallowed_chars(set())
def test_copy_constructor():
orig_words = Criteria()
orig_words.begins_with('d').ends_with('n').contains('mna', 'tion')
orig_words.contains_at(('t', 6), ('o', 8)).size_is(9)
new_words = Criteria(orig_words)
if orig_words.get_begins_with() != new_words.get_begins_with():
assert False
if orig_words.get_ends_with() != new_words.get_ends_with():
assert False
if Ct(orig_words.get_contains()) != Ct(new_words.get_contains()):
assert False
if Ct(orig_words.get_contains_at()) != Ct(new_words.get_contains_at()):
assert False
if orig_words.get_size() != new_words.get_size():
assert False
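The `Counter` comparisons used throughout this file treat the option lists as order-insensitive multisets; a standalone sketch of why plain list equality would not work:

```python
from collections import Counter

a = ['or', 'we', 'a', 'moo']
b = ['moo', 'a', 'we', 'or']

# Equal as multisets even though the element order differs
assert Counter(a) == Counter(b)
# Plain list equality is order-sensitive
assert a != b
```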
def test_begins_with():
words1 = Criteria().begins_with('NOIR')
words2 = Criteria().begins_with(['n', 'o', 'i', 'r'])
words3 = Criteria().begins_with(('n', 'o', 'i', 'r'))
assert words1.get_begins_with() == 'noir'
assert words1.get_begins_with() == words2.get_begins_with()
assert words2.get_begins_with() == words3.get_begins_with()
Utils.set_disallowed_chars({'1', '2', '3'})
try:
Criteria().begins_with('123')
assert False
except ArgumentError:
assert True
def test_remove_begins_with():
words1 = Criteria().begins_with('noir')
words1.remove_begins_with('oi')
assert words1.get_begins_with() == 'nr'
words1.remove_begins_with()
assert len(words1.get_begins_with()) == 0
Utils.set_disallowed_chars({'1', '2', '3'})
try:
words1.remove_begins_with('123')
assert False
except ArgumentError:
assert True
def test_ends_with():
words1 = Criteria().ends_with('NOIR')
words2 = Criteria().ends_with(['n', 'o', 'i', 'r'])
words3 = Criteria().ends_with(('n', 'o', 'i', 'r'))
assert words1.get_ends_with() == 'noir'
assert words1.get_ends_with() == words2.get_ends_with()
assert words2.get_ends_with() == words3.get_ends_with()
Utils.set_disallowed_chars({'1', '2', '3'})
try:
Criteria().ends_with('123')
assert False
except ArgumentError:
assert True
def test_remove_ends_with():
words1 = Criteria().ends_with('noir')
words1.remove_ends_with('oi')
assert words1.get_ends_with() == 'nr'
words1.remove_ends_with()
assert len(words1.get_ends_with()) == 0
Utils.set_disallowed_chars({'1', '2', '3'})
try:
words1.remove_ends_with('123')
assert False
except ArgumentError:
assert True
def test_contains():
exp_list = ['or', 'we', 'a', 'moo']
words1 = Criteria().contains('or', 'a', 'we')
words1.contains('moo')
assert Ct(words1.get_contains()) == Ct(exp_list)
# add more failing cases
Utils.set_disallowed_chars({'1', '2', '3'})
try:
Criteria().contains('a', 'B', '123')
assert False
except ArgumentError:
assert True
def test_remove_contains():
words1 = Criteria().contains('or', 'a', 'we', 'moo')
exp_list = ['or']
words1.remove_contains('we', 'a', 'moo')
assert Ct(words1.get_contains()) == Ct(exp_list)
words1.remove_contains('or')
try:
words1.remove_contains('nonexistent')
assert False
except CriteriaError:
assert True
def test_contains_at():
exp_dict = {2: 'b', 1: 'a', 3: 'c'}
words1 = Criteria().contains_at(('a', 1), ('b', 2), ('c', 3))
assert exp_dict == words1.get_contains_at()
# add more failing cases
try:
Criteria().contains_at((1, 'a'), ('b', 2), ('c', 3))
assert False
except ArgumentError:
assert True
def test_remove_contains_at():
words1 = Criteria().contains_at(('a', 1), ('b', 2), ('c', 3))
exp_dict = {2: 'b'}
words1.remove_contains_at(('c', 3), ('a', 1))
assert exp_dict == words1.get_contains_at()
try:
words1.remove_contains_at(('a', 1))
assert False
except CriteriaError:
assert True
def test_size_is():
words1 = Criteria().size_is(8)
assert words1.get_size() == 8
# add more failing cases
try:
words1.size_is('a')
assert False
except ArgumentError:
assert True
def test_remove_size():
words1 = Criteria().size_is(8)
words1.remove_size()
assert words1.get_size() is None
| 27.012048 | 75 | 0.626673 | 597 | 4,484 | 4.452261 | 0.127303 | 0.075245 | 0.044018 | 0.057562 | 0.639202 | 0.516554 | 0.477427 | 0.369074 | 0.290068 | 0.228367 | 0 | 0.029043 | 0.216771 | 4,484 | 165 | 76 | 27.175758 | 0.72779 | 0.015165 | 0 | 0.439024 | 0 | 0 | 0.034678 | 0 | 0 | 0 | 0 | 0 | 0.317073 | 1 | 0.097561 | false | 0 | 0.02439 | 0 | 0.121951 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
801b0a820b6146857753c0497931ce3d9e514bf4 | 736 | py | Python | docs/source/ipython_utils.py | pilillo/nilmtk | 00bb1f948b6de1cc5c891102728c19b9bfc7c739 | [
"Apache-2.0"
] | null | null | null | docs/source/ipython_utils.py | pilillo/nilmtk | 00bb1f948b6de1cc5c891102728c19b9bfc7c739 | [
"Apache-2.0"
] | null | null | null | docs/source/ipython_utils.py | pilillo/nilmtk | 00bb1f948b6de1cc5c891102728c19b9bfc7c739 | [
"Apache-2.0"
] | null | null | null | from IPython.core.display import HTML, display
def dict_to_html(dictionary):
html = '<ul>'
    for key, value in dictionary.items():  # Python 3: dict has no iteritems()
html += '<li><strong>{}</strong>: '.format(key)
if isinstance(value, list):
html += '<ul>'
for item in value:
html += '<li>{}</li>'.format(item)
html += '</ul>'
elif isinstance(value, dict):
html += dict_to_html(value)
else:
try:
html += '{}'.format(value)
except UnicodeEncodeError:
pass
html += '</li>'
html += '</ul>'
return html
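For illustration, a self-contained Python 3 sketch of the recursive rendering above (the same logic restated with `dict.items()`; the Unicode guard is dropped for brevity):

```python
def dict_to_html(dictionary):
    # Render a (possibly nested) dict as nested <ul> lists.
    html = '<ul>'
    for key, value in dictionary.items():
        html += '<li><strong>{}</strong>: '.format(key)
        if isinstance(value, list):
            html += '<ul>'
            for item in value:
                html += '<li>{}</li>'.format(item)
            html += '</ul>'
        elif isinstance(value, dict):
            html += dict_to_html(value)  # recurse into nested dicts
        else:
            html += '{}'.format(value)
        html += '</li>'
    html += '</ul>'
    return html

print(dict_to_html({'a': 1}))  # <ul><li><strong>a</strong>: 1</li></ul>
```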
def print_dict(dictionary):
html = dict_to_html(dictionary)
display(HTML(html))
| 28.307692 | 55 | 0.508152 | 78 | 736 | 4.705128 | 0.397436 | 0.065395 | 0.081744 | 0.108992 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.34375 | 736 | 25 | 56 | 29.44 | 0.759834 | 0 | 0 | 0.086957 | 0 | 0 | 0.08288 | 0.032609 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0.043478 | 0.043478 | 0 | 0.173913 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
80223a50e4ed12592f7caf55b094af49cf85c2c3 | 996 | py | Python | experiments/gethpduplex.py | timnbraun/hpduplexfix | 1110641c23cd469c5fa1a88eebad375fbc83f4b9 | [
"Apache-2.0"
] | null | null | null | experiments/gethpduplex.py | timnbraun/hpduplexfix | 1110641c23cd469c5fa1a88eebad375fbc83f4b9 | [
"Apache-2.0"
] | null | null | null | experiments/gethpduplex.py | timnbraun/hpduplexfix | 1110641c23cd469c5fa1a88eebad375fbc83f4b9 | [
"Apache-2.0"
] | null | null | null | #! /usr/bin/python
from winreg import QueryValueEx, OpenKey, CloseKey, HKEY_LOCAL_MACHINE
DEBUG = False
def decode_key_val(keyword_bytes):
keyword_list = keyword_bytes.split(b'\0')
DEBUG and print(keyword_list)
duplex_key = str( keyword_list[0], encoding='utf8' )
duplex_val = str( keyword_list[1], encoding='utf8' )
print(duplex_key, '=', duplex_val)
driverdata = OpenKey( HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Control\Print\Printers\hplaser (HP LaserJet Professional P1606dn)\PrinterDriverData" )  # raw string: registry paths contain backslashes
dup_val_type = QueryValueEx( driverdata, 'FeatureKeyword' )
CloseKey( driverdata )
DEBUG and print(dup_val_type[0])
decode_key_val( dup_val_type[0] )
driverdata = OpenKey( HKEY_LOCAL_MACHINE, r'SOFTWARE\WOW6432Node\Microsoft\Windows NT\CurrentVersion\Print\Printers\hplaser (HP LaserJet Professional P1606dn)\PrinterDriverData' )  # raw string: registry paths contain backslashes
dup_val_type = QueryValueEx( driverdata, 'FeatureKeyword' )
CloseKey( driverdata )
decode_key_val( dup_val_type[0] )
| 34.344828 | 179 | 0.769076 | 125 | 996 | 5.872 | 0.408 | 0.040872 | 0.06812 | 0.044959 | 0.506812 | 0.416894 | 0.416894 | 0.354223 | 0.354223 | 0.354223 | 0 | 0.022989 | 0.126506 | 996 | 28 | 180 | 35.571429 | 0.82069 | 0.017068 | 0 | 0.352941 | 0 | 0.058824 | 0.293994 | 0.194942 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
80225b992a73e8ebabee59d36fcf5edf65e69f25 | 4,685 | py | Python | Word.py | eiaiestyi/project_15c | 653bf50d07fd512befbb3d8818864252a7f8caf1 | [
"MIT"
] | null | null | null | Word.py | eiaiestyi/project_15c | 653bf50d07fd512befbb3d8818864252a7f8caf1 | [
"MIT"
] | null | null | null | Word.py | eiaiestyi/project_15c | 653bf50d07fd512befbb3d8818864252a7f8caf1 | [
"MIT"
] | null | null | null | import random
from airtable import Airtable
import config
class Word:
def __init__(self, user):
self.user_name = user
# Take scores
print('Collecting scores...')
self.scores_table = Airtable(config.base_key, 'scores', config.api_key)
scores = self.scores_table.search('user_name', self.user_name)
self.scores = {}
self.existing_scores = {}
self.scores_total = 0
for score in scores:
self.scores[score['fields']['word_id']] = score['fields']
self.existing_scores[score['fields']['word_id']] = score['fields']
self.scores_total += score['fields']['score']
self.scores_mean = self.scores_total / len(self.scores) if len(self.scores) > 0 else 0
print('Scores collected.')
# Take words
print('Collecting words...')
words_table = Airtable(config.base_key, 'words', config.api_key)
words = words_table.get_all()
self.known_words = {}
self.learn_words = {}
for word in words:
word_id = word['fields']['id']
if word_id in self.scores:
if self.scores[word_id]['score'] > self.scores_mean:
# If score is bigger than mean, add word to known_words
self.known_words[word_id] = word['fields']
else:
# If score smaller or equal to mean, add to learn_word
self.learn_words[word_id] = word['fields']
else:
self.learn_words[word_id] = word['fields']
print('Words collected.')
def learn(self):
"""
Show words in English and write answer in Spanish.
Calculate correct answers.
:return:
"""
print('If you want to end learning, type /END at any time.')
# Get random word id from learn_words list
word_id = random.choice(list(self.learn_words))
word = self.learn_words[word_id]
# Take answer from input
answer = input('Enter Spanish word for "{}": '.format(word['english']))
if answer.upper() == '/END':
# If /END entered, then show stats and end program
return self.stats()
elif answer.lower() == word['spanish'].lower():
# If answer correct
print('Correct.')
score = 1
else:
# If answer incorrect
print('Incorrect. Correct word was:', word['spanish'])
score = -1
if word_id in self.scores:
# If word already in scores, add score to current score
self.scores[word_id]['score'] += score
# Update score in table
self.scores_table.update_by_field('word_id', word_id, {'score': self.scores[word_id]['score']})
else:
# If word is not in scores, add new score
self.scores[word_id] = {'user_name': self.user_name, 'word_id': word_id, 'score': score}
# Add new score in table
self.scores_table.insert(self.scores[word_id])
# Update total score
self.scores_total += score
# Update scores mean
self.scores_mean = self.scores_total / len(self.scores)
if self.scores[word_id]['score'] > self.scores_mean:
# If word score is higher than mean, then put it to known_words
self.known_words[word_id] = self.learn_words.pop(word_id)
self.learn()
def stats(self):
"""
Show words statistics.
:return:
"""
# Total
print('Total score:', self.scores_total)
# Mean
print('Mean score:', round(self.scores_mean, 2))
# Lowest
lowest = min(self.scores, key=lambda x: self.scores[x]['score'])
print('Lowest score:', self.scores[lowest]['score'], end=' ')
if lowest in self.learn_words:
print('({} - {})'.format(self.learn_words[lowest]['english'], self.learn_words[lowest]['spanish']))
else:
print('({} - {})'.format(self.known_words[lowest]['english'], self.known_words[lowest]['spanish']))
# Highest
highest = max(self.scores, key=lambda x: self.scores[x]['score'])
print('Highest score:', self.scores[highest]['score'], end=' ')
if highest in self.learn_words:
print('({} - {})'.format(self.learn_words[highest]['english'], self.learn_words[highest]['spanish']))
else:
print('({} - {})'.format(self.known_words[highest]['english'], self.known_words[highest]['spanish']))
| 42.981651 | 114 | 0.560512 | 561 | 4,685 | 4.538324 | 0.180036 | 0.133543 | 0.065986 | 0.037706 | 0.395522 | 0.311862 | 0.25216 | 0.187745 | 0.132757 | 0.10055 | 0 | 0.001857 | 0.310352 | 4,685 | 108 | 115 | 43.37963 | 0.786134 | 0.141942 | 0 | 0.166667 | 0 | 0 | 0.137363 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.041667 | 0 | 0.111111 | 0.208333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8026ac69d84de1cb16032bb05a12822b2e9ca512 | 2,805 | py | Python | app.py | lucashenning/ntag424-backend | bfa442969d82357b9e9fc482dfe1a32f0827709a | [
"MIT"
] | null | null | null | app.py | lucashenning/ntag424-backend | bfa442969d82357b9e9fc482dfe1a32f0827709a | [
"MIT"
] | null | null | null | app.py | lucashenning/ntag424-backend | bfa442969d82357b9e9fc482dfe1a32f0827709a | [
"MIT"
] | null | null | null | import argparse
import binascii
from flask import Flask, request, render_template, jsonify
from werkzeug.exceptions import BadRequest
from config import SDMMAC_PARAM, ENC_FILE_DATA_PARAM, ENC_PICC_DATA_PARAM, SDM_FILE_READ_KEY, SDM_META_READ_KEY
from ntag424 import decrypt_sun_message, InvalidMessage
app = Flask(__name__)
@app.route('/')
def sdm_main():
"""
Main page with a few examples.
"""
return render_template('sdm_main.html')
@app.route('/tag')
def sdm_info():
"""
SUN decrypting/validating endpoint.
"""
enc_picc_data = request.args.get(ENC_PICC_DATA_PARAM)
enc_file_data = request.args.get(ENC_FILE_DATA_PARAM)
sdmmac = request.args.get(SDMMAC_PARAM)
    if not enc_picc_data:
        raise BadRequest("Parameter {} is required".format(ENC_PICC_DATA_PARAM))
if not sdmmac:
raise BadRequest("Parameter {} is required".format(SDMMAC_PARAM))
try:
enc_file_data_b = None
enc_picc_data_b = binascii.unhexlify(enc_picc_data)
sdmmac_b = binascii.unhexlify(sdmmac)
if enc_file_data:
enc_file_data_b = binascii.unhexlify(enc_file_data)
except binascii.Error:
raise BadRequest("Failed to decode parameters.")
try:
res = decrypt_sun_message(sdm_meta_read_key=SDM_META_READ_KEY,
sdm_file_read_key=SDM_FILE_READ_KEY,
picc_enc_data=enc_picc_data_b,
sdmmac=sdmmac_b,
enc_file_data=enc_file_data_b)
except InvalidMessage:
raise BadRequest("Invalid message (most probably wrong signature).")
picc_data_tag, uid, read_ctr_num, file_data = res
file_data_utf8 = ""
if file_data:
file_data_utf8 = file_data.decode('utf-8', 'ignore')
if request.content_type == 'application/json':
return jsonify(picc_data_tag=picc_data_tag.hex(),
uid=uid.hex(),
read_ctr_num=read_ctr_num,
file_data_utf8=file_data_utf8)
else:
return render_template('sdm_info.html',
picc_data_tag=picc_data_tag,
uid=uid,
read_ctr_num=read_ctr_num,
file_data=file_data,
file_data_utf8=file_data_utf8)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='OTA NFC Server')
parser.add_argument('--host', type=str, nargs='?',
help='address to listen on')
parser.add_argument('--port', type=int, nargs='?',
help='port to listen on')
args = parser.parse_args()
app.run(host=args.host, port=args.port)
| 32.616279 | 111 | 0.622816 | 352 | 2,805 | 4.585227 | 0.292614 | 0.099133 | 0.061338 | 0.02974 | 0.315366 | 0.241636 | 0.158612 | 0.095415 | 0 | 0 | 0 | 0.005018 | 0.289483 | 2,805 | 85 | 112 | 33 | 0.804817 | 0.023529 | 0 | 0.101695 | 0 | 0 | 0.094165 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.101695 | 0 | 0.186441 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8029ba05d1267e539bc300c17161b1a4be505f74 | 6,025 | py | Python | bmcs_beam/mxn/matresdev/simiter/sim_pstudy/tutorial.py | bmcs-group/bmcs_beam | b53967d0d0461657ec914a3256ec40f9dcff80d5 | [
"MIT"
] | 1 | 2021-05-07T11:10:27.000Z | 2021-05-07T11:10:27.000Z | bmcs_beam/mxn/matresdev/simiter/sim_pstudy/tutorial.py | bmcs-group/bmcs_beam | b53967d0d0461657ec914a3256ec40f9dcff80d5 | [
"MIT"
] | null | null | null | bmcs_beam/mxn/matresdev/simiter/sim_pstudy/tutorial.py | bmcs-group/bmcs_beam | b53967d0d0461657ec914a3256ec40f9dcff80d5 | [
"MIT"
] | null | null | null | #-------------------------------------------------------------------------------
#
# Copyright (c) 2009, IMB, RWTH Aachen.
# All rights reserved.
#
# This software is provided without warranty under the terms of the BSD
# license included in simvisage/LICENSE.txt and may be redistributed only
# under the conditions described in the aforementioned license. The license
# is also available online at http://www.simvisage.com/licenses/BSD.txt
#
# Thanks for using Simvisage open source!
#
# Created on Feb 3, 2010 by: rch
if __name__ == '__main__':
# DEFINE A SIMULATION MODEL
# -------------------------
# import a model - a class that provides
# - factors as traits with a prescribed set of levels
# that should be included in the parametric study.
# A trait is regarded as Factor if it specifies
# a set of levels in form ps_levels = True.
# - get_sim_outputs() as instances of SimOut class
# defining the name and order of outputs
# - peval() method returning a vector of results
# the order of outputs is prescribed by the
# specification given in the get_outputs() method
#
# Here we just import predefined Foo model with three four inputs
#
# [ index_1, material_model, param_1, param_2 ]
#
# The inputs have the types [ Int, Callable, Float, Float
#
from .sim_model import SimModel
sim_model = SimModel()
# The model response is obtained by issueing
#
print('default evaluation', sim_model.peval())
# returning an array with two values.
# In this call, the default values of the factors
# [input_1, material_model, param_1, param_2 ]
# were taken. The factor levels were ignored.
# DEFINING A STUDY
# --------------
# In order to study the response of the model in a broader range
# of specified levels we now construct the parametric study
#
from .sim_pstudy import SimArray
pstudy = SimArray( sim_model = sim_model )
# ACCESSING OUTPUTS
# --------------
# The pstudy can be regarded as an n-dimensional array
# providing the results for each combination of factor levels
# In order to get the model response for the first level
# of all parameters the index operator can be used as follows:
#
print('model output for ground levels', pstudy[0,0,0,0])
# The combination of last levels is obtained as
#
print('model output for floor levels', pstudy[-1,-1,-1,-1])
# The computation of outputs is performed on demand and cached.
# Thus, if an index appears the second time only the cached value
# is returned.
#
print('lookup the output in the cache', pstudy[-1,-1,-1,-1])
# Just like for any array, indexes may be sliced
#
print('get the outputs for all levels of index_1', pstudy[:,0,0,0])
# the result of this call is 2-dimensional array with the first index
# specifying the level of index_1 and second index giving the output
# In analogy, for slice over the last two indexes
#
print('get the outputs for all combinations of param_1 x param_2', pstudy[0,0,:,:])
# a 3-dimensional array is returned with first two indexes correspond
# to the levels of param_1 and param_2 and third index identifying the output
# Finally, the whole study can be performed using ellipsis
#
print('get all the values in the n-dimensional space', pstudy[...])
# The result is a 5-dimensional array with first four indexes
# denoting the factor levels and last index the output.
# Note that the previously accessed slices are reused in this call.
# MODIFYING LEVELS
# ----------------
# The initially specified set of default levels for each factor
# can be changed within an existing study.
#
pstudy.factor_dict['param_1'].max_level = 10
pstudy.factor_dict['param_1'].n_levels = 5
print('get output for first two levels of param_1', pstudy[-1,-1,1,-1])
# Note that values for pstudy[:,:,0,:] were included in the old
# grid of levels and are reused in the new study as well.
# COMMENTS
# --------
# The study is limited to an a regular grid of levels. This can be regarded
# as a full factorial in the nomenclature of the Design of Experiments (DOE).
# The table showing the full factorial in the user interface can be seen by issuing
#
pstudy.configure_traits()
# The evaluation of the model is not performed for all possible
# combinations of factor levels. The evaluation is done first when
# the particular value is accessed using the level indexes.
#
# The numpy indexing functions are used to access values in the
# factor space. The data structure can be used for
# - on demand construction of 2D and 3D views into the output
# space of the study
# - construction of a regression model within the factor space in analogy
# to the DOE
# - supporting adaptive integration procedures within the factor space
# as it is the case in statistical analysis of multi-variate response.
#
# An example of the first application can be demonstrated by using the
# SimArrayView class
#
from .sim_array_view import SimArrayView
SimArrayView( model = pstudy ).configure_traits()
# SAVING THE STUDY
# ----------------
# In order to save the study, it the SimPStudy class manages
# the object persistence. It associates the study with a file name
# and monitors whether the study has been changed or not.
#
from .sim_pstudy import SimPStudy
sim_pstudy = SimPStudy( sim_model = sim_model )
sim_pstudy.configure_traits()
# RESETTING THE CACHE
# -------------------
# LOADING AN EXISTING STUDY
# -------------------------
| 39.638158 | 87 | 0.642324 | 829 | 6,025 | 4.609168 | 0.325694 | 0.013086 | 0.004711 | 0.007066 | 0.045538 | 0.026171 | 0.013609 | 0 | 0 | 0 | 0 | 0.012212 | 0.266058 | 6,025 | 152 | 88 | 39.638158 | 0.851877 | 0.704232 | 0 | 0 | 0 | 0 | 0.187463 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.190476 | 0 | 0.190476 | 0.380952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
802a91a4886d822f94d7ffe84abf1dae5e5b1d5c | 6,409 | py | Python | poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py | Eyerunmyden/HWMgmt-MegaRAC-OpenEdition | 72b03e9fc6e2a13184f1c57b8045b616db9b0a6d | [
"Apache-2.0",
"MIT"
] | 14 | 2021-11-04T07:47:37.000Z | 2022-03-21T10:10:30.000Z | poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py | Eyerunmyden/HWMgmt-MegaRAC-OpenEdition | 72b03e9fc6e2a13184f1c57b8045b616db9b0a6d | [
"Apache-2.0",
"MIT"
] | null | null | null | poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py | Eyerunmyden/HWMgmt-MegaRAC-OpenEdition | 72b03e9fc6e2a13184f1c57b8045b616db9b0a6d | [
"Apache-2.0",
"MIT"
] | 6 | 2021-11-02T10:56:19.000Z | 2022-03-06T11:58:20.000Z | #
# SPDX-License-Identifier: MIT
#
import os
from oeqa.selftest.case import OESelftestTestCase
import tempfile
import operator
from oeqa.utils.commands import get_bb_var
class TestBlobParsing(OESelftestTestCase):
def setUp(self):
import time
self.repo_path = tempfile.mkdtemp(prefix='selftest-buildhistory',
dir=get_bb_var('TOPDIR'))
try:
from git import Repo
self.repo = Repo.init(self.repo_path)
except ImportError:
self.skipTest('Python module GitPython is not present')
self.test_file = "test"
self.var_map = {}
def tearDown(self):
import shutil
shutil.rmtree(self.repo_path)
def commit_vars(self, to_add={}, to_remove = [], msg="A commit message"):
if len(to_add) == 0 and len(to_remove) == 0:
return
for k in to_remove:
            self.var_map.pop(k, None)
for k in to_add:
self.var_map[k] = to_add[k]
with open(os.path.join(self.repo_path, self.test_file), 'w') as repo_file:
for k in self.var_map:
repo_file.write("%s = %s\n" % (k, self.var_map[k]))
self.repo.git.add("--all")
self.repo.git.commit(message=msg)
def test_blob_to_dict(self):
"""
Test conversion of git blobs to dictionary
"""
from oe.buildhistory_analysis import blob_to_dict
valuesmap = { "foo" : "1", "bar" : "2" }
self.commit_vars(to_add = valuesmap)
blob = self.repo.head.commit.tree.blobs[0]
self.assertEqual(valuesmap, blob_to_dict(blob),
"commit was not translated correctly to dictionary")
def test_compare_dict_blobs(self):
"""
Test comparisson of dictionaries extracted from git blobs
"""
from oe.buildhistory_analysis import compare_dict_blobs
changesmap = { "foo-2" : ("2", "8"), "bar" : ("","4"), "bar-2" : ("","5")}
self.commit_vars(to_add = { "foo" : "1", "foo-2" : "2", "foo-3" : "3" })
blob1 = self.repo.heads.master.commit.tree.blobs[0]
self.commit_vars(to_add = { "foo-2" : "8", "bar" : "4", "bar-2" : "5" })
blob2 = self.repo.heads.master.commit.tree.blobs[0]
change_records = compare_dict_blobs(os.path.join(self.repo_path, self.test_file),
blob1, blob2, False, False)
var_changes = { x.fieldname : (x.oldvalue, x.newvalue) for x in change_records}
self.assertEqual(changesmap, var_changes, "Changes not reported correctly")
def test_compare_dict_blobs_default(self):
"""
Test default values for comparisson of git blob dictionaries
"""
from oe.buildhistory_analysis import compare_dict_blobs
defaultmap = { x : ("default", "1") for x in ["PKG", "PKGE", "PKGV", "PKGR"]}
self.commit_vars(to_add = { "foo" : "1" })
blob1 = self.repo.heads.master.commit.tree.blobs[0]
self.commit_vars(to_add = { "PKG" : "1", "PKGE" : "1", "PKGV" : "1", "PKGR" : "1" })
blob2 = self.repo.heads.master.commit.tree.blobs[0]
change_records = compare_dict_blobs(os.path.join(self.repo_path, self.test_file),
blob1, blob2, False, False)
var_changes = {}
for x in change_records:
oldvalue = "default" if ("default" in x.oldvalue) else x.oldvalue
var_changes[x.fieldname] = (oldvalue, x.newvalue)
self.assertEqual(defaultmap, var_changes, "Defaults not set properly")
class TestFileListCompare(OESelftestTestCase):
def test_compare_file_lists(self):
# Test that a directory tree that moves location such as /lib/modules/5.4.40-yocto-standard -> /lib/modules/5.4.43-yocto-standard
# is correctly identified as a move
from oe.buildhistory_analysis import compare_file_lists, FileChange
with open(self.tc.files_dir + "/buildhistory_filelist1.txt", "r") as f:
filelist1 = f.readlines()
with open(self.tc.files_dir + "/buildhistory_filelist2.txt", "r") as f:
filelist2 = f.readlines()
expectedResult = [
'/lib/libcap.so.2 changed symlink target from libcap.so.2.33 to libcap.so.2.34',
'/lib/libcap.so.2.33 moved to /lib/libcap.so.2.34',
'/lib/modules/5.4.40-yocto-standard moved to /lib/modules/5.4.43-yocto-standard',
'/lib/modules/5.4.43-yocto-standard/modules.builtin.alias.bin was added',
'/usr/bin/gawk-5.0.1 moved to /usr/bin/gawk-5.1.0',
'/usr/lib/libbtrfsutil.so changed symlink target from libbtrfsutil.so.1.1.1 to libbtrfsutil.so.1.2.0',
'/usr/lib/libbtrfsutil.so.1 changed symlink target from libbtrfsutil.so.1.1.1 to libbtrfsutil.so.1.2.0',
'/usr/lib/libbtrfsutil.so.1.1.1 moved to /usr/lib/libbtrfsutil.so.1.2.0',
'/usr/lib/libkmod.so changed symlink target from libkmod.so.2.3.4 to libkmod.so.2.3.5',
'/usr/lib/libkmod.so.2 changed symlink target from libkmod.so.2.3.4 to libkmod.so.2.3.5',
'/usr/lib/libkmod.so.2.3.4 moved to /usr/lib/libkmod.so.2.3.5',
'/usr/lib/libpixman-1.so.0 changed symlink target from libpixman-1.so.0.38.4 to libpixman-1.so.0.40.0',
'/usr/lib/libpixman-1.so.0.38.4 moved to /usr/lib/libpixman-1.so.0.40.0',
'/usr/lib/opkg/alternatives/rtcwake was added',
'/usr/lib/python3.8/site-packages/PyGObject-3.34.0.egg-info moved to /usr/lib/python3.8/site-packages/PyGObject-3.36.1.egg-info',
'/usr/lib/python3.8/site-packages/btrfsutil-1.1.1-py3.8.egg-info moved to /usr/lib/python3.8/site-packages/btrfsutil-1.2.0-py3.8.egg-info',
'/usr/lib/python3.8/site-packages/pycairo-1.19.0.egg-info moved to /usr/lib/python3.8/site-packages/pycairo-1.19.1.egg-info',
'/usr/sbin/rtcwake changed type from file to symlink',
'/usr/sbin/rtcwake changed permissions from rwxr-xr-x to rwxrwxrwx',
'/usr/sbin/rtcwake changed symlink target from None to /usr/sbin/rtcwake.util-linux',
'/usr/sbin/rtcwake.util-linux was added'
]
result = compare_file_lists(filelist1, filelist2)
rendered = []
for entry in sorted(result, key=operator.attrgetter("path")):
rendered.append(str(entry))
self.maxDiff = None
self.assertCountEqual(rendered, expectedResult)
| 43.89726 | 151 | 0.626775 | 926 | 6,409 | 4.24622 | 0.219222 | 0.027467 | 0.035605 | 0.042726 | 0.467192 | 0.398016 | 0.365717 | 0.306714 | 0.244151 | 0.203459 | 0 | 0.037097 | 0.234514 | 6,409 | 145 | 152 | 44.2 | 0.76437 | 0.055079 | 0 | 0.1 | 0 | 0.17 | 0.340411 | 0.173717 | 0 | 0 | 0 | 0 | 0.04 | 1 | 0.07 | false | 0 | 0.13 | 0 | 0.23 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
802c2808abfe71ea1b67247ca6afede8b7346d0f | 13,059 | py | Python | fio_plot/fiolib/supporting.py | bhelm/fio-plot | 60dc206a5486c7e43647d90ea36f4474b4601431 | [
"BSD-3-Clause"
] | 148 | 2017-04-05T22:15:39.000Z | 2022-03-31T14:39:30.000Z | fio_plot/fiolib/supporting.py | bhelm/fio-plot | 60dc206a5486c7e43647d90ea36f4474b4601431 | [
"BSD-3-Clause"
] | 66 | 2017-05-08T22:21:45.000Z | 2022-02-26T22:53:58.000Z | fio_plot/fiolib/supporting.py | bhelm/fio-plot | 60dc206a5486c7e43647d90ea36f4474b4601431 | [
"BSD-3-Clause"
] | 61 | 2017-11-09T21:52:24.000Z | 2022-03-31T14:38:37.000Z | #!/usr/local/bin env
import pprint as pprint
import statistics
import numpy as np
from datetime import datetime
from PIL.PngImagePlugin import PngImageFile, PngInfo
import random
import string
def running_mean(l, N):
    """From a list of values (l), calculate the running mean with a
    window of (N) items. The larger the value of N, the smoother the graph.
    """
sum = 0
result = list(0 for x in l)
for i in range(0, N):
sum = sum + l[i]
result[i] = sum / (i + 1)
for i in range(N, len(l)):
sum = sum - l[i - N] + l[i]
result[i] = sum / N
return result
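A quick sanity check of the windowed mean; the function is restated here with unchanged logic so the snippet runs standalone:

```python
def running_mean(l, N):
    # Cumulative mean until the window fills, then a sliding window of N items.
    sum = 0
    result = list(0 for x in l)
    for i in range(0, N):
        sum = sum + l[i]
        result[i] = sum / (i + 1)
    for i in range(N, len(l)):
        sum = sum - l[i - N] + l[i]
        result[i] = sum / N
    return result

print(running_mean([1, 2, 3, 4, 5], 2))  # [1.0, 1.5, 2.5, 3.5, 4.5]
```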
def scale_xaxis_time(dataset):
    """FIO records log data time stamps in milliseconds. To prevent huge numbers
    on the x-axis, the values are scaled to seconds, minutes or hours based on
    the mean value of all data."""
result = {"format": "Time (ms)", "data": dataset}
mean = statistics.mean(dataset)
if (mean > 1000) & (mean < 1000000):
result["data"] = [x / 1000 for x in dataset]
result["format"] = "Time (s)"
if mean > 1000000:
result["data"] = [x / 60000 for x in dataset]
result["format"] = "Time (m)"
if mean > 36000000: # only switch to hours with enough datapoints (+10)
result["data"] = [x / 3600000 for x in dataset]
result["format"] = "Time (h)"
return result
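The unit selection can be exercised standalone; the function is restated below unchanged, and the sample values are illustrative:

```python
import statistics

def scale_xaxis_time(dataset):
    # Pick seconds / minutes / hours based on the mean millisecond timestamp;
    # later branches overwrite earlier ones, so the largest matching unit wins.
    result = {"format": "Time (ms)", "data": dataset}
    mean = statistics.mean(dataset)
    if (mean > 1000) & (mean < 1000000):
        result["data"] = [x / 1000 for x in dataset]
        result["format"] = "Time (s)"
    if mean > 1000000:
        result["data"] = [x / 60000 for x in dataset]
        result["format"] = "Time (m)"
    if mean > 36000000:
        result["data"] = [x / 3600000 for x in dataset]
        result["format"] = "Time (h)"
    return result

print(scale_xaxis_time([2000, 4000]))  # {'format': 'Time (s)', 'data': [2.0, 4.0]}
```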
def get_scale_factor_lat(dataset):
"""The mean of the dataset is calculated. The size of the mean will determine
    which scale factor should be used on the data. The data is not scaled; only
    the scale factor and the y-axis label are returned in a dictionary.
"""
mean = statistics.mean(dataset)
scale_factors = [
{"scale": 1000000, "label": "Latency (ms)"},
{"scale": 1000, "label": "Latency (\u03BCs)"},
{"scale": 1, "label": "Latency (ns)"},
]
for item in scale_factors:
        """The factor of 5 prevents scaling the graph up too soon when values
        are small, which would make it almost unreadable"""
if mean > item["scale"] * 5:
return item
def get_largest_scale_factor(scale_factors):
    """Based on multiple datasets, the highest scale factor is determined.
    This ensures that the values on the y-axis don't become too large.
    """
    largestscalefactor = max(scale_factors, key=lambda x: x["scale"])  # take the true maximum across all collected factors
return largestscalefactor
def scale_yaxis(dataset, scale):
"""The dataset supplied is scaled with the supplied scale. The scaled
dataset is returned."""
result = {}
result["data"] = [x / scale["scale"] for x in dataset]
result["format"] = scale["label"]
return result
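Taken together, factor selection and y-axis scaling form a small pipeline; a standalone restatement with illustrative latency values in nanoseconds:

```python
import statistics

def get_scale_factor_lat(dataset):
    # Choose the largest unit whose threshold (scale * 5) the mean exceeds.
    mean = statistics.mean(dataset)
    scale_factors = [
        {"scale": 1000000, "label": "Latency (ms)"},
        {"scale": 1000, "label": "Latency (\u03BCs)"},
        {"scale": 1, "label": "Latency (ns)"},
    ]
    for item in scale_factors:
        if mean > item["scale"] * 5:
            return item

def scale_yaxis(dataset, scale):
    # Divide every value by the chosen scale and report the matching label.
    return {"data": [x / scale["scale"] for x in dataset],
            "format": scale["label"]}

factor = get_scale_factor_lat([6000, 8000])  # mean 7000 ns -> microseconds
print(scale_yaxis([6000, 8000], factor))
```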
def get_scale_factor_iops(dataset):
mean = statistics.mean(dataset)
scale_factors = [
{"scale": 1000000, "label": "M IOPs"},
{"scale": 1000, "label": "K IOPs"},
{"scale": 1, "label": "IOPs"},
]
for item in scale_factors:
if mean > item["scale"] * 5:
return item
def get_scale_factor_bw(dataset):
mean = statistics.mean(dataset)
scale_factors = [
{"scale": 1048576, "label": "GB/s"},
{"scale": 1024, "label": "MB/s"},
{"scale": 1, "label": "KB/s"},
]
for item in scale_factors:
if mean > item["scale"] * 5:
return item
def get_scale_factor_bw_ss(dataset):
mean = statistics.mean(dataset)
scale_factors = [
{"scale": 1073741824, "label": "GB/s"},
{"scale": 1048576, "label": "MB/s"},
{"scale": 1024, "label": "KB/s"},
{"scale": 1, "label": "B/s"},
]
for item in scale_factors:
if mean > item["scale"] * 5:
return item
def lookupTable(metric):
lookup = {
"iops": {"ylabel": "IOPS", "label_pos": 0, "label_rot": "vertical"},
"bw": {"ylabel": "Througput (KB/s)", "label_pos": -55, "label_rot": "vertical"},
"lat": {"ylabel": "LAT Latency (ms)", "label_pos": 5, "label_rot": "vertical"},
"slat": {
"ylabel": "SLAT Latency (ms)",
"label_pos": 5,
"label_rot": "vertical",
},
"clat": {
"ylabel": "CLAT Latency (ms)",
"label_pos": 5,
"label_rot": "vertical",
},
}
return lookup[metric]
def generate_axes(ax, datatypes):
axes = {}
metrics = ["iops", "lat", "bw", "clat", "slat"]
tkw = dict(size=4, width=1.5)
first_not_used = True
positions = [0, 5, -55]
for item in metrics:
if item in datatypes:
if first_not_used:
value = ax
value.tick_params(axis="x", **tkw)
first_not_used = False
ax.grid(ls="dotted")
else:
value = ax.twinx()
value.tick_params(axis="y", **tkw)
axes[item] = value
axes[item].ticklabel_format(style="plain")
axes[f"{item}_pos"] = positions.pop(0)
            # Two entries (axis and position) are stored per metric, so six
            # entries means a third y-axis exists; move its spine clear of the plot.
            if len(axes) == 6:
                axes[item].spines["right"].set_position(("axes", -0.24))
break
return axes
def round_metric(value):
    """Round to two decimals above 1, to three decimals at or below 1,
    and to a whole integer once the rounded value reaches 20."""
    if value > 1:
        value = round(value, 2)
    if value <= 1:
        value = round(value, 3)
    if value >= 20:
        value = int(round(value, 0))
    return value
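Restated standalone, the rounding rules behave like this:

```python
# Standalone copy of round_metric to show its three rounding rules:
# 2 decimals above 1, 3 decimals at or below 1, whole integers from 20 up.
def round_metric(value):
    if value > 1:
        value = round(value, 2)
    if value <= 1:
        value = round(value, 3)
    if value >= 20:
        value = int(round(value, 0))
    return value

examples = [round_metric(0.12345), round_metric(5.678), round_metric(25.4)]
# -> [0.123, 5.68, 25]
```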
def round_metric_series(dataset):
data = [round_metric(x) for x in dataset]
return data
def raw_stddev_to_percent(values, stddev_series):
result = []
for x, y in zip(values, stddev_series):
# pprint.pprint(f"{x} - {y}")
try:
percent = round((int(y) / int(x)) * 100, 0)
except ZeroDivisionError:
percent = 0
result.append(percent)
# pprint.pprint(result)
return result
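Restated standalone, the percent conversion (including the zero guard) works like:

```python
# Mirrors raw_stddev_to_percent: express each standard deviation as a
# percentage of its value, treating a zero value as 0% rather than raising.
def stddev_to_percent(values, stddev_series):
    result = []
    for x, y in zip(values, stddev_series):
        try:
            result.append(round((int(y) / int(x)) * 100, 0))
        except ZeroDivisionError:
            result.append(0)
    return result

percents = stddev_to_percent([100, 0], [12, 5])  # -> [12.0, 0]
```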
def process_dataset(settings, dataset):
datatypes = []
new_list = []
new_structure = {"datatypes": None, "dataset": None}
final_list = []
scale_factors = []
    # This first loop unpacks each data series and scales the x-axis.
for item in dataset:
for rw in settings["filter"]:
if len(item["data"][rw]) > 0:
datatypes.append(item["type"])
# pprint.pprint(item['data'][rw])
unpacked = list(zip(*item["data"][rw]))
# pprint.pprint(unpacked)
item[rw] = {}
item[rw]["xvalues"] = unpacked[0]
item[rw]["yvalues"] = unpacked[1]
scaled_xaxis = scale_xaxis_time(item[rw]["xvalues"])
item["xlabel"] = scaled_xaxis["format"]
item[rw]["xvalues"] = scaled_xaxis["data"]
if "lat" in item["type"]:
scale_factors.append(get_scale_factor_lat(item[rw]["yvalues"]))
if "bw" in item["type"]:
scale_factors.append(get_scale_factor_bw(item[rw]["yvalues"]))
item.pop("data")
new_list.append(item)
    # This second loop ensures that all data is scaled by the same factor.
if len(scale_factors) > 0:
scale_factor = get_largest_scale_factor(scale_factors)
for item in new_list:
for rw in settings["filter"]:
if rw in item.keys():
if "lat" in item["type"] or "bw" in item["type"]:
scaled_data = scale_yaxis(item[rw]["yvalues"], scale_factor)
item[rw]["ylabel"] = scaled_data["format"]
item[rw]["yvalues"] = scaled_data["data"]
else:
item[rw]["ylabel"] = lookupTable(item["type"])["ylabel"]
                maximum = np.max(item[rw]["yvalues"])
                mean = np.mean(item[rw]["yvalues"])
                stdv = round((np.std(item[rw]["yvalues"]) / mean) * 100, 2)
                percentile = round(
                    np.percentile(item[rw]["yvalues"], settings["percentile"]), 2
                )
                # Reuse round_metric instead of duplicating its rounding rules
                # inline, and avoid shadowing the built-in max().
                mean = round_metric(mean)
                percentile = round_metric(percentile)
                item[rw]["max"] = maximum
                item[rw]["mean"] = mean
                item[rw]["stdv"] = stdv
                item[rw]["percentile"] = percentile
final_list.append(item)
new_structure["datatypes"] = list(set(datatypes))
new_structure["dataset"] = final_list
return new_structure
def get_highest_maximum(settings, data):
highest_max = {
"read": {"iops": 0, "lat": 0, "bw": 0},
"write": {"iops": 0, "lat": 0, "bw": 0},
}
for item in data["dataset"]:
for rw in settings["filter"]:
if rw in item.keys():
if item[rw]["max"] > highest_max[rw][item["type"]]:
highest_max[rw][item["type"]] = item[rw]["max"]
return highest_max
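A self-contained sketch of the scan above on hypothetical data (field names mirror the processed dataset structure):

```python
# Walk each dataset entry and keep the largest per-metric "max" value
# seen for each read/write direction.
highest_max = {"read": {"iops": 0}, "write": {"iops": 0}}
dataset = [
    {"type": "iops", "read": {"max": 1200}, "write": {"max": 900}},
    {"type": "iops", "read": {"max": 1500}},
]
for item in dataset:
    for rw in ("read", "write"):
        if rw in item and item[rw]["max"] > highest_max[rw][item["type"]]:
            highest_max[rw][item["type"]] = item[rw]["max"]
# highest_max -> {"read": {"iops": 1500}, "write": {"iops": 900}}
```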
def create_title_and_sub(
settings, plt, bs=None, skip_keys=[], sub_x_offset=0, sub_y_offset=0
):
#
# Offset title/subtitle if there is a 3rd y-axis
#
number_of_types = len(settings["type"])
y_offset = 1.02
if number_of_types <= 2:
x_offset = 0.5
else:
x_offset = 0.425
if sub_x_offset > 0:
x_offset = sub_x_offset
if sub_y_offset > 0:
y_offset = sub_y_offset
#
    # Note: plt.suptitle sets the main title and plt.title sets the subtitle.
#
plt.suptitle(settings["title"])
subtitle = None
sub_title_items = {
"rw": settings["rw"],
"iodepth": str(settings["iodepth"]).strip("[]"),
"numjobs": str(settings["numjobs"]).strip("[]"),
"type": str(settings["type"]).strip("[]").replace("'", ""),
"filter": str(settings["filter"]).strip("[]").replace("'", ""),
}
if bs:
sub_title_items.update({"bs": bs})
if settings["subtitle"]:
subtitle = settings["subtitle"]
else:
temporary_string = "|"
for key in sub_title_items.keys():
if key not in skip_keys:
if len(sub_title_items[key]) > 0:
temporary_string += f" {key} {sub_title_items[key]} |"
subtitle = temporary_string
plt.title(
subtitle, fontsize=8, horizontalalignment="center", x=x_offset, y=y_offset
)
class bcolors:
HEADER = "\033[95m"
OKBLUE = "\033[94m"
OKGREEN = "\033[92m"
WARNING = "\033[93m"
FAIL = "\033[91m"
ENDC = "\033[0m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
def plot_source(settings, plt, ax1, vertical=-0.08):
if settings["source"]:
calculation = len(settings["source"]) / 130
horizontal = 1 - calculation
align = "left"
plot_text_line(
settings["source"], plt, ax1, horizontal, align, vertical, fontsize=5
)
def plot_fio_version(settings, value, plt, ax1, vertical=-0.08):
if not settings["disable_fio_version"]:
horizontal = 0
align = "left"
if value:
text = f"Fio version: {value}\nGraph generated with fio-plot"
else:
text = "Data generated by Fio, graph generated with Fio-plot"
plot_text_line(text, plt, ax1, horizontal, align, vertical, fontsize=5)
def plot_text_line(value, plt, ax1, horizontal, align, vertical, fontsize=7):
plt.text(
horizontal,
vertical,
str(value),
ha=align,
va="top",
transform=ax1.transAxes,
fontsize=fontsize,
)
def random_char(y):
return "".join(random.choice(string.ascii_letters) for x in range(y))
def save_png(settings, plt, fig):
now = datetime.now().strftime("%Y-%m-%d_%H%M%S")
title = settings["title"].replace(" ", "-")
title = title.replace("/", "-")
plt.tight_layout(rect=[0, 0, 1, 1])
    suffix = random_char(2)  # renamed from "random" to avoid shadowing the module
    savename = (
        f"{title}_{now}_{suffix}.png"
        if settings["output_filename"] is None
        else settings["output_filename"]
    )
print(f"\n Saving to file {savename}\n")
fig.savefig(savename, dpi=settings["dpi"])
write_png_metadata(savename, settings)
def write_png_metadata(filename, settings):
targetImage = PngImageFile(filename)
metadata = PngInfo()
    for k, v in settings.items():
        if isinstance(v, list):
            v = " ".join(str(item) for item in v)
        if isinstance(v, bool):
            v = str(v)
        if v is not None:
            metadata.add_text(k, str(v))
targetImage.save(filename, pnginfo=metadata)
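PNG text chunks only store strings, so every settings value has to be coerced first. A standalone sketch of the same coercion applied to a hypothetical settings dict (the function name is illustrative, not part of the original):

```python
# Mirrors the coercion in write_png_metadata: flatten lists to a
# space-separated string, stringify booleans, and drop None values.
def coerce_for_metadata(settings):
    out = {}
    for k, v in settings.items():
        if isinstance(v, list):
            v = " ".join(str(item) for item in v)
        if isinstance(v, bool):
            v = str(v)
        if v is not None:
            out[k] = str(v)
    return out

meta = coerce_for_metadata({"iodepth": [1, 8], "trim": True, "source": None, "dpi": 200})
# -> {"iodepth": "1 8", "trim": "True", "dpi": "200"}
```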
# All rights reserved.
import re
from front import Constants, target_image_types, activity_alert_types
from front.lib import gametime, urls, db, utils
from front.models import species
from front.backend import notifications
from front.tests import base
from front.tests.base import points, rects, SIX_HOURS, DELAYED_SPECIES_KEY
class TestActivityAlerts(base.TestCase):
def setUp(self):
super(TestActivityAlerts, self).setUp()
        # The player will be notified about activity resulting from completing the simulator,
        # so do not force completion; this more accurately models the real player experience.
self.create_validated_user('testuser@example.com', 'pw')
self.user = self.get_logged_in_user()
def test_activity_alerts_medium_frequency(self):
self._run_example_activity_alerts_for_frequency(activity_alert_types.MEDIUM)
def test_activity_alerts_long_frequency(self):
self._run_example_activity_alerts_for_frequency(activity_alert_types.LONG)
def test_no_activity_alerts_past_inactive_threshold(self):
        # Advance gametime.now() to a point beyond where all the initial gamestate creation is done.
self.advance_now(seconds=self.user.activity_alert_frequency_window)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Create on target to be sure there is reportable activity.
self.create_target_and_move(arrival_delta=SIX_HOURS, **points.FIRST_MOVE)
# Now advance to the next notification window and verify there is a processed row and notification sent.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=1, processed=1)
self.assertEqual(len(user_activities[0].unviewed_targets), 1)
# Load the gamestate to update the last_accessed time to now.
self.get_gamestate()
# Now advance to the next notification window and verify there is a processed row and no notifications.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Load the gamestate to update the last_accessed time to now.
self.get_gamestate()
        # Move just before the activity alerts inactive threshold and make sure there is still a processed row.
self.advance_now(seconds=Constants.ACTIVITY_ALERT_INACTIVE_THRESHOLD - 10)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Finally, move past the activity alerts inactive threshold and verify there are no rows processed.
self.advance_now(seconds=10 + self.user.activity_alert_frequency_window)
user_activities = self._send_activity_alert(notified=0, processed=0)
        # Do one final big jump forward in time way past the inactive threshold to be extra sure no rows are processed.
self.advance_now(seconds=Constants.ACTIVITY_ALERT_INACTIVE_THRESHOLD + 10)
user_activities = self._send_activity_alert(notified=0, processed=0)
# Load the gamestate to update the last_accessed time to now, emulating the user coming back to the game.
self.get_gamestate()
        # Create one target to be sure there is reportable activity.
self.create_target_and_move(arrival_delta=SIX_HOURS, **points.SECOND_MOVE)
# Now advance to the next notification window and verify there is a processed row and notification sent.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=1, processed=1)
self.assertEqual(len(user_activities[0].unviewed_targets), 1)
def test_activity_alerts_multiple_targets(self):
        # Advance gametime.now() to a point beyond where all the initial gamestate creation is done.
self.advance_now(seconds=self.user.activity_alert_frequency_window)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Create two targets and move the game to them, for a total of 12 hours of advancement.
chip_result = self.create_target_and_move(arrival_delta=SIX_HOURS, **points.FIRST_MOVE)
chip = self.last_chip_for_path(['user', 'rovers', '*', 'targets', '*'], chip_result)
arrival_time_date = self.user.after_epoch_as_datetime(chip['value']['arrival_time'])
chip_result = self.create_target_and_move(arrival_delta=SIX_HOURS, **points.SECOND_MOVE)
chip = self.last_chip_for_path(['user', 'rovers', '*', 'targets', '*'], chip_result)
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=1, processed=1)
# The earliest element should be the oldest target.
self.assert_equal_seconds(user_activities[0].earliest, arrival_time_date)
self.assertEqual(len(user_activities[0].unread_messages), 0)
self.assertEqual(len(user_activities[0].unviewed_targets), 2)
def test_activity_alerts_delayed_species(self):
        # Advance gametime.now() to a point beyond where all the initial gamestate creation is done.
self.advance_now(seconds=self.user.activity_alert_frequency_window)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Create a target and arrive at it. This target's arrival time becomes the 'anchor' for the
# notification window (the oldest alerted activity).
chip_result = self.create_target_and_move(arrival_delta=SIX_HOURS, **points.FIRST_MOVE)
chip = self.last_chip_for_path(['user', 'rovers', '*', 'targets', '*'], chip_result)
arrival_time_date = self.user.after_epoch_as_datetime(chip['value']['arrival_time'])
check_species_url = str(chip['value']['urls']['check_species'])
# Verify the species being detected has delayed availability.
species_id = species.get_id_from_key(DELAYED_SPECIES_KEY)
species_delay = utils.in_seconds(minutes=species.delayed_minutes_for_id(species_id))
self.assertTrue(species_delay > 0)
        # Advance time to the full notification window minus half of the species'
        # delayed-availability time. This means that the species data will still
        # be delayed within the notification window.
self.advance_now(seconds=self.user.activity_alert_frequency_window - (species_delay / 2))
# Detect a species on a new target known to have delayed species data.
result = self.check_species(check_species_url, [rects.for_species_key(DELAYED_SPECIES_KEY)])
# Verify the species appears to have delayed data.
chip = self.last_chip_value_for_path(['user', 'species', '*'], result)
self.assertEqual(chip['species_id'], species_id)
self.assertTrue("PENDING" in chip['icon'])
self.assertTrue(chip['available_at'] > chip['detected_at'])
self.assertEqual(chip['available_at'] - chip['detected_at'], species_delay)
        # The delayed species should not show up in the activity immediately as it has delayed data.
# Advance the rest of the delayed data window to trigger the activity window which is still anchored by
# the original target creation.
self.advance_now(seconds=(species_delay / 2) + 10)
user_activities = self._send_activity_alert(notified=1, processed=1)
# The earliest element should be the target.
self.assert_equal_seconds(user_activities[0].earliest, arrival_time_date)
self.assertEqual(len(user_activities[0].unviewed_targets), 1)
self.assertEqual(len(user_activities[0].unviewed_species), 0)
# Now advance to the next notification window and the delayed species should now be notifiable activity.
self.advance_now(seconds=self.user.activity_alert_frequency_window + (species_delay / 2))
user_activities = self._send_activity_alert(notified=1, processed=1)
self.assertEqual(len(user_activities[0].unviewed_targets), 0)
self.assertEqual(len(user_activities[0].unviewed_species), 1)
# The species data should now be fully available.
detected_species = user_activities[0].unviewed_species[0]
# The earliest element should be the species (when it was detected, not available).
self.assert_equal_seconds(user_activities[0].earliest, detected_species.detected_at_date)
# And that species should be the same species that was detected.
self.assertFalse(detected_species.is_currently_delayed())
self.assertEqual(detected_species.species_id, species_id)
self.assertTrue("PENDING" not in detected_species.icon)
def test_activity_alerts_delayed_species_outside_window(self):
        ## Detect a delayed species on its own, with no other activity to anchor it within the notification window,
        ## which means it should be sent out normally.
        # Advance gametime.now() to a point beyond where all the initial gamestate creation is done.
self.advance_now(seconds=self.user.activity_alert_frequency_window)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Create a target and arrive at it.
chip_result = self.create_target_and_move(arrival_delta=SIX_HOURS, **points.FIRST_MOVE)
chip = self.last_chip_for_path(['user', 'rovers', '*', 'targets', '*'], chip_result)
check_species_url = str(chip['value']['urls']['check_species'])
# Flush out the target notification.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=1, processed=1)
# Detect a species on a new target known to have delayed species data.
self.check_species(check_species_url, [rects.for_species_key(DELAYED_SPECIES_KEY)])
species_id = species.get_id_from_key(DELAYED_SPECIES_KEY)
# Now advance to the next notification window and the delayed species should now be notifiable activity.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=1, processed=1)
self.assertEqual(len(user_activities[0].unviewed_targets), 0)
self.assertEqual(len(user_activities[0].unviewed_species), 1)
# The species data should be fully available.
detected_species = user_activities[0].unviewed_species[0]
# The earliest element should be the species (when it was detected, not available).
self.assert_equal_seconds(user_activities[0].earliest, detected_species.detected_at_date)
# And that species should be the same species that was detected.
self.assertFalse(detected_species.is_currently_delayed())
self.assertEqual(detected_species.species_id, species_id)
self.assertTrue("PENDING" not in detected_species.icon)
def test_activity_alerts_with_templating(self):
        # Advance gametime.now() to a point beyond where all the initial gamestate creation is done.
self.advance_now(seconds=self.user.activity_alert_frequency_window)
# Add a target and render it but do not advance the game.
self.create_target(arrival_delta=SIX_HOURS)
self.render_next_target()
# Advance 1 hour and create a message.
self.advance_now(hours=1)
message = self.send_mock_message_now(self.user, 'MSG_TEST_1')
# Now move past the activity_alert_frequency_window and target arrival time and there
# should be reportable activity.
self.advance_now(seconds=self.user.activity_alert_frequency_window + SIX_HOURS + 10)
# The target should be arrived at now so thumbnail URL should be in gamestate.
notify_target = self.get_most_recent_target_from_gamestate()
thumbnail_url = notify_target['images'][target_image_types.THUMB]
# Run the notifications tool with the real email sending callback, which in unit tests will
# capture all sent emails, but the real templating will still occur.
with db.commit_or_rollback(self.get_ctx()) as ctx:
with db.conn(ctx) as ctx:
processed = notifications.send_activity_alert_at(ctx, gametime.now(),
notifications.send_activity_alert_email_callback)
self.assertEqual(processed, 1)
self.assertEqual(len(self.get_sent_emails()), 1)
email_body = self.get_sent_emails()[0].body_html
self.assertTrue("Recent Extrasolar Activity" in self.get_sent_emails()[0].subject)
self.assertTrue("Hello, Testfirst" in email_body)
# Verify the message being notified on is in the email.
self.assertTrue(message.subject in email_body)
# Verify the thumbnail URL for the target being notified on is in the email.
self.assertTrue(thumbnail_url in email_body)
# Verify that running the process again at the next window sends no emails.
self.clear_sent_emails()
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
with db.commit_or_rollback(self.get_ctx()) as ctx:
with db.conn(ctx) as ctx:
processed = notifications.send_activity_alert_at(ctx, gametime.now(),
notifications.send_activity_alert_email_callback)
self.assertEqual(processed, 1)
self.assertEqual(len(self.get_sent_emails()), 0)
def test_activity_alerts_all_activity_types(self):
# Advance to a point relatively deep into the story where we know achievements, species, and missions
# have been added. Most importantly we need to advance to a point where we have a mission
# that has parts to make sure those don't appear in the notification email.
self.logout_user()
self.replay_game('testuser@example.com', 'pw', to_point='OUTSIDE_AUDIO_MYSTERY01_ZONE')
self.login_user('testuser@example.com', 'pw')
# Load the gamestate to update the last_accessed time to now.
self.get_gamestate()
# Send one message too since all messages are automatically marked viewed by the
# replay_game system.
message = self.send_mock_message_now(self.get_logged_in_user(), 'MSG_TEST_1')
# Run the notifications tool with the real email sending callback, which in unit tests will
# capture all sent emails, but the real templating will still occur.
with db.commit_or_rollback(self.get_ctx()) as ctx:
with db.conn(ctx) as ctx:
processed = notifications.send_activity_alert_at(ctx, gametime.now(),
notifications.send_activity_alert_email_callback)
# Since this is a brand new user which just had a large number of moves played back
# and has never had a notification email sent, we expect to see quite a bit of data to be
# in the notification email. The purpose of this test is to make sure the template
# is parsing/displaying all of the data types correctly.
# Verify only one email was sent.
self.assertEqual(processed, 1)
self.assertEqual(len(self.get_sent_emails()), 1)
email_body = self.get_sent_emails()[0].body_html
# Verify that expected parts of the gamestate that were added while replaying the game
# are all included in the notification email.
user = self.get_logged_in_user()
self.assertTrue("Recent Extrasolar Activity" in self.get_sent_emails()[0].subject)
self.assertTrue("Hello, " + user.first_name in email_body)
# Verify the message being notified on is in the email.
self.assertTrue(message.subject in email_body)
# The number of user created targets in the gamestate should equal the number
# of thumbnail URLs listed in the notification email.
target_count = 0
for r in user.rovers.itervalues():
for t in r.targets.pictures():
if t.was_user_created():
target_count += 1
self.assertTrue(target_count > 0, "Must have targets in the gamestate to test.")
# Currently all fake rendered targets have the same thumbnail URL so just pull it
# from the last target.
        found = re.findall(re.escape(t.url_image_thumbnail), email_body)
self.assertEqual(len(found), target_count)
# Make sure that we have at least one child mission in the gamestate to be tested
# that it is not in the email.
at_least_one_child = False
self.assertTrue(len(user.missions) > 0, "Must have missions in the gamestate to test.")
for m in user.missions.itervalues():
# The simulator mission is started/created when the user is created so it will never
# be sent in a notification email.
if m.mission_definition.startswith('MIS_SIMULATOR'):
self.assertTrue(m.title not in email_body)
# If a mission is a child mission, it should not be in the email.
elif not m.is_root_mission():
at_least_one_child = True
self.assertTrue(m.title not in email_body)
else:
self.assertTrue(m.title in email_body)
self.assertTrue(at_least_one_child, "Must test to see if child missions are excluded from the email.")
# All species are detected after the user has been created so they should
# all be in the email.
self.assertTrue(len(user.species) > 0, "Must have species in the gamestate to test.")
for s in user.species.itervalues():
self.assertTrue(s.name in email_body)
# All achieved achievements other than the created user achievement are achieved after the user has
# been created so they should all be in the email.
self.assertTrue(len(user.achievements.achieved()) > 0, "Must have achievements in the gamestate to test.")
for a in user.achievements.achieved():
if a.achievement_key == 'ACH_GAME_CREATE_USER':
self.assertTrue(a.title not in email_body)
else:
self.assertTrue(a.title in email_body)
    def test_unsubscribe_activity_alerts(self):
        # Advance gametime.now() to a point beyond where all the initial gamestate creation is done.
self.advance_now(seconds=self.user.activity_alert_frequency_window)
# Create a message.
self.send_mock_message_now(self.user, 'MSG_TEST_1')
# Now move past the activity_alert_frequency_window and there should be reportable activity.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
# Run the notifications tool with the real email sending callback, which in unit tests will
# capture all sent emails, but the real templating will still occur.
with db.commit_or_rollback(self.get_ctx()) as ctx:
with db.conn(ctx) as ctx:
processed = notifications.send_activity_alert_at(ctx, gametime.now(),
notifications.send_activity_alert_email_callback)
self.assertEqual(processed, 1)
self.assertEqual(len(self.get_sent_emails()), 1)
email_body = self.get_sent_emails()[0].body_html
        # Extract the unsubscribe link from the email body.
unsubscribe_url = str(re.search(r'(%s)' % self.user.url_unsubscribe(), email_body).group(1))
# Following this link should work without a valid authentication cookie.
self.logout_user()
response = self.app.get(unsubscribe_url)
self.assertTrue("You have been unsubscribed" in response)
# Send another message.
self.send_mock_message_now(self.user, 'MSG_TEST_2')
# Verify that running the process again at the next window sends no emails.
self.clear_sent_emails()
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
with db.commit_or_rollback(self.get_ctx()) as ctx:
with db.conn(ctx) as ctx:
processed = notifications.send_activity_alert_at(ctx, gametime.now(),
notifications.send_activity_alert_email_callback)
self.assertEqual(processed, 0)
self.assertEqual(len(self.get_sent_emails()), 0)
def _run_example_activity_alerts_for_frequency(self, frequency):
# Configure this test user to have the given notifications frequency.
with db.commit_or_rollback(self.get_ctx()) as ctx:
with db.conn(ctx) as ctx:
user = self.get_logged_in_user(ctx=ctx)
user.set_activity_alert_frequency(frequency)
# Advance time to a point beyond the window size. There should be a row to process
# but no activity as the user has done no reportable activity and digest_window_start
# should have been initialized to a few minutes into the future to hide all user
# creation activity from notification.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Advance just shy of the window and verify there is no row to process and no activity.
self.advance_now(seconds=self.user.activity_alert_frequency_window - 10)
user_activities = self._send_activity_alert(notified=0, processed=0)
# Create a message to generate activity.
message = self.send_mock_message_now(self.user, 'MSG_TEST_1')
# There is still no reportable activity and no row should have been processed
# as time has not advanced past the activity_alert_frequency_window.
user_activities = self._send_activity_alert(notified=0, processed=0)
# Now move past the activity_alert_frequency_window and there should be reportable activity.
self.advance_now(seconds=self.user.activity_alert_frequency_window + 10)
user_activities = self._send_activity_alert(notified=1, processed=1)
self.assert_equal_seconds(user_activities[0].earliest, message.sent_at_date)
self.assertEqual(len(user_activities[0].unread_messages), 1)
self.assertEqual(len(user_activities[0].unviewed_targets), 0)
# Jump forward to just before the next window. There should be no row to process and no activity.
self.advance_now(seconds=self.user.activity_alert_frequency_window - 10)
user_activities = self._send_activity_alert(notified=0, processed=0)
# And then move past the max window size. There should be a user to process
# but no activity to notify on.
self.advance_now(seconds=11)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Jump time forward far past the max window size. This simulates the digest system being
# broken for a number of hours.
self.advance_now(seconds=self.user.activity_alert_frequency_window + utils.in_seconds(hours=4))
# Create a message.
message = self.send_mock_message_now(self.user, 'MSG_TEST_2')
# There should be activity to process, but nothing to notify on. This simulates
# the digest system working again, and the digest_window_start time being updated
# for this user to the earliest activity, namely the just sent message, but no
# digest being sent.
self.advance_now(seconds=self.user.activity_alert_frequency_window - 10)
user_activities = self._send_activity_alert(notified=0, processed=1)
# Now move past the activity_alert_frequency_window and there should be reportable activity.
self.advance_now(seconds=11)
user_activities = self._send_activity_alert(notified=1, processed=1)
# Create a message.
message = self.send_mock_message_now(self.user, 'MSG_TEST_3')
# There should be no activity to report just before the window.
self.advance_now(seconds=self.user.activity_alert_frequency_window - 10)
user_activities = self._send_activity_alert(notified=0, processed=0)
# Mark the message as read.
self.json_get(urls.message_content(message.message_id))
# Now move past the activity_alert_frequency_window and there should be no reportable
# activity as the message was marked read.
self.advance_now(seconds=11)
user_activities = self._send_activity_alert(notified=0, processed=1)
def _send_activity_alert(self, at_time=None, notified=0, processed=0):
if at_time is None:
at_time = gametime.now()
with db.commit_or_rollback(self.get_ctx()) as ctx:
with db.conn(ctx) as ctx:
capture = _CaptureUserActivity()
count = notifications.send_activity_alert_at(ctx, at_time, capture)
self.assertEqual(count, processed, "Unexpected number of users processed for activity.")
self.assertEqual(len(capture.user_activities), notified, "Unexpected number of users notified for activity.")
return capture.user_activities
class _CaptureUserActivity(object):
def __init__(self):
self.user_activities = []
def __call__(self, ctx, user, user_activity, at_time):
self.user_activities.append(user_activity)
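The capture pattern in miniature: any callable can stand in for the real email callback, letting a test assert on exactly what it was handed. The names below are hypothetical:

```python
# A callable object recorded as a callback; each invocation appends its
# arguments so the test can inspect them afterwards.
class Capture:
    def __init__(self):
        self.calls = []

    def __call__(self, *args):
        self.calls.append(args)

def notify_all(users, callback):
    for user in users:
        callback(user, "activity")

capture = Capture()
notify_all(["alice", "bob"], capture)
# capture.calls -> [("alice", "activity"), ("bob", "activity")]
```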
import torch
import numpy as np
import torch.nn.functional as F
from torch.autograd import Variable
from malib.utils.typing import Dict, Any, List, Union, DataTransferType
from malib.backend.datapool.offline_dataset_server import Episode
def soft_update(target, source, tau):
"""Perform DDPG soft update (move target params toward source based on weight factor tau).
Reference:
https://github.com/ikostrikov/pytorch-ddpg-naf/blob/master/ddpg.py#L11
:param torch.nn.Module target: Net to copy parameters to
:param torch.nn.Module source: Net whose parameters to copy
:param float tau: Range form 0 to 1, weight factor for update
"""
for target_param, param in zip(target.parameters(), source.parameters()):
target_param.data.copy_(target_param.data * (1.0 - tau) + param.data * tau)
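The update rule itself is framework-agnostic; a NumPy sketch of the same interpolation (the torch version above applies it parameter tensor by parameter tensor):

```python
import numpy as np

# target <- (1 - tau) * target + tau * source, the DDPG soft-update rule.
def soft_update_np(target, source, tau):
    return target * (1.0 - tau) + source * tau

target = np.array([0.0, 0.0])
source = np.array([1.0, 2.0])
updated = soft_update_np(target, source, 0.1)  # -> [0.1, 0.2]
```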
def hard_update(target, source):
"""Copy network parameters from source to target.
Reference:
https://github.com/ikostrikov/pytorch-ddpg-naf/blob/master/ddpg.py#L15
:param torch.nn.Module target: Net to copy parameters to.
:param torch.nn.Module source: Net whose parameters to copy
"""
for target_param, param in zip(target.parameters(), source.parameters()):
target_param.data.copy_(param.data)
def onehot_from_logits(logits, eps=0.0):
"""
Given batch of logits, return one-hot sample using epsilon greedy strategy
(based on given epsilon)
"""
# get best (according to current policy) actions in one-hot form
argmax_acs = (logits == logits.max(-1, keepdim=True)[0]).float()
if eps == 0.0:
return argmax_acs
# get random actions in one-hot form
rand_acs = Variable(
torch.eye(logits.shape[1])[
[np.random.choice(range(logits.shape[1]), size=logits.shape[0])]
],
requires_grad=False,
)
# chooses between best and random actions using epsilon greedy
return torch.stack(
[
argmax_acs[i] if r > eps else rand_acs[i]
for i, r in enumerate(torch.rand(logits.shape[0]))
]
)
def sample_gumbel(shape, eps=1e-20, tens_type=torch.FloatTensor):
"""Sample from Gumbel(0, 1).
Note:
modified for PyTorch from https://github.com/ericjang/gumbel-softmax/blob/master/Categorical%20VAE.ipynb
"""
U = Variable(tens_type(*shape).uniform_(), requires_grad=False)
return -torch.log(-torch.log(U + eps) + eps)
def gumbel_softmax_sample(logits, temperature):
"""Draw a sample from the Gumbel-Softmax distribution.
Note:
modified for PyTorch from https://github.com/ericjang/gumbel-softmax/blob/master/Categorical%20VAE.ipynb
"""
y = logits + sample_gumbel(logits.shape, tens_type=type(logits.data))
return F.softmax(y / temperature, dim=-1)
def gumbel_softmax(logits: DataTransferType, temperature=1.0, hard=False):
"""Sample from the Gumbel-Softmax distribution and optionally discretize.
Note:
modified for PyTorch from https://github.com/ericjang/gumbel-softmax/blob/master/Categorical%20VAE.ipynb
:param DataTransferType logits: Unnormalized log-probs.
:param float temperature: Non-negative scalar.
:param bool hard: If true, take argmax, but differentiate w.r.t. soft sample y
:returns [batch_size, n_class] sample from the Gumbel-Softmax distribution. If hard=True, then the returned sample
will be one-hot, otherwise it will be a probability distribution that sums to 1 across classes
"""
y = gumbel_softmax_sample(logits, temperature)
if hard:
y_hard = onehot_from_logits(y)
y = (y_hard - y).detach() + y
return y
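As a sanity check of the relaxation (NumPy only, so the autograd-dependent straight-through branch is omitted), a sketch of the Gumbel-Softmax sampling path:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_np(logits, temperature=1.0, eps=1e-20):
    # Add Gumbel(0, 1) noise to the logits, then apply a tempered softmax.
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u + eps) + eps)
    y = (logits + g) / temperature
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)

sample = gumbel_softmax_np(np.array([[2.0, 0.5, 0.1]]), temperature=0.5)
# Each row is a valid probability distribution over the classes.
print(round(float(sample.sum()), 6))  # 1.0
```

Lower temperatures push each sampled row toward a one-hot vector, which is what the `hard=True` branch then snaps to exactly.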
def cumulative_td_errors(
start: int, end: int, offset: int, value, td_errors, ratios, gamma: float
):
v = np.zeros_like(value)
assert end - offset > start, (start, end, offset)
for s in range(start, end - offset):
pi_of_c = 1.0
trace_errors = [td_errors[s]]
for t in range(s + 1, s + offset):
pi_of_c *= ratios[t - 1]
trace_errors.append(gamma ** (t - start) * pi_of_c * td_errors[t])
v[s] = value[s] + np.sum(trace_errors)
return v
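A self-contained numerical check of the trace computation (the function is restated so the snippet runs on its own): with `offset=1` the inner loop body never executes, so each target reduces to `value[s] + td_errors[s]` and the final slot stays zero.

```python
import numpy as np

def cumulative_td_errors_ref(start, end, offset, value, td_errors, ratios, gamma):
    # Restated copy of cumulative_td_errors above, for a standalone check.
    v = np.zeros_like(value)
    assert end - offset > start, (start, end, offset)
    for s in range(start, end - offset):
        pi_of_c = 1.0
        trace_errors = [td_errors[s]]
        for t in range(s + 1, s + offset):
            pi_of_c *= ratios[t - 1]
            trace_errors.append(gamma ** (t - start) * pi_of_c * td_errors[t])
        v[s] = value[s] + np.sum(trace_errors)
    return v

value = np.array([1.0, 2.0, 3.0])
td = np.array([0.5, -0.5, 0.1])
ratios = np.ones(3)
v = cumulative_td_errors_ref(0, 3, 1, value, td, ratios, gamma=0.9)
print(v)  # v == [1.5, 1.5, 0.0]
```

Larger offsets accumulate discounted, importance-weighted TD errors from future steps as well.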
def v_trace(
policy: "Policy", batch: Dict[str, Any], ratio_clip: float = 1e3
) -> Dict[str, Any]:
"""Implementation for V-trace (https://arxiv.org/abs/1802.01561)
:param policy: Policy, policy instance
:param batch: Dict[str, Any], batch
:param ratio_clip: float, ratio clipping value
:return: return new batch with V-trace target
"""
# compute importance sampling along the horizon
old_policy_dist = batch[Episode.ACTION_DIST]
old_action_dist = old_policy_dist[batch[Episode.ACTIONS]]
cur_dist = policy.actor()(batch[Episode.CUR_OBS])
cur_action_dist = cur_dist[batch[Episode.ACTIONS]]
# NOTE(ming): we should avoid zero division here
clipped_is_ratio = np.minimum(ratio_clip, cur_action_dist / old_action_dist)
# calculate new state value
state_values = batch[Episode.STATE_VALUE]
rewards = batch[Episode.REWARDS]
dones = batch[Episode.DONES]
# ignore the last one state value?
td_errors = np.zeros_like(rewards)
td_errors[:-1] = (
rewards[:-1] + policy.config["gamma"] * state_values[1:] - state_values[:-1]
)
terminal_state_value = policy.critic()(batch[Episode.NEXT_OBS][-1])
# we support infinite episode mode
td_errors[-1] = (
rewards[-1]
+ policy.config["gamma"] * terminal_state_value * dones[-1]
- state_values[-1]
)
discounted_td_errors = clipped_is_ratio * td_errors
batch[Episode.STATE_VALUE] = cumulative_td_errors(
start=0,
end=len(rewards),
offset=1,
value=state_values,
td_errors=discounted_td_errors,
ratios=clipped_is_ratio,
gamma=policy.config["gamma"],
)
return batch
def non_centered_rmsprop(
gradient: Union[torch.Tensor, DataTransferType],
delta: Union[torch.Tensor, DataTransferType],
alpha: float,
eta: float,
eps: float,
):
"""Implementation of non-centered RMSProp algorithm (# TODO(ming): add reference here)
:param gradient: Union[torch.Tensor, DataTransferType], bootstrapped gradient
:param delta: Union[torch.Tensor, DataTransferType], current update direction
:param alpha: float, moving-average factor
:param eta: float, learning step
:param eps: float, small constant for numerical stability
:return:
"""
gradient = alpha * gradient + (1.0 - alpha) * delta ** 2
delta = -eta * delta / np.sqrt(gradient + eps)
return delta
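A quick scalar check of the update (restated so it runs standalone): starting from a zero running average, the step reduces to `-eta * delta / sqrt((1 - alpha) * delta**2 + eps)`, i.e. roughly `-eta * sign(delta) / sqrt(1 - alpha)`.

```python
import numpy as np

def non_centered_rmsprop_ref(gradient, delta, alpha, eta, eps):
    # Identical arithmetic to non_centered_rmsprop above, on scalars.
    gradient = alpha * gradient + (1.0 - alpha) * delta ** 2
    return -eta * delta / np.sqrt(gradient + eps)

step = non_centered_rmsprop_ref(gradient=0.0, delta=2.0, alpha=0.9, eta=0.1, eps=1e-8)
print(round(float(step), 4))  # -0.3162
```

Note the magnitude depends only weakly on `|delta|` here, which is the usual RMSProp normalization effect.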
class GradientOps:
@staticmethod
def add(source: Dict, delta: Dict):
for k, v in delta.items():
if isinstance(v, Dict):
source[k] = GradientOps.add(source[k], v)
else: # if isinstance(v, DataTransferType):
assert source[k].data.shape == v.shape, (source[k].data.shape, v.shape)
source[k].data = source[k].data + v # v
# else:
# raise errors.UnexpectedType(f"unexpected gradient type: {type(v)}")
return source
@staticmethod
def mean(gradients: List):
if len(gradients) < 1:
return gradients
if isinstance(gradients[0], dict):
keys = list(gradients[0].keys())
res = {}
for k in keys:
res[k] = GradientOps.mean([grad[k] for grad in gradients])
return res
else:
res = np.mean(gradients, axis=0)
return res
@staticmethod
def sum(gradients: List):
"""Sum gradients.
:param List gradients: A list of gradients.
:return:
"""
if len(gradients) < 1:
return gradients
if isinstance(gradients[0], dict):
keys = list(gradients[0].keys())
res = {}
for k in keys:
res[k] = GradientOps.sum([grad[k] for grad in gradients])
return res
else: # if isinstance(gradients[0], DataTransferType):
res = np.sum(gradients, axis=0)
return res
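The recursion in `GradientOps.mean` / `GradientOps.sum` walks nested dicts down to arrays; a minimal standalone mirror (`nested_mean` here is illustrative, not part of the module):

```python
import numpy as np

def nested_mean(gradients):
    # Recursively average a list of (possibly nested) gradient dicts,
    # mirroring GradientOps.mean above.
    if len(gradients) < 1:
        return gradients
    if isinstance(gradients[0], dict):
        return {k: nested_mean([g[k] for g in gradients]) for k in gradients[0]}
    return np.mean(gradients, axis=0)

grads = [{"layer": {"w": np.array([1.0, 3.0])}},
         {"layer": {"w": np.array([3.0, 5.0])}}]
res = nested_mean(grads)
print(res["layer"]["w"])  # [2. 4.]
```

This shape-preserving reduction is what lets per-worker gradient dicts be averaged key-by-key before being applied to a model.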
class OUNoise:
"""https://github.com/songrotek/DDPG/blob/master/ou_noise.py"""
def __init__(self, action_dimension: int, scale=0.1, mu=0, theta=0.15, sigma=0.2):
self.action_dimension = action_dimension
self.scale = scale
self.mu = mu
self.theta = theta
self.sigma = sigma
self.state = np.ones(self.action_dimension) * self.mu
self.reset()
def reset(self):
self.state = np.ones(self.action_dimension) * self.mu
def noise(self):
x = self.state
dx = self.theta * (self.mu - x) + self.sigma * np.random.randn(len(x))
self.state = x + dx
return self.state * self.scale
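A standalone run of the recurrence used by `OUNoise.noise` (restated with NumPy's `Generator`; the seed is arbitrary), showing the process stays centered on `mu` rather than random-walking away:

```python
import numpy as np

rng = np.random.default_rng(42)

mu, theta, sigma = 0.0, 0.15, 0.2
state = np.ones(4) * mu
samples = []
for _ in range(5000):
    # dx = theta * (mu - x) + sigma * N(0, 1), as in OUNoise.noise
    state = state + theta * (mu - state) + sigma * rng.standard_normal(4)
    samples.append(state.copy())

# The long-run mean reverts to mu (here 0) despite the noise.
print(abs(float(np.mean(samples))) < 0.1)  # True
```

The mean reversion (controlled by `theta`) is why OU noise is a common exploration signal for continuous-action DDPG.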
class EPSGreedy:
def __init__(self, action_dimension: int, threshold: float = 0.3):
self._action_dim = action_dimension
self._threshold = threshold
| 33.286245 | 118 | 0.644405 | 1,189 | 8,954 | 4.749369 | 0.243061 | 0.017 | 0.014875 | 0.01275 | 0.319285 | 0.247388 | 0.216929 | 0.216929 | 0.204888 | 0.168231 | 0 | 0.011954 | 0.243243 | 8,954 | 268 | 119 | 33.410448 | 0.821429 | 0.312263 | 0 | 0.18543 | 0 | 0 | 0.003559 | 0 | 0 | 0 | 0 | 0.003731 | 0.013245 | 1 | 0.10596 | false | 0 | 0.039735 | 0 | 0.271523 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
802f90308891c39086116b74bd256d4b9568a575 | 7,746 | py | Python | nuage_tempest_plugin/services/vpnaas/vpnaas_client.py | nuagenetworks/nuage-tempest-plugin | ac1bfb0709c7bbaf04017af3050fb3ed1ad1324a | [
"Apache-1.1"
] | 1 | 2021-01-03T01:47:51.000Z | 2021-01-03T01:47:51.000Z | nuage_tempest_plugin/services/vpnaas/vpnaas_client.py | nuagenetworks/nuage-tempest-plugin | ac1bfb0709c7bbaf04017af3050fb3ed1ad1324a | [
"Apache-1.1"
] | null | null | null | nuage_tempest_plugin/services/vpnaas/vpnaas_client.py | nuagenetworks/nuage-tempest-plugin | ac1bfb0709c7bbaf04017af3050fb3ed1ad1324a | [
"Apache-1.1"
] | 1 | 2020-10-16T12:04:39.000Z | 2020-10-16T12:04:39.000Z | import abc
import json
import six
try:
from urllib.parse import urlencode # py35
except ImportError:
from urllib import urlencode # py27
from tempest.lib.common import rest_client
from tempest.lib import exceptions as lib_exc
from nuage_tempest_plugin.lib.topology import Topology
CONF = Topology.get_conf()
@six.add_metaclass(abc.ABCMeta)
class BaseNeutronResourceClient(rest_client.RestClient):
URI_PREFIX = "v2.0"
def __init__(self, auth_provider, resource, parent=None, path_prefix=None):
self.resource = resource.replace('-', '_')
self.parent = parent + '/%s/' if parent else ''
prefix = self.URI_PREFIX + '/'
if path_prefix:
prefix = prefix + path_prefix + '/'
if resource[-1] == 'y':
self.resource_url = (
'%s%sies' % (prefix, self.parent + resource[:-1])
)
else:
self.resource_url = '%s%ss' % (prefix, self.parent + resource)
self.single_resource_url = self.resource_url + '/%s'
super(BaseNeutronResourceClient, self).__init__(
auth_provider,
CONF.network.catalog_type,
CONF.network.region or CONF.identity.region,
endpoint_type=CONF.network.endpoint_type,
build_interval=CONF.network.build_interval,
build_timeout=CONF.network.build_timeout)
def is_resource_deleted(self, id):
try:
self.show(id)
except lib_exc.NotFound:
return True
return False
def create(self, parent=None, **kwargs):
if parent:
uri = self.resource_url % parent
else:
uri = self.resource_url
resource = kwargs
req_post_data = json.dumps({self.resource: resource})
resp, body = self.post(uri, req_post_data)
body = json.loads(body)
self.expected_success(201, resp.status)
return rest_client.ResponseBody(resp, body)[self.resource]
def list(self, parent=None, **filters):
if parent:
uri = self.resource_url % parent
else:
uri = self.resource_url
if filters:
uri += '?' + urlencode(filters, doseq=1)
resp, body = self.get(uri)
body = json.loads(body)
self.expected_success(200, resp.status)
if self.resource[-1] == 'y':
return rest_client.ResponseBody(
resp, body
)['%sies' % self.resource[:-1]]
else:
return rest_client.ResponseBody(resp, body)['%ss' % self.resource]
def show(self, id, parent=None, fields=None):
if parent:
uri = self.single_resource_url % (parent, id)
else:
uri = self.single_resource_url % id
if fields:
uri += '?' + urlencode(fields, doseq=1)
resp, body = self.get(uri)
body = json.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)[self.resource]
def update(self, id, parent=None, **kwargs):
if parent:
uri = self.single_resource_url % (parent, id)
else:
uri = self.single_resource_url % id
resource = kwargs
req_data = json.dumps({self.resource: resource})
resp, body = self.put(uri, req_data)
body = json.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)[self.resource]
def delete(self, id, parent=None):
if parent:
uri = self.single_resource_url % (parent, id)
else:
uri = self.single_resource_url % id
resp, body = super(BaseNeutronResourceClient, self).delete(uri)
self.expected_success(204, resp.status)
rest_client.ResponseBody(resp, body)
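The URI built in `__init__` pluralizes resource names ending in `'y'` (`ikepolicy` becomes `ikepolicies`). A small extract of that string logic (`build_resource_url` is a standalone helper written here for illustration, not part of the client):

```python
def build_resource_url(resource, path_prefix=None, parent=None):
    # Same string logic as BaseNeutronResourceClient.__init__ (no-parent case).
    parent = parent + '/%s/' if parent else ''
    prefix = 'v2.0/'
    if path_prefix:
        prefix = prefix + path_prefix + '/'
    if resource[-1] == 'y':
        return '%s%sies' % (prefix, parent + resource[:-1])
    return '%s%ss' % (prefix, parent + resource)

print(build_resource_url('ikepolicy', path_prefix='vpn'))   # v2.0/vpn/ikepolicies
print(build_resource_url('vpnservice', path_prefix='vpn'))  # v2.0/vpn/vpnservices
```

This is why a single base class can serve all four VPNaaS clients below: only the resource name and path prefix change.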
class IKEPolicyClient(BaseNeutronResourceClient):
"""CRUD Operations for IKEPolicy """
def __init__(self, auth_provider):
super(IKEPolicyClient, self).__init__(auth_provider, 'ikepolicy',
path_prefix='vpn')
def create_ikepolicy(self, name, **kwargs):
kwargs['name'] = name
return super(IKEPolicyClient, self).create(**kwargs)
def show_ikepolicy(self, id, fields=None):
return super(IKEPolicyClient, self).show(id, fields=fields)
def list_ikepolicy(self, **filters):
return super(IKEPolicyClient, self).list(**filters)
def update_ikepolicy(self, id, **kwargs):
return super(IKEPolicyClient, self).update(id, **kwargs)
def delete_ikepolicy(self, id):
super(IKEPolicyClient, self).delete(id)
class IPSecPolicyClient(BaseNeutronResourceClient):
"""CRUD Operations for IPSecPolicy """
def __init__(self, auth_provider):
super(IPSecPolicyClient, self).__init__(auth_provider, 'ipsecpolicy',
path_prefix='vpn')
def create_ipsecpolicy(self, name, **kwargs):
kwargs['name'] = name
return super(IPSecPolicyClient, self).create(**kwargs)
def show_ipsecpolicy(self, id, fields=None):
return super(IPSecPolicyClient, self).show(id, fields=fields)
def list_ipsecpolicy(self, **filters):
return super(IPSecPolicyClient, self).list(**filters)
def update_ipsecpolicy(self, id, **kwargs):
return super(IPSecPolicyClient, self).update(id, **kwargs)
def delete_ipsecpolicy(self, id):
super(IPSecPolicyClient, self).delete(id)
class VPNServiceClient(BaseNeutronResourceClient):
"""CRUD Operations for VPNService """
def __init__(self, auth_provider):
super(VPNServiceClient, self).__init__(auth_provider, 'vpnservice',
path_prefix='vpn')
def create_vpnservice(self, router_id, subnet_id, **kwargs):
kwargs['router_id'] = router_id
kwargs['subnet_id'] = subnet_id
return super(VPNServiceClient, self).create(**kwargs)
def show_vpnservice(self, id, fields=None):
return super(VPNServiceClient, self).show(id, fields=fields)
def list_vpnservice(self, **filters):
return super(VPNServiceClient, self).list(**filters)
def update_vpnservice(self, id, **kwargs):
return super(VPNServiceClient, self).update(id, **kwargs)
def delete_vpnservice(self, id):
super(VPNServiceClient, self).delete(id)
class IPSecSiteConnectionClient(BaseNeutronResourceClient):
"""CRUD Operations for IPSecSiteConnection """
def __init__(self, auth_provider):
super(IPSecSiteConnectionClient, self).__init__(
auth_provider, 'ipsec-site-connection', path_prefix='vpn')
def create_ipsecsiteconnection(self, vpnservice_id, ikepolicy_id,
ipsecpolicy_id, peer_address, peer_id,
peer_cidrs, psk, **kwargs):
kwargs['vpnservice_id'] = vpnservice_id
kwargs['ikepolicy_id'] = ikepolicy_id
kwargs['ipsecpolicy_id'] = ipsecpolicy_id
kwargs['peer_address'] = peer_address
kwargs['peer_id'] = peer_id
kwargs['peer_cidrs'] = peer_cidrs
kwargs['psk'] = psk
return super(IPSecSiteConnectionClient, self).create(**kwargs)
def show_ipsecsiteconnection(self, id, fields=None):
return super(IPSecSiteConnectionClient, self).show(id, fields=fields)
def list_ipsecsiteconnection(self, **filters):
return super(IPSecSiteConnectionClient, self).list(**filters)
def update_ipsecsiteconnection(self, id, **kwargs):
return super(IPSecSiteConnectionClient, self).update(id, **kwargs)
def delete_ipsecsiteconnection(self, id):
super(IPSecSiteConnectionClient, self).delete(id)
| 35.209091 | 79 | 0.63323 | 843 | 7,746 | 5.629893 | 0.141163 | 0.040455 | 0.022124 | 0.030973 | 0.387906 | 0.299199 | 0.195533 | 0.188791 | 0.163085 | 0.144121 | 0 | 0.004689 | 0.256649 | 7,746 | 219 | 80 | 35.369863 | 0.819555 | 0.018978 | 0 | 0.283133 | 0 | 0 | 0.026264 | 0.002772 | 0 | 0 | 0 | 0 | 0 | 1 | 0.186747 | false | 0 | 0.054217 | 0.072289 | 0.415663 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
803202984fca2cb4249027461cb40adaa874e2de | 1,402 | py | Python | plugins/default.py | FredG71/ircbot | 148131b2156116ceb071d785ade4a9ca6e0f9ff8 | [
"MIT"
] | null | null | null | plugins/default.py | FredG71/ircbot | 148131b2156116ceb071d785ade4a9ca6e0f9ff8 | [
"MIT"
] | null | null | null | plugins/default.py | FredG71/ircbot | 148131b2156116ceb071d785ade4a9ca6e0f9ff8 | [
"MIT"
] | null | null | null | import random
import re
import ircbot.plugin
class DefaultPlugin(ircbot.plugin.Plugin):
def __init__(self, bot, channel):
super().__init__(bot, channel)
self.insults = (
(re.compile(r'.*fuck(\s+you)\s*,?\s*'+self.bot.nick+'.*', re.IGNORECASE),
'fuck you too {nick}'),
(re.compile(r'.*'+self.bot.nick+'[,:]?\s+fuck\s+you.*', re.IGNORECASE),
'fuck you too {nick}'),
)
@ircbot.plugin.command('mumble')
def mumble(self, msg):
mumble_cfg = self.bot.config.get('mumble')
if not mumble_cfg:
return None
retstr = 'Mumble (http://mumble.info) - address: {address} - port: {port}'
if mumble_cfg.get('password'):
retstr += ' - password: {password}'
return retstr.format(**mumble_cfg)
@ircbot.plugin.reply()
def tableflip(self, msg):
if '(╯°□°)╯︵ ┻━┻' in msg.message:
return '┬─┬ ノ( ゜-゜ノ)'
@ircbot.plugin.reply()
def return_insults(self, msg):
for expr, reply in self.insults:
if expr.match(msg.message):
return reply.format(nick=msg.user.nick)
no_work = re.compile(r".*(__)?bot(__)?\s+(no|not|doesn.?t|does not)\s+work.*", re.IGNORECASE)
@ircbot.plugin.reply()
def bot_always_works(self, msg):
if self.no_work.match(msg.message):
return 'I always work'
@ircbot.plugin.command('coinflip')
def coinflip(self, cmd):
if not cmd.user.is_admin:
return
value = random.randint(0, 1)
if value == 1:
return 'Heads!'
return 'Tails!'
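The `no_work` pattern used by `bot_always_works` above is easy to exercise in isolation:

```python
import re

# The same pattern as DefaultPlugin.no_work.
no_work = re.compile(r".*(__)?bot(__)?\s+(no|not|doesn.?t|does not)\s+work.*",
                     re.IGNORECASE)

print(bool(no_work.match("the bot doesn't work again")))  # True
print(bool(no_work.match("the bot works fine")))          # False
```

The `doesn.?t` alternative deliberately tolerates missing or non-ASCII apostrophes in "doesn't".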
| 25.035714 | 94 | 0.649786 | 209 | 1,402 | 4.320574 | 0.339713 | 0.093023 | 0.033223 | 0.066445 | 0.057586 | 0.057586 | 0 | 0 | 0 | 0 | 0 | 0.002525 | 0.152639 | 1,402 | 55 | 95 | 25.490909 | 0.745791 | 0 | 0 | 0.116279 | 0 | 0.023256 | 0.21398 | 0.043509 | 0 | 0 | 0 | 0 | 0 | 1 | 0.139535 | false | 0.046512 | 0.069767 | 0 | 0.44186 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
80320bae1d99f90d4823b9ae7806c404113080c3 | 1,598 | py | Python | python/load_generator/load_generator.py | jared-ong/data-projects | 21ceccacb8e408ca45fe95c1c4d311f48e8f7708 | [
"MIT"
] | null | null | null | python/load_generator/load_generator.py | jared-ong/data-projects | 21ceccacb8e408ca45fe95c1c4d311f48e8f7708 | [
"MIT"
] | null | null | null | python/load_generator/load_generator.py | jared-ong/data-projects | 21ceccacb8e408ca45fe95c1c4d311f48e8f7708 | [
"MIT"
] | null | null | null | import pyodbc
import os
from multiprocessing import Process
def get_file_content(full_path):
"""Get file content function from read_sql_files_to_db.py"""
print(full_path)
bytes = min(32, os.path.getsize(full_path))
raw = open(full_path, 'rb').read(bytes)
if '\\xff\\xfe' in str(raw):
print("file is utf-16")
the_file = open(full_path, encoding="utf-16",
errors="backslashreplace")
data = the_file.read()
else:
print("file is latin-1")
the_file = open(full_path, encoding="latin-1",
errors="backslashreplace")
data = the_file.read()
return data
def update_database_data():
cnxn = pyodbc.connect('Driver={SQL Server};'
'Server=localhost;'
'Database=ravexdemo6;'
'Trusted_Connection=yes;queryTimeout=60', autocommit=True)
thesql = get_file_content("C:\\Users\\jong\\Documents\\GitHub\\data-projects\\python\\load_generator\\generate_load.sql")
cursor = cnxn.cursor()
cursor.execute(thesql)
cursor.close()
cnxn.close()
if __name__ == '__main__':
update_database_data()
# p1 = Process(target=update_database_data)
# p1.start()
# p2 = Process(target=update_database_data)
# p2.start()
# p3 = Process(target=update_database_data)
# p3.start()
# p4 = Process(target=update_database_data)
# p4.start()
# p5 = Process(target=update_database_data)
# p5.start()
# p1.join()
# p2.join()
# p3.join()
# p4.join()
# p5.join() | 32.612245 | 125 | 0.609512 | 194 | 1,598 | 4.798969 | 0.427835 | 0.105263 | 0.135338 | 0.145005 | 0.303974 | 0.137487 | 0 | 0 | 0 | 0 | 0 | 0.021886 | 0.256571 | 1,598 | 49 | 126 | 32.612245 | 0.761785 | 0.231539 | 0 | 0.133333 | 0 | 0.033333 | 0.232423 | 0.107527 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.1 | 0 | 0.2 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
80325695defe4f63bf8d054e65c1c88b1385a507 | 6,005 | py | Python | examples/plot_powerlaw_fgl.py | bio-datascience/GGLasso | ace4758f2a796defa7fd35ee9bf6f343934c6b65 | [
"MIT"
] | 17 | 2020-03-13T10:43:38.000Z | 2022-03-11T20:21:05.000Z | examples/plot_powerlaw_fgl.py | bio-datascience/GGLasso | ace4758f2a796defa7fd35ee9bf6f343934c6b65 | [
"MIT"
] | 7 | 2021-02-15T08:27:37.000Z | 2021-12-08T13:35:33.000Z | examples/plot_powerlaw_fgl.py | bio-datascience/GGLasso | ace4758f2a796defa7fd35ee9bf6f343934c6b65 | [
"MIT"
] | 4 | 2021-01-29T17:37:33.000Z | 2021-12-10T14:20:43.000Z | """
Fused Graphical Lasso experiment
=================================
We investigate the performance of Fused Graphical Lasso on powerlaw networks, compared to estimating the precision matrices independently with SGL.
In particular, we demonstrate that FGL - in contrast to SGL - is capable of estimating time-consistent precision matrices.
We generate a precision matrix with block-wise powerlaw networks.
At time K=5, one of the blocks disappears and another block appears. A third block decays exponentially over time (indexed by K).
"""
# sphinx_gallery_thumbnail_number = 2
import numpy as np
from sklearn.covariance import GraphicalLasso
from regain.covariance import TimeGraphicalLasso
from gglasso.solver.admm_solver import ADMM_MGL
from gglasso.helper.data_generation import time_varying_power_network, sample_covariance_matrix
from gglasso.helper.experiment_helper import lambda_grid, discovery_rate, error
from gglasso.helper.utils import get_K_identity
from gglasso.helper.experiment_helper import plot_evolution, plot_deviation, surface_plot, single_heatmap_animation
from gglasso.helper.model_selection import aic, ebic
p = 100
K = 10
N = 5000
M = 5
L = int(p/M)
reg = 'FGL'
Sigma, Theta = time_varying_power_network(p, K, M, scale = False, nxseed = 2340)
S, sample = sample_covariance_matrix(Sigma, N)
results = {}
results['truth'] = {'Theta' : Theta}
#%%
# Animate precision matrix over time
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# colored squares represent non-zero entries
#
anim = single_heatmap_animation(Theta)
# %%
# Parameter selection (FGL)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# We do a grid search over :math:`\lambda_1` and :math:`\lambda_2` values.
# On each grid point we evaluate True/False Discovery Rate (TPR/FPR), True/False Discovery of Differential edges and AIC and eBIC.
#
# Note: the package contains functions for doing this grid search, but here we also want to evaluate True and False positive rates on each grid points.
#
#
L1, L2, _ = lambda_grid(num1 = 10, num2 = 5, reg = reg)
grid1 = L1.shape[0]; grid2 = L2.shape[1]
ERR = np.zeros((grid1, grid2))
FPR = np.zeros((grid1, grid2))
TPR = np.zeros((grid1, grid2))
DFPR = np.zeros((grid1, grid2))
DTPR = np.zeros((grid1, grid2))
AIC = np.zeros((grid1, grid2))
BIC = np.zeros((grid1, grid2))
Omega_0 = get_K_identity(K,p)
Theta_0 = get_K_identity(K,p)
X_0 = np.zeros((K,p,p))
for g2 in np.arange(grid2):
for g1 in np.arange(grid1):
lambda1 = L1[g1,g2]
lambda2 = L2[g1,g2]
sol, info = ADMM_MGL(S, lambda1, lambda2, reg , Omega_0, Theta_0 = Theta_0, X_0 = X_0, tol = 1e-8, rtol = 1e-8, verbose = False, measure = False)
Theta_sol = sol['Theta']
Omega_sol = sol['Omega']
X_sol = sol['X']
# warm start
Omega_0 = Omega_sol.copy()
Theta_0 = Theta_sol.copy()
X_0 = X_sol.copy()
dr = discovery_rate(Theta_sol, Theta)
TPR[g1,g2] = dr['TPR']
FPR[g1,g2] = dr['FPR']
DTPR[g1,g2] = dr['TPR_DIFF']
DFPR[g1,g2] = dr['FPR_DIFF']
ERR[g1,g2] = error(Theta_sol, Theta)
AIC[g1,g2] = aic(S, Theta_sol, N)
BIC[g1,g2] = ebic(S, Theta_sol, N, gamma = 0.1)
# get optimal lambda
ix= np.unravel_index(np.nanargmin(BIC), BIC.shape)
ix2= np.unravel_index(np.nanargmin(AIC), AIC.shape)
l1opt = L1[ix]
l2opt = L2[ix]
print("Optimal lambda values: (l1,l2) = ", (l1opt,l2opt))
# %%
# Solving time-varying problems with SGL
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# We now solve K independent SGL problems and find the best :math:`\lambda_1` parameter.
#
#
ALPHA = 2*np.logspace(start = -3, stop = -1, num = 10, base = 10)
SGL_BIC = np.zeros(len(ALPHA))
all_res = list()
for j in range(len(ALPHA)):
res = np.zeros((K,p,p))
singleGL = GraphicalLasso(alpha = ALPHA[j], tol = 1e-3, max_iter = 20, verbose = False)
for k in np.arange(K):
model = singleGL.fit(sample[k,:,:].T)
res[k,:,:] = model.precision_
all_res.append(res)
SGL_BIC[j] = ebic(S, res, N, gamma = 0.1)
ix_SGL = np.argmin(SGL_BIC)
results['SGL'] = {'Theta' : all_res[ix_SGL]}
# %%
# Solve with ADMM
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Omega_0 = get_K_identity(K,p)
sol, info = ADMM_MGL(S, l1opt, l2opt, reg, Omega_0, rho = 1, max_iter = 500, \
tol = 1e-10, rtol = 1e-10, verbose = False, measure = True)
results['ADMM'] = {'Theta' : sol['Theta']}
# %%
# Solve with regain
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# ``regain`` needs data in format (N*K,p).
# ``regain`` includes the TV penalty also on the diagonal, hence results may be slightly different than ``ADMM_MGL``.
tmp = sample.transpose(1,0,2).reshape(p,-1).T
ltgl = TimeGraphicalLasso(alpha = N*l1opt, beta = N*l2opt , psi = 'l1', \
rho = 1., tol = 1e-10, rtol = 1e-10, max_iter = 500, verbose = False)
ltgl = ltgl.fit(X = tmp, y = np.repeat(np.arange(K),N))
results['LTGL'] = {'Theta' : ltgl.precision_}
# %%
# Plotting: deviation, eBIC surface, recovery
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# Description of plots:
#
# 1) Deviation of subsequent precision matrices: SGL varies heavily over time while FGL is able to recover the true deviation quite well.
#
# 2) Plot each entry of the disappearing block over time (one line = one precision matrix entry)
#
# 3) Plot each entry of the exponentially decaying block over time (one line = one precision matrix entry)
#
# 4) Surface plot of eBIC over the grid of :math:`\lambda_1` and :math:`\lambda_2`.
#
Theta_admm = results.get('ADMM').get('Theta')
Theta_ltgl = results.get('LTGL').get('Theta')
Theta_sgl = results.get('SGL').get('Theta')
print("Norm(Regain-ADMM)/Norm(ADMM):", np.linalg.norm(Theta_ltgl - Theta_admm)/ np.linalg.norm(Theta_admm))
plot_deviation(results)
plot_evolution(results, block = 0, L = L)
plot_evolution(results, block = 2, L = L)
fig = surface_plot(L1, L2, BIC, name = 'eBIC')
| 30.175879 | 154 | 0.643797 | 889 | 6,005 | 4.239595 | 0.294713 | 0.018573 | 0.022287 | 0.031573 | 0.126559 | 0.079331 | 0.046697 | 0.022818 | 0.022818 | 0 | 0 | 0.031674 | 0.190341 | 6,005 | 198 | 155 | 30.328283 | 0.743521 | 0.349875 | 0 | 0.023256 | 0 | 0 | 0.044427 | 0.007534 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.104651 | 0 | 0.104651 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
803575bd4b6af7e9e93c67f9cf32c400ade94b42 | 849 | py | Python | analytics/setup.py | Genometric/ToolVisibilityQuantifier | 82572a678c27820ec1a8dbbc54dcee18ee601096 | [
"MIT"
] | 3 | 2020-04-03T02:00:10.000Z | 2020-06-18T01:39:22.000Z | analytics/setup.py | Genometric/ToolVisibilityQuantifier | 82572a678c27820ec1a8dbbc54dcee18ee601096 | [
"MIT"
] | 1 | 2020-07-14T06:39:02.000Z | 2020-07-14T06:39:02.000Z | analytics/setup.py | Genometric/ToolVisibilityQuantifier | 82572a678c27820ec1a8dbbc54dcee18ee601096 | [
"MIT"
] | 1 | 2020-05-22T20:12:47.000Z | 2020-05-22T20:12:47.000Z | """
Package install information.
This build script provides information about
the `lib` package (e.g., the name and version,
what should be included in the built package,
etc.).
"""
import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
setuptools.setup(
name="TVQ-Scripts",
version="0.0.1",
author="https://github.com/Genometric/TVQ/graphs/contributors",
description="Scripts to post-process data generated by TVQ.",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/Genometric/TVQ",
packages=setuptools.find_packages(),
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
],
python_requires='>=3.7',
)
| 27.387097 | 67 | 0.681979 | 104 | 849 | 5.490385 | 0.701923 | 0.105079 | 0.049037 | 0.084063 | 0.094571 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008646 | 0.182568 | 849 | 30 | 68 | 28.3 | 0.814121 | 0.210836 | 0 | 0 | 0 | 0 | 0.427492 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
803902a284694bf2e2871a900f13d53f88448114 | 13,479 | py | Python | tests/pytest/manipulation_tests/test_dims.py | SX-Aurora/nlcpy | 0a53eec8778073bc48b12687b7ce37ab2bf2b7e0 | [
"BSD-3-Clause"
] | 11 | 2020-07-31T02:21:55.000Z | 2022-03-10T03:12:11.000Z | tests/pytest/manipulation_tests/test_dims.py | SX-Aurora/nlcpy | 0a53eec8778073bc48b12687b7ce37ab2bf2b7e0 | [
"BSD-3-Clause"
] | null | null | null | tests/pytest/manipulation_tests/test_dims.py | SX-Aurora/nlcpy | 0a53eec8778073bc48b12687b7ce37ab2bf2b7e0 | [
"BSD-3-Clause"
] | null | null | null | #
# * The source code in this file is based on the source code of CuPy.
#
# # NLCPy License #
#
# Copyright (c) 2020-2021 NEC Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither NEC Corporation nor the names of its contributors may be
# used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# # CuPy License #
#
# Copyright (c) 2015 Preferred Infrastructure, Inc.
# Copyright (c) 2015 Preferred Networks, Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import unittest
import pytest
import numpy
import nlcpy
from nlcpy import testing
class TestDims(unittest.TestCase):
@testing.with_requires('numpy>=1.10')
@testing.for_all_dtypes()
@testing.numpy_nlcpy_array_equal()
def test_broadcast_to(self, xp, dtype):
# Note that broadcast_to is only supported on numpy>=1.10
a = testing.shaped_arange((3, 1, 4), xp, dtype)
b = xp.broadcast_to(a, (2, 3, 3, 4))
return b
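The broadcast being tested follows NumPy's standard rules: a `(3, 1, 4)` array gains a leading axis and its size-1 axis is repeated, with no data copied:

```python
import numpy as np

a = np.arange(12).reshape(3, 1, 4)
b = np.broadcast_to(a, (2, 3, 3, 4))

print(b.shape)  # (2, 3, 3, 4)
# The two leading slices are identical views of the same data.
print((b[0] == b[1]).all())  # True
```

The failing cases in the tests below (`(1, 3, 4)` and `(3, 4)` targets) violate these rules: a non-unit axis cannot shrink, and the target shape cannot have fewer dimensions than the source.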
@testing.with_requires('numpy>=1.10')
@testing.for_all_dtypes()
@testing.numpy_nlcpy_raises()
def test_broadcast_to_fail(self, xp, dtype):
# Note that broadcast_to is only supported on numpy>=1.10
a = testing.shaped_arange((3, 1, 4), xp, dtype)
xp.broadcast_to(a, (1, 3, 4))
@testing.with_requires('numpy>=1.10')
@testing.for_all_dtypes()
@testing.numpy_nlcpy_raises()
def test_broadcast_to_short_shape(self, xp, dtype):
# Note that broadcast_to is only supported on numpy>=1.10
a = testing.shaped_arange((1, 3, 4), xp, dtype)
xp.broadcast_to(a, (3, 4))
@testing.for_all_dtypes()
@testing.numpy_nlcpy_array_equal()
def test_broadcast_to_numpy19(self, xp, dtype):
# Note that broadcast_to is only supported on numpy>=1.10
a = testing.shaped_arange((3, 1, 4), xp, dtype)
if xp is nlcpy:
b = xp.broadcast_to(a, (2, 3, 3, 4))
else:
dummy = xp.empty((2, 3, 3, 4))
b, _ = xp.broadcast_arrays(a, dummy)
return b
@testing.for_all_dtypes()
def test_broadcast_to_fail_numpy19(self, dtype):
# Note that broadcast_to is only supported on numpy>=1.10
a = testing.shaped_arange((3, 1, 4), nlcpy, dtype)
with self.assertRaises(ValueError):
nlcpy.broadcast_to(a, (1, 3, 4))
@testing.for_all_dtypes()
def test_broadcast_to_short_shape_numpy19(self, dtype):
# Note that broadcast_to is only supported on numpy>=1.10
a = testing.shaped_arange((1, 3, 4), nlcpy, dtype)
with self.assertRaises(ValueError):
nlcpy.broadcast_to(a, (3, 4))
@testing.numpy_nlcpy_array_equal()
def test_expand_dims0(self, xp):
a = testing.shaped_arange((2, 3), xp)
return xp.expand_dims(a, 0)
@testing.numpy_nlcpy_array_equal()
def test_expand_dims1(self, xp):
a = testing.shaped_arange((2, 3), xp)
return xp.expand_dims(a, 1)
@testing.numpy_nlcpy_array_equal()
def test_expand_dims2(self, xp):
a = testing.shaped_arange((2, 3), xp)
return xp.expand_dims(a, 2)
@testing.numpy_nlcpy_array_equal()
def test_expand_dims_negative1(self, xp):
a = testing.shaped_arange((2, 3), xp)
return xp.expand_dims(a, -2)
@testing.numpy_nlcpy_raises()
def test_expand_dims_negative2(self, xp):
a = testing.shaped_arange((2, 3), xp)
return xp.expand_dims(a, -4)
@testing.numpy_nlcpy_array_equal()
def test_expand_dims_tuple_axis(self, xp):
a = testing.shaped_arange((2, 2, 2), xp)
return [xp.expand_dims(a, axis) for axis in [
(0, 1, 2),
(0, -1, -2),
(0, 3, 5),
(0, -3, -5),
(),
(1,),
]]
def test_expand_dims_out_of_range(self):
for xp in (numpy, nlcpy):
a = testing.shaped_arange((2, 2, 2), xp)
for axis in [(1, -6), (1, 5)]:
with pytest.raises(numpy.AxisError):
xp.expand_dims(a, axis)
def test_expand_dims_repeated_axis(self):
for xp in (numpy, nlcpy):
a = testing.shaped_arange((2, 2, 2), xp)
with pytest.raises(ValueError):
xp.expand_dims(a, (1, 1))
@testing.numpy_nlcpy_array_equal()
def test_squeeze1(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return a.squeeze()
@testing.numpy_nlcpy_array_equal()
def test_squeeze2(self, xp):
a = testing.shaped_arange((2, 3, 4), xp)
return xp.squeeze(a)
@testing.numpy_nlcpy_array_equal()
def test_squeeze_int_axis1(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return a.squeeze(axis=2)
@testing.numpy_nlcpy_array_equal()
def test_squeeze_int_axis2(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return xp.squeeze(a, axis=-3)
@testing.with_requires('numpy>=1.13')
@testing.numpy_nlcpy_raises()
def test_squeeze_int_axis_failure1(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
xp.squeeze(a, axis=-9)
def test_squeeze_int_axis_failure2(self):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), nlcpy)
with self.assertRaises(nlcpy.core.error._AxisError):
nlcpy.squeeze(a, axis=-9)
@testing.numpy_nlcpy_array_equal()
def test_squeeze_tuple_axis1(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return xp.squeeze(a, axis=(2, 4))
@testing.numpy_nlcpy_array_equal()
def test_squeeze_tuple_axis2(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return xp.squeeze(a, axis=(-4, -3))
@testing.numpy_nlcpy_array_equal()
def test_squeeze_tuple_axis3(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return xp.squeeze(a, axis=(4, 2))
@testing.numpy_nlcpy_array_equal()
def test_squeeze_tuple_axis4(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return xp.squeeze(a, axis=())
@testing.with_requires('numpy>=1.13')
@testing.numpy_nlcpy_raises()
def test_squeeze_tuple_axis_failure1(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
xp.squeeze(a, axis=(-9,))
@testing.numpy_nlcpy_raises()
def test_squeeze_tuple_axis_failure2(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
xp.squeeze(a, axis=(2, 2))
def test_squeeze_tuple_axis_failure3(self):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), nlcpy)
with self.assertRaises(nlcpy.core.error._AxisError):
nlcpy.squeeze(a, axis=(-9,))
@testing.numpy_nlcpy_array_equal()
def test_squeeze_scalar1(self, xp):
a = testing.shaped_arange((), xp)
return xp.squeeze(a, axis=0)
@testing.numpy_nlcpy_array_equal()
def test_squeeze_scalar2(self, xp):
a = testing.shaped_arange((), xp)
return xp.squeeze(a, axis=-1)
@testing.with_requires('numpy>=1.13')
@testing.numpy_nlcpy_raises()
def test_squeeze_scalar_failure1(self, xp):
a = testing.shaped_arange((), xp)
xp.squeeze(a, axis=-2)
@testing.with_requires('numpy>=1.13')
@testing.numpy_nlcpy_raises()
def test_squeeze_scalar_failure2(self, xp):
a = testing.shaped_arange((), xp)
xp.squeeze(a, axis=1)
def test_squeeze_scalar_failure3(self):
a = testing.shaped_arange((), nlcpy)
with self.assertRaises(nlcpy.core.error._AxisError):
nlcpy.squeeze(a, axis=-2)
def test_squeeze_scalar_failure4(self):
a = testing.shaped_arange((), nlcpy)
with self.assertRaises(nlcpy.core.error._AxisError):
nlcpy.squeeze(a, axis=1)
@testing.numpy_nlcpy_raises()
def test_squeeze_failure(self, xp):
a = testing.shaped_arange((2, 1, 3, 4), xp)
xp.squeeze(a, axis=2)
@testing.numpy_nlcpy_array_equal()
def test_external_squeeze(self, xp):
a = testing.shaped_arange((1, 2, 1, 3, 1, 1, 4, 1), xp)
return xp.squeeze(a)
@testing.parameterize(
{'shapes': [(), ()]},
{'shapes': [(0,), (0,)]},
{'shapes': [(1,), (1,)]},
{'shapes': [(2,), (2,)]},
{'shapes': [(0,), (1,)]},
{'shapes': [(2, 3), (1, 3)]},
{'shapes': [(2, 1, 3, 4), (3, 1, 4)]},
{'shapes': [(4, 3, 2, 3), (2, 3)]},
{'shapes': [(2, 0, 1, 1, 3), (2, 1, 0, 0, 3)]},
{'shapes': [(0, 1, 1, 3), (2, 1, 0, 0, 3)]},
{'shapes': [(0, 1, 1, 0, 3), (5, 2, 0, 1, 0, 0, 3), (2, 1, 0, 0, 0, 3)]},
)
class TestBroadcastArrays(unittest.TestCase):
@testing.for_all_dtypes()
@testing.for_orders('CF')
@testing.numpy_nlcpy_array_equal()
def test_broadcast_arrays(self, xp, dtype, order):
arrays = [testing.shaped_arange(s, xp, dtype, order) for s in self.shapes]
return xp.broadcast_arrays(*arrays)
@testing.numpy_nlcpy_array_equal()
def test_broadcast_arrays_with_list_input(self, xp):
arrays = [testing.shaped_arange(s, xp).tolist() for s in self.shapes]
return xp.broadcast_arrays(*arrays)
@testing.parameterize(
{'shapes': [(3,), (2,)]},
{'shapes': [(3, 2), (2, 3)]},
{'shapes': [(3, 2), (3, 4)]},
{'shapes': [(0, ), (2, )]},
)
class TestBroadcastArraysInvalidShape(unittest.TestCase):
@testing.numpy_nlcpy_raises()
def test_broadcast_arrays_invalid_shape(self, xp):
arrays = [testing.shaped_arange(s, xp) for s in self.shapes]
xp.broadcast_arrays(*arrays)
class TestBroadcastArraysFailure(unittest.TestCase):
    def test_broadcast_arrays_subok(self):
        # subok=True is not implemented; assert the expected error directly
        with self.assertRaises(NotImplementedError):
            nlcpy.broadcast_arrays(nlcpy.empty([1, 3]), nlcpy.empty([2, 1]), subok=True)
class TestAtLeast(unittest.TestCase):
def check_atleast(self, func, xp):
a = testing.shaped_arange((), xp, 'i')
b = testing.shaped_arange((2,), xp, 'f')
c = testing.shaped_arange((3, 4), xp, 'd')
d = testing.shaped_arange((4, 2, 3), xp, 'F', order='F')
e = 1
f = xp.float32(1)
return func(a, b, c, d, e, f)
@testing.numpy_nlcpy_array_equal()
def test_atleast_1d(self, xp):
return self.check_atleast(xp.atleast_1d, xp)
@testing.numpy_nlcpy_array_equal()
def test_atleast_1d2(self, xp):
a = testing.shaped_arange((4, 2, 3), xp)
return xp.atleast_1d(a)
@testing.numpy_nlcpy_array_equal()
def test_atleast_2d(self, xp):
return self.check_atleast(xp.atleast_2d, xp)
@testing.numpy_nlcpy_array_equal()
def test_atleast_2d2(self, xp):
a = testing.shaped_arange((4, 2, 3), xp)
return xp.atleast_2d(a)
@testing.numpy_nlcpy_array_equal()
def test_atleast_3d(self, xp):
return self.check_atleast(xp.atleast_3d, xp)
@testing.numpy_nlcpy_array_equal()
def test_atleast_3d2(self, xp):
a = testing.shaped_arange((4, 2, 3), xp)
return xp.atleast_3d(a)
| 37.234807 | 88 | 0.636472 | 1,956 | 13,479 | 4.213701 | 0.143661 | 0.038219 | 0.103737 | 0.094637 | 0.637224 | 0.606285 | 0.581776 | 0.55205 | 0.490415 | 0.395535 | 0 | 0.039915 | 0.234216 | 13,479 | 361 | 89 | 37.33795 | 0.758574 | 0.228726 | 0 | 0.408907 | 0 | 0 | 0.016836 | 0 | 0 | 0 | 0 | 0 | 0.024292 | 1 | 0.186235 | false | 0 | 0.020243 | 0.012146 | 0.34413 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
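The NumPy-style broadcasting rule exercised by the tests above can be sketched in pure Python. `broadcast_to_shape` below is a hypothetical helper (not part of nlcpy or NumPy) that only checks shape compatibility, mirroring the success and failure cases of `test_broadcast_to`, `test_broadcast_to_fail`, and `test_broadcast_to_short_shape`:

```python
def broadcast_to_shape(src, target):
    """Return True if an array of shape `src` can broadcast to shape `target`."""
    if len(src) > len(target):
        return False  # broadcasting may only add leading dimensions
    # align trailing dimensions; each source dim must match or be 1
    for s, t in zip(reversed(src), reversed(target)):
        if s != t and s != 1:
            return False
    return True

# the same cases the tests above exercise
assert broadcast_to_shape((3, 1, 4), (2, 3, 3, 4))   # ok: the 1 stretches to 3
assert not broadcast_to_shape((3, 1, 4), (1, 3, 4))  # a 3 cannot shrink to 1
assert not broadcast_to_shape((1, 3, 4), (3, 4))     # target has fewer dims
```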
803c3c3bd07616f3ed463288eceacaefcc475176 | 1,200 | py | Python | chromedriver.py | taha-shafique/colab | 481e110ee6796dac2ef82c22fa6e138688fcd0b0 | [
"Apache-2.0"
] | null | null | null | chromedriver.py | taha-shafique/colab | 481e110ee6796dac2ef82c22fa6e138688fcd0b0 | [
"Apache-2.0"
] | null | null | null | chromedriver.py | taha-shafique/colab | 481e110ee6796dac2ef82c22fa6e138688fcd0b0 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 29 23:04:12 2021
@author: tahashafique
"""
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
class chrome_driver:
def __init__(self, executable_path, max_window = True, window_size = None, incognito = True, headless = False):
        self.options = webdriver.ChromeOptions()
        # all options must be added before the driver is created, or they are ignored
        self.options.add_argument("user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36")
        if headless:
            self.options.add_argument('--headless')
        if window_size is not None:
            self.options.add_argument(f"window-size={window_size[0]},{window_size[1]}")
        if incognito:
            self.options.add_argument("--incognito")
        self.driver = webdriver.Chrome(executable_path=executable_path, options=self.options)
        if max_window:
            self.driver.maximize_window()
def close_driver(self):
self.driver.quit() | 30.769231 | 170 | 0.608333 | 142 | 1,200 | 4.978873 | 0.528169 | 0.093352 | 0.079208 | 0.12447 | 0.07355 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048611 | 0.28 | 1,200 | 39 | 171 | 30.769231 | 0.769676 | 0.085 | 0 | 0 | 0 | 0.058824 | 0.182569 | 0.041284 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
803c7c054becf664ad3ff0f679aef0fcea592bbf | 992 | py | Python | Core/TestConvInst.py | gabrieloandco/RiscV-Arqui1 | 6495370d23d3a7e9e1f579a1b4e8c1be799c3913 | [
"MIT"
] | null | null | null | Core/TestConvInst.py | gabrieloandco/RiscV-Arqui1 | 6495370d23d3a7e9e1f579a1b4e8c1be799c3913 | [
"MIT"
] | null | null | null | Core/TestConvInst.py | gabrieloandco/RiscV-Arqui1 | 6495370d23d3a7e9e1f579a1b4e8c1be799c3913 | [
"MIT"
] | null | null | null | from myhdl import *
from ConvInst import *
import random
def tbConvInst():
datain = Signal(modbv(0)[32:])
#din = Signal(modbv(0)[32:])
dataout = Signal(modbv(0)[32:])
#dout = Signal(modbv(0)[32:])
dut = ConvInst(datain,dataout)
interv = delay(7)
@always(interv)
def stim():
#hi = bin(random.randint(0,2**25-1))[2:]
#lo= random.choice(['0000011','0010011','1101111','0100011','1100011','0110111','0010111','11001110'])
#seed = int(hi+lo,2)
datain.next= random.randint(0,2**32-1) #seed
#din.next = seed
#dout.next =ConvInstCheck(seed)
#if dout != dataout:
# print('lo: ' + lo)
# print('datain: ' + str(datain))
# print('din: ' + str(din))
# print('dataout: ' + str(dataout))
# print('dout: ' + str(dout))
# assert dout == dataout, 'Theres Something Wrong'
return instances()
test = tbConvInst()
test.config_sim(trace=True)
test.run_sim(1000)
# Fix the problem with tuples: AttributeError: 'tuple' object has no attribute 'config_sim'
| 24.195122 | 104 | 0.646169 | 137 | 992 | 4.656934 | 0.50365 | 0.068966 | 0.075235 | 0.087774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103241 | 0.160282 | 992 | 40 | 105 | 24.8 | 0.662665 | 0.571573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
803f229f31bff7100d337f359e938d71b79934ae | 631 | py | Python | LuHR_showcode/x0000.py | luhralive/python | b74bdc4c7bc8e75aee9530c27d621a773a71ac67 | [
"MIT"
] | 1 | 2019-05-05T11:55:50.000Z | 2019-05-05T11:55:50.000Z | LuHR_showcode/x0000.py | luhralive/python | b74bdc4c7bc8e75aee9530c27d621a773a71ac67 | [
"MIT"
] | null | null | null | LuHR_showcode/x0000.py | luhralive/python | b74bdc4c7bc8e75aee9530c27d621a773a71ac67 | [
"MIT"
] | null | null | null | # Add a red number to the top-right corner of your QQ avatar (or Weibo avatar), similar to the unread-message count badge in WeChat, as shown in the example image
from PIL import Image,ImageDraw,ImageFont
def add_num( imgPath, color = "#ff0000", msg = '4'):
try:
img = Image.open(imgPath, 'r')
draw = ImageDraw.Draw(img)
imagefont = ImageFont.truetype('C:/Windows/Fonts/Arial.ttf', 40)
width, height = img.size
        draw.text((width - 40, 0), msg, font=imagefont, fill=color)  # origin is at the top-left corner; the Y axis points downward
img.save('result.jpg', 'jpeg')
    except Exception:
print("can't load picture")
if __name__ == "__main__":
add_num('D:/git/Project/picture/showcode/picture01.jpg',msg = '5')
| 31.55 | 81 | 0.616482 | 81 | 631 | 4.679012 | 0.753086 | 0.031662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02686 | 0.232964 | 631 | 19 | 82 | 33.210526 | 0.756198 | 0.103011 | 0 | 0 | 0 | 0 | 0.222836 | 0.130755 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
804137226516ea6cf2d963db93c0457e0bf4dc2c | 4,126 | py | Python | sonar_to_raw/sonar_to_raw.py | jan-bogaerts/sonar_tools | 16a38c4318ea38ec4259069616da33765de0f91a | [
"MIT"
] | null | null | null | sonar_to_raw/sonar_to_raw.py | jan-bogaerts/sonar_tools | 16a38c4318ea38ec4259069616da33765de0f91a | [
"MIT"
] | null | null | null | sonar_to_raw/sonar_to_raw.py | jan-bogaerts/sonar_tools | 16a38c4318ea38ec4259069616da33765de0f91a | [
"MIT"
] | null | null | null | __author__ = 'Jan Bogaerts'
__copyright__ = "Copyright 2018, Elastetic"
__credits__ = []
__maintainer__ = "Jan Bogaerts"
__email__ = "jb@elastetic.com"
__status__ = "Development" # "Prototype", or "Production"
"""
a small tool to convert files from the sonar corpus into raw text data (as it only comes annotated).
sonar source: https://ivdnt.org/downloads/taalmaterialen/tstc-sonar-corpus (login: janb, pwd: you-know-what-the-default-one)
warning: extremely large file
"""
import logging
logger = logging.getLogger(__name__)
import os
import untangle
SKIP_SPACE_BEFORE = [')', ',', '.', ':', '/']
SKIP_SPACE_AFTER = ['(', '#', ':', '/']
QUOTES = ["'", '"']
quote_count = 0
def process_paragraph(paragraph):
prev_word = ''
quote_count = 0
text = ''
for sentence in paragraph.s:
for word_el in sentence.w:
word = word_el.cdata
if not prev_word == '' and (not word in SKIP_SPACE_BEFORE and not prev_word in SKIP_SPACE_AFTER):
if not ((prev_word in QUOTES and quote_count % 2 == 1) or (word in QUOTES and quote_count % 2 == 1)):
text += ' '
text += word
prev_word = word
if word in QUOTES:
quote_count += 1
prev_word = ''
quote_count = 0
text += ' '
text += '\n'
return text
def iterate_files(input_path, output_path):
"""
iterate over all the sonar files and extract the text out of it. Store the raw text in a new file.
:return: None
"""
if not os.path.exists(output_path):
os.makedirs(output_path)
directory = os.fsencode(input_path)
for file in os.listdir(directory):
filename = os.fsdecode(file)
if filename.endswith(".xml"):
try:
full_file = os.path.join(input_path, filename)
obj = untangle.parse(full_file)
text = ""
if hasattr(obj.DCOI.text.body, 'div1'):
body = obj.DCOI.text.body.div1
elif hasattr(obj.DCOI.text.body, 'div0'):
body = obj.DCOI.text.body.div0
else:
body = obj.DCOI.text.body.div
if type(body) is list:
for div in body:
if hasattr(div, 'head'):
if hasattr(div.head, 's'):
                                text += process_paragraph(div.head)  # accumulate across divs instead of overwriting
elif hasattr(div.head, 'cdata'):
text += div.head.cdata
text += '\n'
if hasattr(div, 'p'):
for paragraph in div.p:
text += process_paragraph(paragraph)
else:
if hasattr(body, 'head'):
if hasattr(body.head, 's'):
text = process_paragraph(body.head)
elif hasattr(body.head, 'cdata'):
text += body.head.cdata
text += '\n'
if hasattr(body, 'p'):
for paragraph in body.p:
text += process_paragraph(paragraph)
output_file = os.path.join(output_path, os.path.splitext(filename)[0] + '.txt')
with open(output_file, 'w') as out:
out.write(text)
except KeyboardInterrupt as key:
exit(1)
except:
logger.exception("failed to convert file " + filename)
if __name__ == "__main__":
# execute only if run as a script
print("start")
start_point = '../sonarCorpus/SoNaRCorpus_NC_1.2/SONAR500/DCOI/'
for root, directories, filenames in os.walk(start_point):
root_part = root[len(start_point):]
for dir in directories:
input_dir = os.path.join(root, dir)
print(input_dir)
output_path = os.path.join('.', 'output', root_part, dir)
print(output_path)
iterate_files(input_dir, output_path)
| 36.513274 | 124 | 0.524721 | 468 | 4,126 | 4.435897 | 0.331197 | 0.033719 | 0.026493 | 0.036127 | 0.179672 | 0.070328 | 0.026012 | 0.026012 | 0 | 0 | 0 | 0.008799 | 0.366457 | 4,126 | 112 | 125 | 36.839286 | 0.785386 | 0.042172 | 0 | 0.157303 | 0 | 0 | 0.061341 | 0.013086 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022472 | false | 0 | 0.033708 | 0 | 0.067416 | 0.033708 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8041c204d2d2ef90e334feb2ab2a86790a4d7bb6 | 655 | py | Python | databases/migrations/2019_11_07_110837_create_entries_table.py | mandarvaze/life-journal-masonite | e9482fd0da7f8383d31e46160a3ab297281a6171 | [
"MIT"
] | null | null | null | databases/migrations/2019_11_07_110837_create_entries_table.py | mandarvaze/life-journal-masonite | e9482fd0da7f8383d31e46160a3ab297281a6171 | [
"MIT"
] | null | null | null | databases/migrations/2019_11_07_110837_create_entries_table.py | mandarvaze/life-journal-masonite | e9482fd0da7f8383d31e46160a3ab297281a6171 | [
"MIT"
] | null | null | null | """Migration for Entries Table."""
from orator.migrations import Migration
class CreateEntriesTable(Migration):
"""Migration Class for Entries Table."""
def up(self):
"""Run the migrations."""
with self.schema.create("entries") as table:
table.increments("id")
table.string("note")
table.integer("rating")
table.date("entry_for_date")
table.integer("author_id").unsigned()
table.foreign("author_id").references("id").on("users")
table.timestamps()
def down(self):
"""Revert the migrations."""
self.schema.drop("entries")
| 27.291667 | 67 | 0.59084 | 69 | 655 | 5.550725 | 0.536232 | 0.052219 | 0.078329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261069 | 655 | 23 | 68 | 28.478261 | 0.791322 | 0.161832 | 0 | 0 | 0 | 0 | 0.123106 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
804207d0605a0af0c0121c80e73f1b485bc1dae3 | 2,613 | py | Python | poise/poise.py | harbi/poise | a9a659cd95c5101d78d9ae1a90f345874324e307 | [
"MIT"
] | 5 | 2019-08-04T08:04:09.000Z | 2021-12-29T16:56:55.000Z | poise/poise.py | harbi/poise | a9a659cd95c5101d78d9ae1a90f345874324e307 | [
"MIT"
] | null | null | null | poise/poise.py | harbi/poise | a9a659cd95c5101d78d9ae1a90f345874324e307 | [
"MIT"
] | null | null | null | import click
import scrapy
from scrapy.crawler import CrawlerProcess
from langdetect import detect
__author__ = 'Abdullah Alharbi'
def get_urls(keyword):
return [
'https://www.goodreads.com/quotes/tag/{}'.format(keyword),
]
class QuotesSpider(scrapy.Spider):
name = "quotes"
    def __init__(self, keyword, count, language, **kwargs):
        super().__init__(**kwargs)
        self.start_urls = get_urls(keyword)
        self.count = count
        self.language = language
        self.quotes = []  # instance attribute, so runs do not share state through the class
def parse(self, response):
for quote in response.css('div.quoteText'):
if len(self.quotes) < self.count:
text = quote.css('div.quoteText::text').extract_first()
text = text.replace('\n', '')
text = text.strip()
if text[0] == '“':
text = text[1:]
if text[-1] == '”':
text = text[:-1]
if detect(text) != self.language:
continue
author = quote.css('span.authorOrTitle::text').extract_first()
author = author.replace('\n', '')
author = author.strip()
if author[-1] == ',':
author = author[:-1]
result = {'text': text, 'author': author}
yield result
print('{}\n'.format(result))
self.quotes.append(result)
next_page = response.css('a.next_page::attr("href")').extract_first()
if next_page is not None and len(self.quotes) < self.count:
yield response.follow(next_page, self.parse)
def start(keyword, count, language, format):
settings = {
'FEED_FORMAT': '{}'.format(format),
'FEED_URI': 'quotes.{}'.format(format),
'LOG_ENABLED': False,
}
process = CrawlerProcess(settings=settings)
process.crawl(QuotesSpider, keyword=keyword, count=count, language=language)
process.start()
@click.command()
@click.option('--keyword', '-k', type=str, required=True)
@click.option('--count', '-c', type=int, default=1, show_default=True)
@click.option('--language', '-l', type=str, default='en', show_default=True)
@click.option('--format', '-f', type=click.Choice(['json', 'xml', 'csv']), default='json', show_default=True)
def get(keyword, count, language, format):
click.echo(
'\n✨ Poise, a CLI for retrieving quotes on Goodreads 📚\n'
)
click.echo(
'Retrieving {} {} quote(s) in {}...\n'.format(count, keyword, language)
)
start(keyword, count, language, format)
if __name__ == '__main__':
get()
| 30.741176 | 109 | 0.567164 | 292 | 2,613 | 4.972603 | 0.35274 | 0.041322 | 0.055096 | 0.053719 | 0.108815 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003698 | 0.275545 | 2,613 | 84 | 110 | 31.107143 | 0.762282 | 0 | 0 | 0.03125 | 0 | 0 | 0.138155 | 0.018752 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078125 | false | 0 | 0.0625 | 0.015625 | 0.203125 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8043e5f4d1359c25072690e31f281a2aca8cbb06 | 394 | py | Python | init_solutions.py | CLAHRCWessex/SymmetricTSP | 2cfce4146ece0c784aa62f1b0e2ac1cb2e91b6c4 | [
"MIT"
] | 1 | 2020-06-01T22:56:11.000Z | 2020-06-01T22:56:11.000Z | init_solutions.py | CLAHRCWessex/SymmetricTSP | 2cfce4146ece0c784aa62f1b0e2ac1cb2e91b6c4 | [
"MIT"
] | null | null | null | init_solutions.py | CLAHRCWessex/SymmetricTSP | 2cfce4146ece0c784aa62f1b0e2ac1cb2e91b6c4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Sep 14 15:57:46 2017
@author: tm3y13
"""
from random import shuffle
def random_tour(tour):
"""
    Initial solution to tour is pseudo-random
"""
rnd_tour = tour[1:len(tour)-1]
base_city = tour[0]
shuffle(rnd_tour)
rnd_tour.append(base_city)
rnd_tour.insert(0, base_city)
return rnd_tour
| 18.761905 | 46 | 0.598985 | 57 | 394 | 3.982456 | 0.596491 | 0.154185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070671 | 0.281726 | 394 | 21 | 47 | 18.761905 | 0.731449 | 0.296954 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8044be3f76a1e16d32b5f385fa802022a7352190 | 696 | py | Python | examples/basic/colormaps.py | mikami520/vedo | 1a3abcf3f1e495287e8934d9b5bb07b511ab8be5 | [
"MIT"
] | 1 | 2022-03-22T21:49:29.000Z | 2022-03-22T21:49:29.000Z | examples/basic/colormaps.py | mikami520/vedo | 1a3abcf3f1e495287e8934d9b5bb07b511ab8be5 | [
"MIT"
] | null | null | null | examples/basic/colormaps.py | mikami520/vedo | 1a3abcf3f1e495287e8934d9b5bb07b511ab8be5 | [
"MIT"
] | null | null | null | """
Example usage of cmap() to assign a color to each mesh vertex
by looking it up in matplotlib database of colormaps
"""
print(__doc__)
from vedo import Plotter, Mesh, dataurl
# these are some of the matplotlib color maps
maps = [
"afmhot",
"binary",
"bone",
"cool",
"coolwarm",
"copper",
"gist_earth",
"gray",
"hot",
"jet",
"rainbow",
"winter",
]
mug = Mesh(dataurl+"mug.ply")
scalars = mug.points()[:, 1] # let y-coord be the scalar
plt = Plotter(N=len(maps))
for i, key in enumerate(maps): # for each available color map name
imug = mug.clone(deep=False).cmap(key, scalars, n=5)
plt.at(i).show(imug, key)
plt.interactive().close()
| 20.470588 | 67 | 0.630747 | 101 | 696 | 4.29703 | 0.722772 | 0.050691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003683 | 0.219828 | 696 | 33 | 68 | 21.090909 | 0.79558 | 0.310345 | 0 | 0 | 0 | 0 | 0.157447 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8045e28b5450ff355612318f9bf36e0b5119a567 | 2,696 | py | Python | Class.py | samli6479/PythonLearn | 2ad78f62d58612132d3a3759aecec4a52381566f | [
"Apache-2.0"
] | null | null | null | Class.py | samli6479/PythonLearn | 2ad78f62d58612132d3a3759aecec4a52381566f | [
"Apache-2.0"
] | null | null | null | Class.py | samli6479/PythonLearn | 2ad78f62d58612132d3a3759aecec4a52381566f | [
"Apache-2.0"
] | null | null | null | ''' class <name>:
<suite>
Assignments & def in <suite> create attributes of the class'''
# Object Construction
# Idea: All bank accounts have a balance and an account holder
# The account class should add those attributes to each of its instances
''' When a class is called:
1. New instance of the class is created
2. The __init__ method of the class is called with the new object as its first argument
name: self, along with any additional arguments provided.
3. The __init__ will be called automatically when new instance is created
    4. __init__ is called the constructor'''
# Object identity: every object has a unique identity
# The identity operator `is` tests whether two expressions evaluate to the same object
# Binding an object to a new name does not create a new object
# Methods are functions defined in the suite of a class statement
# self should always be bound to an instance of the Account class
# Invoking method - have access to the object via the self parameter
# - they can also access and manipulate the object's state
# Dot notation automatically supplies the first argument to a method
''' Dot notation accesses attributes of the instance or its class
<expression>.<name> <expression> Valid Python expression
<name> must be name within the class'''
# Attributes - named information of the objects
# Class attributes are "shared" across all instances of the class; they are attributes of the class, not of instances
class Account:
"""An account has a balance and holder.
All accounts share a common interest rate."""
interest = 0.02 # A class attribute
def __init__(self, account_holder):
self.balance = 0
self.holder = account_holder
def deposit(self, amount):
'''Add amount to balance'''
self.balance = self.balance + amount
return self.balance
def withdraw(self, amount):
        '''Subtract amount from balance if possible.'''
if amount > self.balance:
return 'Insufficient funds'
self.balance = self.balance - amount
return self.balance
Jim_Account = Account('Jim')
Edward_Account = Account('Edward')
# Object + Function = Bound Method
type(Account.deposit) # it is a function
type(Jim_Account.deposit) # it is a method
# A function: all arguments within parentheses
# Method: bound to its object, which is supplied automatically as the first argument
''' Assignment to Attributes
1. If the object is an instance, then assignment sets an instance attribute.
2. If the object is a class, then assignment sets a class attribute.'''
Jim_Account.interest = 0.08
Account.interest = 0.04
Edward_Account.interest
| 28.378947 | 100 | 0.690653 | 371 | 2,696 | 4.956873 | 0.345013 | 0.047852 | 0.021751 | 0.013051 | 0.069603 | 0.04894 | 0.04894 | 0.04894 | 0 | 0 | 0 | 0.007897 | 0.248516 | 2,696 | 94 | 101 | 28.680851 | 0.899803 | 0.437685 | 0 | 0.1 | 0 | 0 | 0.037293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
804a41b943270e181d9be77bf29f7d8d5707a582 | 2,054 | py | Python | p.py | jen-soft/tools | 5a7203caebc90061c47fabb8b9bdb5c0128d3b1e | [
"Apache-2.0"
] | 1 | 2019-05-03T14:51:49.000Z | 2019-05-03T14:51:49.000Z | p.py | jen-soft/tools | 5a7203caebc90061c47fabb8b9bdb5c0128d3b1e | [
"Apache-2.0"
] | null | null | null | p.py | jen-soft/tools | 5a7203caebc90061c47fabb8b9bdb5c0128d3b1e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""# - - -
data-edit: 2018-04-28(05:30)
name: Jen-Soft-Print (JSP)
author: jen-soft
email: jen.soft.master@gmail.com
license: Licensed under the Apache License, Version 2.0
http://www.apache.org/licenses/LICENSE-2.0
description: short tools for using in interactive-python
import p
"""
import math
print("""
# - - -
p.dir(obj)  # print all public attributes of an object
# p.dir( str, True, 7 )
""".strip())
class origin:
    dir = dir  # keep a reference to the builtin dir before it is shadowed below
def chunks(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
def dir(obj, show_private=False, count_columns=5):
# get object attributes
if show_private: # allow: "_name..."
obj_attr = [n for n in origin.dir(obj) if not n.startswith('__')]
else:
obj_attr = [n for n in origin.dir(obj) if not n.startswith('_')]
# split list attributes for columns
obj_attr.sort()
column_len = int(math.ceil(len(obj_attr) / count_columns))
columns = list(chunks(obj_attr, column_len))
for column_items in columns:
dummy_items = range(0, column_len - len(column_items))
column_items.extend(['' for d in dummy_items])
# text format columns (normalize width)
for i, column_items in enumerate(columns):
column_width = len(max(column_items, key=len))
formatted_items = []
for item in column_items:
indent = ' ' * (column_width - len(item))
formatted_items.append(item + indent)
columns[i] = formatted_items
# print result data
for line_data in list(zip(*columns)):
line = ' '.join(line_data)
print(line)
print('\n')
import json as _json
print("""
# - - -
p.json(obj) # print structured data: dict, list, set, tuple
# p.json({'id': 1, 'name': 'jen'})
""".strip())
def json(data):
print(_json.dumps(data, indent=2, ensure_ascii=False))
print("""
# - - -
""".strip()+'\n')
if __name__ == '__main__':
dir(str, True)
json({'id': 1, 'username': 'jen-soft'})
| 23.883721 | 73 | 0.601266 | 289 | 2,054 | 4.128028 | 0.404844 | 0.055323 | 0.016764 | 0.018441 | 0.070411 | 0.070411 | 0.070411 | 0.070411 | 0.070411 | 0.070411 | 0 | 0.015514 | 0.246835 | 2,054 | 85 | 74 | 24.164706 | 0.755656 | 0.218111 | 0 | 0.170213 | 0 | 0 | 0.163799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06383 | false | 0 | 0.042553 | 0 | 0.148936 | 0.170213 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
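The column-splitting step at the heart of `p.dir` above can be illustrated on its own: attribute names are cut into `count_columns` slices of equal length (the last one possibly shorter). `chunks` is reproduced here so the sketch is self-contained; the sample names are arbitrary.

```python
import math

def chunks(l, n):
    # yield successive n-sized slices of l
    for i in range(0, len(l), n):
        yield l[i:i + n]

names = ['alpha', 'beta', 'gamma', 'delta', 'echo', 'fox', 'golf']
count_columns = 3
column_len = int(math.ceil(len(names) / count_columns))  # rows per column
columns = list(chunks(names, column_len))

assert column_len == 3
assert columns == [['alpha', 'beta', 'gamma'], ['delta', 'echo', 'fox'], ['golf']]
```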
804d136e8e7766f94db77c9aeeb01f986bf70336 | 302 | py | Python | app/core/validators.py | petoandroide/recipe-app-api | bcd4376f4eee91c8ce1a6982afc7ca1be92eb9e9 | [
"MIT"
] | null | null | null | app/core/validators.py | petoandroide/recipe-app-api | bcd4376f4eee91c8ce1a6982afc7ca1be92eb9e9 | [
"MIT"
] | null | null | null | app/core/validators.py | petoandroide/recipe-app-api | bcd4376f4eee91c8ce1a6982afc7ca1be92eb9e9 | [
"MIT"
] | null | null | null | from django.core.exceptions import ValidationError
from django.core.validators import validate_email
def emailValidator(email):
'''Checks if an email is valid'''
try:
validate_email(str(email))
except ValidationError:
raise ValidationError(f'{email} is not a valid email') | 30.2 | 62 | 0.731788 | 38 | 302 | 5.763158 | 0.605263 | 0.091324 | 0.127854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18543 | 302 | 10 | 62 | 30.2 | 0.890244 | 0.089404 | 0 | 0 | 0 | 0 | 0.103704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
# --- ligeor/fitting/twogaussian.py (gecheline/ligeor, MIT) ---
import numpy as np
from ligeor.fitting.sampler import EmceeSampler
from ligeor.models.twogaussian import TwoGaussianModel
from ligeor.utils.lcutils import *


class EmceeSamplerTwoGaussian(EmceeSampler):

    def __init__(self, filename='', times=[], fluxes=[], sigmas=[],
                 period_init=1, t0_init=0, n_downsample=0, **kwargs):
        '''
        Initializes a sampler for the light curve stored in 'filename'
        with determined initial values for the period and t0.

        Parameters
        ----------
        filename: str
            Filename to load the raw light curve from
            (expected format: times, fluxes, sigmas)
        times: array-like
            Time points of the observations
        fluxes: array-like
            Observed fluxes
        sigmas: array-like
            Uncertainties of the observed fluxes
        period_init: float
            Initial value of the period (from code/triage)
        t0_init: float
            Initial value of the time of superior conjunction (t0)
        n_downsample: int
            Number of data points to skip in raw light curve for downsampling
        nbins: int
            Number of bins (applies to the computed final model).

        Keyword arguments are passed on to load_lc.

        Attributes
        ----------
        _period_init: float
            Initial value of the orbital period
        _t0_init: float
            Initial value of the time of superior conjunction
        '''
        super(EmceeSamplerTwoGaussian, self).__init__(
            filename, times, fluxes, sigmas,
            period_init, t0_init, n_downsample, **kwargs)
    def initial_fit(self):
        '''
        Runs an initial fit to the data with the chosen model
        (two-Gaussian or polyfit).

        Attributes
        ----------
        _func: str
            Name of the best fit model function.
        _model_params: array-like
            Labels of all model parameters
        _initial_fit: array-like
            Initial values of the model parameters from optimization.
        '''
        phases, fluxes_ph, sigmas_ph = phase_fold(self._times,
                                                  self._fluxes,
                                                  self._sigmas,
                                                  period=self._period_init,
                                                  t0=self._t0_init,
                                                  interval='05')

        twogModel = TwoGaussianModel(phases=phases,
                                     fluxes=fluxes_ph,
                                     sigmas=sigmas_ph)
        twogModel.fit()
        # self._twogModel_init = twogModel
        self._func = twogModel.best_fit['func']
        self._model_params = twogModel.best_fit['param_names']
        self._initial_fit = twogModel.best_fit['param_vals']
    def logprob(self, values):
        '''
        Computes the log-probability of the sample.

        Parameters
        ----------
        values: array-like
            period (for phase folding) + model values
        '''
        fmax = self._fluxes.max()
        fmin = self._fluxes.min()
        fdiff = fmax - fmin

        bounds = {
            'C': ((0.,), (fmax,)),
            'CE': ((0, 1e-6, -0.5), (fmax, fdiff, 0.5)),
            'CG': ((0., -0.5, 0., 0.), (fmax, 0.5, fdiff, 0.5)),
            'CGE': ((0., -0.5, 0., 0., 1e-6, -0.5),
                    (fmax, 0.5, fdiff, 0.5, fdiff, 0.5)),
            'CG12': ((0., -0.5, 0., 0., -0.5, 0., 0.),
                     (fmax, 0.5, fdiff, 0.5, 0.5, fdiff, 0.5)),
            'CG12E1': ((0., -0.5, 0., 0., -0.5, 0., 0., 1e-6),
                       (fmax, 0.5, fdiff, 0.5, 0.5, fdiff, 0.5, fdiff)),
            'CG12E2': ((0., -0.5, 0., 0., -0.5, 0., 0., 1e-6),
                       (fmax, 0.5, fdiff, 0.5, 0.5, fdiff, 0.5, fdiff))}

        period, *model_vals = values
        for i, param_val in enumerate(model_vals):
            if (param_val < bounds[self._func][0][i]
                    or param_val > bounds[self._func][1][i]):
                # raise Warning('out of prior', self._func,
                #               bounds[self._func][0][i],
                #               bounds[self._func][1][i], param_val)
                return (-np.inf, np.nan, np.nan, np.nan, np.nan, np.nan,
                        np.nan, np.nan, np.nan, np.nan, np.nan)

        # fold with period
        phases, fluxes_ph, sigmas_ph = phase_fold(self._times,
                                                  self._fluxes,
                                                  self._sigmas,
                                                  period=period,
                                                  t0=self._t0_init,
                                                  interval='05')

        # compute model with the selected twog function
        # TODO: review this part for potentially more efficient option
        twog = TwoGaussianModel(phases=phases,
                                fluxes=fluxes_ph,
                                sigmas=sigmas_ph)
        twog.fit(fit_funcs=[self._func], param_vals=[model_vals])
        eclipse_dir = twog.compute_eclipse_params()

        pos1, pos2 = (eclipse_dir['primary_position'],
                      eclipse_dir['secondary_position'])
        width1, width2 = (eclipse_dir['primary_width'],
                          eclipse_dir['secondary_width'])
        depth1, depth2 = (eclipse_dir['primary_depth'],
                          eclipse_dir['secondary_depth'])
        ecl1_area, ecl2_area = twog.eclipse_area[1], twog.eclipse_area[2]
        residuals_mean, residuals_stdev = compute_residuals_stdev(
            fluxes_ph, twog.model)

        logprob = -0.5 * (np.sum((fluxes_ph - twog.model)**2 / sigmas_ph**2))
        # print(logprob, pos1, width1, depth1, pos2, width2, depth2)
        return (logprob, pos1, width1, depth1, pos2, width2, depth2,
                ecl1_area, ecl2_area, residuals_mean, residuals_stdev)
    def compute_model(self, means, sigmas_low, sigmas_high, save_lc=True,
                      save_file='', show=False, failed=False):
        '''
        Computes the model parameter values from the sample.

        Parameters
        ----------
        means: array-like
            Mean values from the sample
        sigmas_low: array-like
            Standard deviation of samples < mean
        sigmas_high: array-like
            Standard deviation of samples > mean
        save_lc: bool
            If True, saves the model light curve to a file
        save_file: str
            File name to save the light curve to, if save_lc=True.
        show: bool
            If True, will display a plot of the model light curve.
        failed: bool
            If True, all model parameters are np.nan
        '''
        model_results = {'C': np.nan, 'mu1': np.nan, 'd1': np.nan,
                         'sigma1': np.nan, 'mu2': np.nan, 'd2': np.nan,
                         'sigma2': np.nan, 'Aell': np.nan, 'phi0': np.nan}
        model_results_err = {'C': np.nan, 'mu1': np.nan, 'd1': np.nan,
                             'sigma1': np.nan, 'mu2': np.nan, 'd2': np.nan,
                             'sigma2': np.nan, 'Aell': np.nan,
                             'phi0': np.nan}
        # results_str = '{}'.format(func)

        if failed:
            self._model_values = model_results
            self._model_values_errs = model_results_err
            self._chi2 = np.nan
        else:
            for mkey in model_results.keys():
                if mkey in self._model_params:
                    pind = self._model_params.index(mkey)
                    model_results[mkey] = means[pind + 1]
                    model_results_err[mkey] = np.max(
                        (sigmas_low[pind + 1], sigmas_high[pind + 1]))
                    # results_str += ',{},{}'.format(
                    #     model_results[mkey], model_results_err[mkey])
            self._model_values = model_results
            self._model_values_errs = model_results_err

            chi2 = np.nan
            phases_obs, fluxes_ph_obs, sigmas_ph_obs = phase_fold(
                self._times, self._fluxes, self._sigmas,
                period=self._period_mcmc['value'],
                t0=self._t0_init, interval='05')
            twog_func = getattr(TwoGaussianModel, self._func.lower())
            fluxes_model = twog_func(phases_obs, *means[1:])
            chi2 = -0.5 * (np.sum(
                (fluxes_ph_obs - fluxes_model)**2 / sigmas_ph_obs**2))

            if show:
                import matplotlib.pyplot as plt
                plt.plot(phases_obs, fluxes_ph_obs, 'k.')
                plt.plot(phases_obs, fluxes_model, 'r-')
                plt.show()
            if save_lc:
                np.savetxt(save_file,
                           np.array([phases_obs, fluxes_model]).T)
            self._chi2 = chi2
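The out-of-prior rejection in `logprob` above reduces to a per-parameter box check. A stdlib-only sketch of that pattern (the names and bounds here are illustrative, not ligeor's API):

```python
import math

# Hypothetical per-function prior bounds: (lowers, uppers), one per parameter.
BOUNDS = {'CE': ((0.0, 1e-6, -0.5), (1.0, 0.5, 0.5))}


def log_prior(func, model_vals):
    """Return 0.0 inside the box prior, -inf outside (flat box prior)."""
    lower, upper = BOUNDS[func]
    for val, lo, hi in zip(model_vals, lower, upper):
        if val < lo or val > hi:
            return -math.inf
    return 0.0
```

Returning `-inf` from the prior makes the MCMC sampler reject the proposed walker position without ever fitting the model, which is why the check runs before the (expensive) phase fold and fit.
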
# --- pygres/pygres.py (jorgeviz/pygres, MIT) ---
# -*- coding: utf-8 -*-
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
from .errors import PygresError
from .model import Model


class Pygres(object):
    conn = None
    curs = None
    config = None
    q = None

    def __init__(self, config, **kwargs):
        self.config = config
        # Kwargs
        self.autocommit = kwargs.get('autocommit', False)
        if not self.config:
            raise PygresError("Configuration variables missing",
                              'Missing vars in config')
        # Connection
        try:
            self.conn = psycopg2.connect(
                database=self.config['SQL_DB'],
                user=self.config['SQL_USER'],
                password=self.config['SQL_PASSWORD'],
                host=self.config['SQL_HOST'],
                port=self.config['SQL_PORT']
            )
            # Isolation level, connection with autocommit
            if self.autocommit:
                self.conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
            # Cursor
            self.cur = self.conn.cursor()
        except Exception:
            raise PygresError("Couldn't connect to Postgres", 'Missing')

    def close(self):
        self.conn.close()

    def model(self, table, pk, *initial_data, **kwargs):
        return Model(self, table, pk, *initial_data, **kwargs)

    def query(self, statement, values=[], commit=True):
        self.cur.execute(statement, values)
        self.q = self.cur.query
        if commit:
            self.conn.commit()
        return self

    def commit(self):
        self.conn.commit()
        return self

    def rollback(self):
        self.conn.rollback()
        return self

    def fetch(self):
        columns = [desc[0] for desc in self.cur.description]
        rows = self.cur.fetchall()
        rows_list = []
        for row in rows:
            row_dict = {}
            for i, col in enumerate(columns):
                row_dict[col] = row[i]
            rows_list.append(row_dict)
        return rows_list
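`fetch()` above just zips the column names from `cursor.description` with each row tuple. The reshaping can be sketched without a live database; the description tuples and rows below are made-up sample data mimicking psycopg2's cursor attributes:

```python
# Simulated cursor output: description yields per-column metadata tuples
# (name first), fetchall yields plain row tuples.
description = [('id',), ('name',)]
rows = [(1, 'jen'), (2, 'sam')]

columns = [desc[0] for desc in description]
rows_list = [dict(zip(columns, row)) for row in rows]
# rows_list == [{'id': 1, 'name': 'jen'}, {'id': 2, 'name': 'sam'}]
```
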
# --- cte/1.xml2txt/app.py (xiabo0816/zhuanli, Unlicense) ---
# -*- coding: utf-8 -*-
from lxml import etree
import argparse
import os
from tqdm import tqdm
import threading
import regex
from io import BytesIO
import zipfile


def get_args_parser():
    parser = argparse.ArgumentParser(description='XML application parser')
    parser.add_argument('-i', '--input', default='application',
                        type=str, help='input folder')
    parser.add_argument('-o', '--output', default='application_out',
                        type=str, help='output folder')
    parser.add_argument('-n', '--nfiles', default=5,
                        type=int, help='number of output files')
    return parser.parse_args()


# https://www.w3school.com.cn/xpath/xpath_syntax.asp
# https://lxml.de/
def run(filename):
    # file coding: UTF-8 with BOM
    try:
        root = etree.parse(BytesIO(open(filename, 'rb').read()),
                           etree.XMLParser(ns_clean=True)).getroot()
        claim1 = ''.join(root.xpath(
            '//cn-claims/claim[@num="1"]/claim-text/text()',
            namespaces=root.nsmap))
        claim2 = ''.join(root.xpath(
            '//cn-claims/claim[@num="2"]/claim-text/text()',
            namespaces=root.nsmap))
        return [claim1, claim2]
    except Exception:
        print('Error(1) Xml parsing, at file:' + filename)
        return False


def strB2Q(ustring):
    rstring = ''
    # https://www.regular-expressions.info/unicode.html
    ustring = regex.sub(
        r'[^\p{Han}\p{Letter}\p{Number}\p{Punctuation}]+', '', ustring)
    ustring = regex.sub(r'[—]+', '-', ustring)
    for uchar in ustring:
        inside_code = ord(uchar)
        # ( ) [ ] : ; -> their full-width equivalents
        if inside_code in (0x28, 0x29, 0x5B, 0x5D, 0x3A, 0x3B):
            inside_code += 65248
        rstring += chr(inside_code)
    return rstring


def cut_text(text, length):
    text_list = regex.findall(r'.{' + str(length) + r'}', text)
    text_list.append(text[(len(text_list) * length):])
    return text_list


def _sent_tokenize(parah):
    sents = [strB2Q(sent)
             for sent in regex.split(r'(。|!|\!|?|\?)', parah)]
    sents.append("")
    sents = ["".join(i) for i in zip(sents[0::2], sents[1::2])]
    results = []
    for sent in sents:
        if len(sent) > 0:
            results.extend(cut_text(sent, 500))
    results = [r.strip('\x5C') for r in results
               if not r == '' and not r == '\x5c' and not r == '。']
    return "\n".join(results)


def _t_writefile(input_folder, output_folder, inputfilelist, pbar, docid):
    print(input_folder, output_folder, inputfilelist, pbar, docid)
    with open(os.path.join(output_folder, str(docid)) + '_out', 'w',
              encoding='UTF-8') as fout:
        for item in inputfilelist:
            text = run(item)
            if not text:
                # skip files that failed to parse
                pbar.update(1)
                continue
            publicid = 'null'
            time = 'null'
            ipc = 'null'
            html = ''
            html += '<#>%d PublicId=%s;Cat=%s;Time=%s\n' % (
                docid, publicid, _sent_tokenize(ipc), time)
            html += '<p>claim1\n%s\n</p>\n' % (
                _sent_tokenize(text[0]).strip())
            html += '<p>claim2\n%s\n</p>\n' % (
                _sent_tokenize(text[1]).strip())
            html += '</#>\n'
            fout.write(html)
            docid += 1
            pbar.update(1)


# worker thread: each one writes a single output file
class writeFileThread(threading.Thread):
    def __init__(self, threadID, input_folder, output_folder,
                 inputfilelist, pbar, docid):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.input_folder = input_folder
        self.output_folder = output_folder
        self.inputfilelist = inputfilelist
        self.docid = docid
        self.pbar = pbar

    def run(self):
        _t_writefile(self.input_folder, self.output_folder,
                     self.inputfilelist, self.pbar, self.docid)


def _listdir(rootDir, affix):
    names = []
    for filename in os.listdir(rootDir):
        pathname = os.path.join(rootDir, filename)
        if os.path.isfile(pathname):
            if filename.lower().endswith(affix):
                names.append(pathname)
        else:
            names.extend(_listdir(pathname, affix))
    return names


def _unzip_folder(path, prefix):
    filelist = _listdir(path, 'zip')
    with tqdm(total=len(filelist)) as pbar:
        for f in filelist:
            if os.path.getsize(f) and os.path.basename(f).startswith(prefix):
                zipFile = zipfile.ZipFile(f)
                zipFile.extractall(os.path.dirname(f))
            pbar.update(1)


if __name__ == '__main__':
    args = get_args_parser()
    print(args)
    _unzip_folder(args.input, 'DA')
    filelist = _listdir(args.input, 'xml')
    filelist = [i for i in filelist
                if os.path.dirname(i).endswith('100001-')
                and os.path.basename(i)]
    print(filelist)
    print('total files: ' + str(len(filelist)))
    n_thread = args.nfiles if len(filelist) > args.nfiles else len(filelist)
    with tqdm(total=len(filelist)) as pbar:
        threads = []
        # create the worker threads
        length = len(filelist)
        size = int(len(filelist) / n_thread)
        for i in range(0, length, size):
            threads.append(writeFileThread(1, args.input, args.output,
                                           filelist[i:i + size], pbar, i))
        # start the threads, then wait for all of them to finish
        for t in threads:
            t.start()
        for t in threads:
            t.join()
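`cut_text` above chunks a sentence into fixed-length pieces plus a trailing remainder. The same logic works with only the stdlib `re` module; this sketch uses a short length for readability (the real code uses 500):

```python
import re


def cut_text(text, length):
    # Full chunks of exactly `length` characters...
    pieces = re.findall(r'.{%d}' % length, text)
    # ...plus whatever is left over (may be an empty string).
    pieces.append(text[len(pieces) * length:])
    return pieces


cut_text('abcdefg', 3)  # -> ['abc', 'def', 'g']
```

Note that when the text length is an exact multiple of `length`, the remainder appended is an empty string, which the caller (`_sent_tokenize`) later filters out.
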
# --- tests/correctly_installed.py (lifengjin/transition-amr-parser, Apache-2.0) ---
import torch
import subprocess
from torch.utils.cpp_extension import CUDAExtension


def check_cuda_torch_binary_vs_bare_metal():
    # command line CUDA
    cuda_dir = torch.utils.cpp_extension.CUDA_HOME
    cuda_call = [cuda_dir + "/bin/nvcc", "-V"]
    raw_output = subprocess.check_output(cuda_call, universal_newlines=True)
    output = raw_output.split()
    release_idx = output.index("release") + 1
    release = output[release_idx].split(".")
    bare_metal_major = release[0]
    bare_metal_minor = release[1][0]

    # torch compilation CUDA
    torch_binary_major = torch.version.cuda.split(".")[0]
    torch_binary_minor = torch.version.cuda.split(".")[1]

    if ((bare_metal_major != torch_binary_major)
            or (bare_metal_minor != torch_binary_minor)):
        print(
            "Pytorch binaries were compiled with Cuda {} but binary {} is {}"
            .format(torch.version.cuda, cuda_dir + "/bin/nvcc",
                    output[release_idx])
        )


if __name__ == '__main__':
    # Pytorch and CUDA
    print()
    print(f'pytorch {torch.__version__}')
    if torch.cuda.is_available():
        print(f'cuda {torch.version.cuda}')
        # happens when CUDA is misconfigured
        assert torch.cuda.device_count(), "0 GPUs found"
        try:
            import apex
            print("Apex installed")
        except ImportError:
            print("Apex not installed")
        check_cuda_torch_binary_vs_bare_metal()
        if torch.cuda.get_device_capability(0)[0] < 7:
            print("GPU wont support --fp")

    # fairseq
    from transition_amr_parser.roberta_utils import \
        extract_features_aligned_to_words_batched
    import fairseq
    print(f'fairseq {fairseq.__version__}')

    # spacy
    import spacy
    print(f'spacy {spacy.__version__}')

    # If we get here we passed
    print('[\033[92mOK\033[0m] correctly installed\n')
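The version comparison above hinges on tokenizing `nvcc -V` output and picking the word after `release`. A stdlib sketch over a canned string (the sample text mimics, but is not, real nvcc output):

```python
sample = ("nvcc: NVIDIA (R) Cuda compiler driver\n"
          "Cuda compilation tools, release 11.3, V11.3.109")

tokens = sample.split()
# The token after "release" is "11.3," (trailing comma included),
# which is why only the first character of the minor part is kept.
release = tokens[tokens.index("release") + 1].split(".")
major, minor = release[0], release[1][0]
# major == '11', minor == '3'
```
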
# --- DeblurGANv2/project_lib/losses.py (gcinbis/deep-generative-models-spring20, MIT) ---
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
import random


class ScoreSamples():
    """
    Keeps the scores in lists of a predefined size to take inner
    expectations in the discriminator and generator adversarial losses.
    """
    def __init__(self, maxSize):
        self.score_b_List = list()
        self.score_s_List = list()
        self.maxSize = maxSize
        self.numsamples = 0

    def addSamp(self, scores_b_, scores_s_):
        scores_b = torch.unsqueeze(scores_b_, 0)
        scores_s = torch.unsqueeze(scores_s_, 0)
        if self.numsamples < self.maxSize:
            self.score_b_List.append(scores_b)
            self.score_s_List.append(scores_s)
            self.numsamples = self.numsamples + 1
        else:
            self.score_b_List.pop(0)
            self.score_b_List.append(scores_b)
            self.score_s_List.pop(0)
            self.score_s_List.append(scores_s)

    def getSamp(self):
        if self.numsamples == self.maxSize:
            samples_b = random.sample(self.score_b_List, self.maxSize)
            samples_s = random.sample(self.score_s_List, self.maxSize)
        else:
            samples_b = self.score_b_List
            samples_s = self.score_s_List
        samples_s = torch.cat(samples_s).detach()
        samples_b = torch.cat(samples_b).detach()
        return samples_s, samples_b


class VGGContent(nn.Module):
    """
    Returns the layers of pretrained VGG19 until the relu4_4 layer.
    These network outputs are used to compare the content between
    deblurred and sharp (real) images as proposed in:
    Johnson et al. "Perceptual losses for real-time style transfer and
    super-resolution." (ECCV 2016)
    """
    def __init__(self):
        super(VGGContent, self).__init__()

    def get_net(self):
        # Get the pretrained VGG19 model
        vgg19 = models.vgg19(pretrained=True)
        vgg19 = vgg19.cuda()
        # We need only the leading feature layers, up to the relu4_3 output
        cont_net = nn.Sequential(*vgg19.features[0:25])
        cont_net = cont_net.cuda()
        cont_net = cont_net.eval()
        return cont_net


def discriminator_loss(sharp_scores, deblur_scores, SampleScores):
    """
    Computes the DoubleScale RaGAN-LS discriminator loss.
    LRaLSGAN definition:
        Ex[(D(x) - Ez[D(G(z))] - 1)^2] + Ez[(D(G(z)) - Ex[D(x)] + 1)^2]
    """
    # Fake (deblurred) images generated by the generator are scored by the
    # discriminator:
    # deblur_scores = GANmodel.forward(deblur_images.detach())
    # Real (sharp) images are scored by the discriminator:
    # sharp_scores = GANmodel.forward(sharp_images)
    # Compute the discriminator loss using the LRaLSGAN formulation
    SampleScores.addSamp(deblur_scores, sharp_scores)
    sharp_samples, deblur_samples = SampleScores.getSamp()
    lossDisc = (
        torch.mean((sharp_scores - torch.mean(deblur_samples) - 1).pow(2))
        + torch.mean((deblur_scores - torch.mean(sharp_samples) + 1).pow(2)))
    return lossDisc


def perceptual_loss(sharp_images, deblur_images):
    """
    Computes the perceptual loss to compare the reconstructed (deblurred)
    and the original (sharp) images.
    """
    # Measures the L2 distance between generated and original
    loss = nn.MSELoss()
    lossP = loss(sharp_images, deblur_images)
    return lossP


def content_loss(sharp_images, deblur_images, cont_net):
    """
    Computes the content loss to compare the reconstructed (deblurred)
    and the original (sharp) images.
    Takes the output feature maps of the relu4_3 layer of pretrained VGG19
    to compare the content between images as proposed in:
    Johnson et al. "Perceptual losses for real-time style transfer and
    super-resolution." (ECCV 2016)
    """
    # Torchvision models documentation: all pre-trained models expect input
    # images normalized in the same way; images have to be loaded into a
    # range of [0, 1] and then normalized using mean=[0.485, 0.456, 0.406]
    # and std=[0.229, 0.224, 0.225].
    deblur_images = (deblur_images + 1) * 0.5
    sharp_images = (sharp_images + 1) * 0.5
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    deblur_images = normalize(deblur_images)
    sharp_images = normalize(sharp_images)
    content_deblur = cont_net(deblur_images)
    content_sharp = cont_net(sharp_images)
    content_sharp = content_sharp.detach()
    loss = nn.MSELoss()
    lossC = torch.mean(loss(content_deblur, content_sharp))
    return lossC


def generator_loss_adv(sharp_scores, deblur_scores, SampleScores):
    """
    Computes the DoubleScale RaGAN-LS generator adversarial loss.
    """
    # Paper: "The relativistic discriminator: a key element missing from
    # standard GAN." arXiv preprint arXiv:1807.00734, 2018.
    # LRaLSGAN definition for the generator:
    #     Ez[(D(G(z)) - Ex[D(x)] - 1)^2] + Ex[(D(x) - Ez[D(G(z))] + 1)^2]
    sharp_samples, deblur_samples = SampleScores.getSamp()
    Ladv = (
        torch.mean((deblur_scores - torch.mean(sharp_samples) - 1).pow(2))
        + torch.mean((sharp_scores - torch.mean(deblur_samples) + 1).pow(2)))
    lossGenAdv = 0.01 * Ladv
    return lossGenAdv


def generator_loss_cont(sharp_images, deblur_images, cont_net):
    """
    Computes the content loss + perceptual loss for the generator.
    """
    # Perceptual loss at the output of the generator:
    # pixel-space loss LP, e.g. the simplest L2 distance
    Lp = perceptual_loss(sharp_images, deblur_images)
    # Content loss at the output of the generator:
    # in contrast to the L2, it computes the Euclidean loss
    # on the VGG19 conv3_3 feature maps.
    Lx = content_loss(sharp_images, deblur_images, cont_net)
    # Overall generator loss = 0.5*Lp + 0.006*Lx + 0.01*Ladv
    lossGenCont = 0.5 * Lp + 0.006 * Lx
    return lossGenCont
# --- expense/expense_entry.py (thinkstack-co/ConnectPyse, MIT) ---
from ..cw_model import CWModel
class ExpenseEntry(CWModel):

    def __init__(self, json_dict=None):
        self.id = None                  # (Integer)
        self.company = None             # **(CompanyReference)
        self.chargeToId = None          # (Integer)
        self.chargeToType = None        # **(Enum)
        self.type = None                # *(ExpenseTypeReference)
        self.member = None              # (MemberReference)
        self.paymentMethod = None       # (PaymentMethodReference)
        self.classification = None      # (ClassificationReference)
        self.amount = None              # *(Number)
        self.billableOption = None      # *(Enum)
        self.date = None                # *(String)
        self.locationId = None          # (Integer)
        self.businessUnitId = None      # (Integer)
        self.notes = None               # (String)
        self.agreement = None           # (AgreementReference)
        self.invoiceAmount = None       # (Number)
        self.taxes = None               # (ExpenseTax[])
        self.invoice = None             # (InvoiceReference)
        self._info = None               # (Metadata)

        # initialize object with json dict
        super().__init__(json_dict)
# --- cubework/module/_entry_module.py (kurisusnowdeng/Cubework, Apache-2.0) ---
import torch.nn as nn
class CubeModule(nn.Module):

    def __init__(self, module: nn.Module, **kwargs):
        super().__init__()
        # copy values
        self.__dict__ = module.__dict__.copy()
        # copy methods
        for name, attr in module.__class__.__dict__.items():
            if name not in ["__init__", "forward"] and callable(attr):
                setattr(self, name, getattr(module, name))
        self._forward_func = module.forward
        for k, v in kwargs.items():
            setattr(self, k, v)

    def forward(self, *args):
        return self._forward_func(*args)
# --- horizomer/benchmark/reformat_input.py (biocore/horizomer, BSD-3-Clause) ---
#!/usr/bin/env python
# ----------------------------------------------------------------------------
# Copyright (c) 2015--, The Horizomer Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
# ----------------------------------------------------------------------------
"""
Reformat input files to format accepted by given HGT tool
=========================================================
"""
import click
from os.path import join
from skbio import TreeNode, TabularMSA, Sequence, Protein, DNA
from collections import OrderedDict
def join_trees(gene_tree,
               species_tree,
               output_tree_fp):
    """ Concatenate Newick trees into one file (species followed by gene).

    Parameters
    ----------
    gene_tree: skbio.TreeNode
        TreeNode instance for gene tree
    species_tree: skbio.TreeNode
        TreeNode instance for species tree
    output_tree_fp: string
        file path to output species and gene tree

    See Also
    --------
    skbio.TreeNode
    """
    with open(output_tree_fp, 'w') as output_tree_f:
        output_tree_f.write(
            "%s\n%s\n" % (str(species_tree)[:-1], str(gene_tree)[:-1]))
def trim_gene_tree_leaves(gene_tree):
    """ Keep only string before first '_' delimiter in node ID.

    Parameters
    ----------
    gene_tree: skbio.TreeNode
        TreeNode instance

    See Also
    --------
    skbio.TreeNode

    Notes
    -----
    This function will keep only the word before the first '_' in the
    complete node ID. In ALF simulated sequences, the genes are labeled
    as "SPECIES_GENE". Most phylogenetic reconciliation tools
    require the associations between species leaves and gene leaves to
    be equal, therefore needing to remove the _GENENAME part in the gene
    tree.
    """
    for node in gene_tree.tips():
        node.name = node.name.split('_')[0]
def species_gene_mapping(gene_tree,
                         species_tree):
    """ Find the association between the leaves in species and gene trees.

    Parameters
    ----------
    gene_tree: skbio.TreeNode
        TreeNode instance for gene tree
    species_tree: skbio.TreeNode
        TreeNode instance for species tree

    Returns
    -------
    mapping_leaves_t: OrderedDict
        Mapping between the species tree leaves and the gene tree leaves;
        species tips are the keys and gene tips are the values

    See Also
    --------
    skbio.TreeNode

    Notes
    -----
    Given the label format "SPECIES" for the species leaves and
    "SPECIES_GENE" in the gene leaves, report the associations between all
    species and gene leaves. Only one instance of the '_' delimiter is
    allowed in the gene leaves and this is used as a separator between the
    species name and the gene name.

    Ex.
    mapping = {"SE001": ["SE001_1", "SE001_2"],
               "SE002": ["SE002_1"]}
    """
    mapping_leaves = {}
    for node in species_tree.tips():
        if node.name not in mapping_leaves:
            mapping_leaves[node.name] = []
        else:
            raise ValueError(
                "Species tree leaves must be uniquely labeled: %s"
                % node.name)
    for node in gene_tree.tips():
        species, gene = node.name.split('_')
        if species in mapping_leaves:
            mapping_leaves[species].append("%s_%s" % (species, gene))
        else:
            raise ValueError(
                "Species %s does not exist in the species tree" % species)
    return OrderedDict(sorted(mapping_leaves.items(),
                              key=lambda x: x[1], reverse=True))
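The mapping logic above only needs leaf names, so it can be exercised without skbio. A stdlib sketch over plain name lists, with made-up labels in the ALF "SPECIES_GENE" format:

```python
from collections import OrderedDict

species_leaves = ["SE001", "SE002"]
gene_leaves = ["SE001_1", "SE002_1", "SE001_2"]

# one (initially empty) gene list per species leaf
mapping = {name: [] for name in species_leaves}
for leaf in gene_leaves:
    species, gene = leaf.split('_')
    mapping[species].append("%s_%s" % (species, gene))

# same ordering step as the function above: sort by gene list, descending
mapping = OrderedDict(sorted(mapping.items(),
                             key=lambda x: x[1], reverse=True))
# mapping == OrderedDict([('SE002', ['SE002_1']),
#                         ('SE001', ['SE001_1', 'SE001_2'])])
```
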
def remove_branch_lengths(tree):
    """ Set branch lengths to None.

    Parameters
    ----------
    tree: skbio.TreeNode
        TreeNode instance

    See Also
    --------
    skbio.TreeNode
    """
    for node in tree.postorder():
        node.length = None


def id_mapper(ids):
    mapping = {}
    for _id in ids:
        mapping[_id] = _id.split('/')[0]
    return mapping
def reformat_rangerdtl(gene_tree,
species_tree,
output_tree_fp):
""" Reformat input trees to the format accepted by RANGER-DTL.
Parameters
----------
gene_tree: skbio.TreeNode
TreeNode instance for gene tree
species_tree: skbio.TreeNode
TreeNode instance for species tree
output_tree_fp: string
file path to output trees (species followed by gene)
See Also
--------
skbio.TreeNode
Notes
-----
The species name in the leaves of species and gene trees must be equal.
For multiple genes from the same species, the format "SPECIES_GENE" is
acceptable in the gene trees.
"""
remove_branch_lengths(tree=gene_tree)
remove_branch_lengths(tree=species_tree)
join_trees(gene_tree,
species_tree,
output_tree_fp)
def reformat_trex(gene_tree,
species_tree,
output_tree_fp):
""" Reformat input trees to the format accepted by T-REX.
Parameters
----------
gene_tree: skbio.TreeNode
TreeNode instance for gene tree
species_tree: skbio.TreeNode
TreeNode instance for species tree
output_tree_fp: string
file path to output trees (species followed by gene)
See Also
--------
skbio.TreeNode
Notes
-----
Binary trees only, leaves of species and gene trees must have equal
names.
"""
# trim gene tree leaves to exclude '_GENENAME' (if exists)
trim_gene_tree_leaves(gene_tree)
# join species and gene tree into one file
join_trees(gene_tree,
species_tree,
output_tree_fp)
def reformat_riatahgt(gene_tree,
species_tree,
output_tree_fp):
""" Reformat input trees to the format accepted by RIATA-HGT (PhyloNet).
Parameters
----------
gene_tree: skbio.TreeNode
TreeNode instance for gene tree
species_tree: skbio.TreeNode
TreeNode instance for species tree
output_tree_fp: string
file path to output trees (Nexus format)
See Also
--------
skbio.TreeNode
Notes
-----
Input to RIATA-HGT is a Nexus file. The species and gene trees must
have the same number of leaves, with matching leaf names.
"""
nexus_file = """#NEXUS
BEGIN TREES;
Tree speciesTree = %s
Tree geneTree = %s
END;
BEGIN PHYLONET;
RIATAHGT speciesTree {geneTree};
END;
"""
# trim gene tree leaves to exclude '_GENENAME' (if exists)
trim_gene_tree_leaves(gene_tree)
with open(output_tree_fp, 'w') as output_tree_f:
output_tree_f.write(nexus_file % (str(species_tree)[:-1],
str(gene_tree)[:-1]))
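The Nexus template above can be checked in isolation with plain Newick strings (the toy trees below are made up; str(TreeNode) ends with a newline, hence the [:-1] trimming in the code, while plain strings need no trimming):

```python
nexus_template = """#NEXUS
BEGIN TREES;
Tree speciesTree = %s
Tree geneTree = %s
END;
BEGIN PHYLONET;
RIATAHGT speciesTree {geneTree};
END;
"""

species_newick = "((A,B),C);"
gene_newick = "((A,C),B);"
rendered = nexus_template % (species_newick, gene_newick)
```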
def reformat_jane4(gene_tree,
species_tree,
output_tree_fp):
""" Reformat input trees to the format accepted by Jane4.
Parameters
----------
gene_tree: skbio.TreeNode
TreeNode instance for gene tree
species_tree: skbio.TreeNode
TreeNode instance for species tree
output_tree_fp: string
file path to output trees (Nexus format)
See Also
--------
skbio.TreeNode
Notes
-----
Input to Jane4 is a Nexus file; the trees cannot contain branch
lengths, and the species/gene leaf mapping is required.
"""
nexus_file = """#NEXUS
begin host;
tree host = %s
endblock;
begin parasite;
tree parasite = %s
endblock;
begin distribution;
Range %s;
endblock;
"""
# create a mapping between the species and gene tree leaves
mapping_dict = species_gene_mapping(gene_tree=gene_tree,
species_tree=species_tree)
remove_branch_lengths(tree=gene_tree)
remove_branch_lengths(tree=species_tree)
mapping_str = ""
for species in mapping_dict:
for gene in mapping_dict[species]:
mapping_str = "%s%s:%s, " % (mapping_str, gene, species)
with open(output_tree_fp, 'w') as output_tree_f:
output_tree_f.write(nexus_file % (str(species_tree),
str(gene_tree),
mapping_str[:-2]))
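The distribution block Jane4 reads is a comma-separated list of gene:species pairs; the accumulate-and-trim loop above is equivalent to building the pairs and joining them (the labels below are toy data):

```python
mapping_dict = {"SE001": ["SE001_1", "SE001_2"], "SE002": ["SE002_1"]}
pairs = []
for species, genes in mapping_dict.items():
    for gene in genes:
        pairs.append("%s:%s" % (gene, species))
# joining avoids the trailing ", " that mapping_str[:-2] strips
range_line = ", ".join(pairs)
```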
def reformat_treepuzzle(gene_tree,
species_tree,
gene_msa_fa_fp,
output_tree_fp,
output_msa_phy_fp):
""" Reformat input trees to the format accepted by Tree-Puzzle.
Parameters
----------
gene_tree: skbio.TreeNode
TreeNode instance for gene tree
species_tree: skbio.TreeNode
TreeNode instance for species tree
gene_msa_fa_fp: string
file path to gene alignments in FASTA format
output_tree_fp: string
file path to output trees (Nexus format)
output_msa_phy_fp: string
file path to output MSA in PHYLIP format
See Also
--------
skbio.TreeNode
"""
# remove the root branch length (present in trees output by ALF)
for node in gene_tree.postorder():
if node.is_root():
node.length = None
for node in species_tree.postorder():
if node.is_root():
node.length = None
# trim gene tree leaves to exclude '_GENENAME' (if exists)
trim_gene_tree_leaves(gene_tree)
join_trees(gene_tree,
species_tree,
output_tree_fp)
# trim FASTA sequence labels to exclude '/GENENAME' (if exists)
msa_fa = TabularMSA.read(gene_msa_fa_fp, constructor=Protein)
msa_fa.reassign_index(minter='id')
mapping = id_mapper(msa_fa.index)
msa_fa.reassign_index(mapping=mapping)
msa_fa.write(output_msa_phy_fp, format='phylip')
def _merge_genbank_seqs(genbank_fp):
""" Merge one to multiple sequences in a GenBank file into one.
Parameters
----------
genbank_fp: string
file path to genome in GenBank format
Returns
-------
tuple of (
skbio.Sequence,
Genome sequence, genes and metadata
dict of { list of [ string, int, int, string ] }
Gene name : translation, start, end, and strand
)
"""
loci = []
nucl_seq = ''
genes = {}
nseq = 0 # number of nucleotide sequences
with open(genbank_fp, 'r') as input_f:
for line in input_f:
if line.startswith('//'):
nseq += 1
abs_pos = 0 # absolute position in concatenated nucleotide sequence
for i in range(nseq):
gb = Sequence.read(genbank_fp, seq_num=i+1, format='genbank')
locus_name = gb.metadata['LOCUS']['locus_name']
size = gb.metadata['LOCUS']['size']
loci.append([locus_name, size])
nucl_seq += str(gb)
for feature in gb.interval_metadata.query(metadata={'type': 'CDS'}):
m = feature.metadata
if 'protein_id' in m:
protein_id = m['protein_id'].replace('\"', '')
if protein_id not in genes:
translation = m['translation'].replace(' ', '') \
.replace('\"', '')
strand = m['strand']
start = feature.bounds[0][0] + abs_pos + 1
end = feature.bounds[0][1] + abs_pos
genes[protein_id] = [translation, start, end, strand]
abs_pos += int(size)
gb = DNA(nucl_seq)
# generate mock metadata for the merged sequence
gb.metadata['LOCUS'] = {'locus_name': 'locus001', 'size': len(nucl_seq),
'unit': 'bp', 'shape': 'circular',
'division': 'CON', 'mol_type': 'DNA',
'date': '01-JAN-1900'}
gb.metadata['id'] = 'locus001'
gid = 1 # assign an incremental integer to the current gene
gb.interval_metadata._intervals = []
for (gene, l) in sorted(genes.items(), key=lambda x: x[1][1]):
# generate "gene" and "CDS" records for each protein-coding gene
location = str(l[1]) + '..' + str(l[2]) # start and end coordinates
if l[3] == '-': # negative strand
location = 'complement(' + location + ')'
feature = {'type': 'gene', 'locus_tag': 'gene' + str(gid),
'__location': location}
gb.interval_metadata.add([(l[1] - 1, l[2])], metadata=feature)
feature = {'type': 'CDS', 'locus_tag': 'gene' + str(gid),
'__location': location, 'protein_id': gene,
'translation': l[0]}
gb.interval_metadata.add([(l[1] - 1, l[2])], metadata=feature)
gid += 1
return (gb, genes)
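The coordinate bookkeeping above (0-based, half-open feature bounds shifted by the lengths of preceding loci into 1-based, inclusive GenBank coordinates) can be sketched independently; the locus sizes below are invented:

```python
def absolute_coords(bounds, locus_sizes, locus_index):
    # bounds is a 0-based, half-open (start, end) pair within one locus;
    # offset by the preceding loci and convert to the 1-based, inclusive
    # convention GenBank uses
    offset = sum(locus_sizes[:locus_index])
    start, end = bounds
    return start + offset + 1, end + offset

# a gene at 0-based [10, 40) on the second locus, after a 1000 bp first locus
coords = absolute_coords((10, 40), [1000, 500], 1)
```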
def reformat_egid(genbank_fp,
output_dir):
""" Reformat input genome to the formats accepted by EGID.
Parameters
----------
genbank_fp: string
file path to genome in GenBank format
output_dir: string
output directory path
Notes
-----
Inputs to EGID are five legacy NCBI-standard files: gbk, fna, faa, ffn
and ptt.
"""
(gb, genes) = _merge_genbank_seqs(genbank_fp)
DNA.write(gb, join(output_dir, 'id.fna'), format='fasta')
DNA.write(gb, join(output_dir, 'id.gbk'), format='genbank')
nucl_seq = str(gb)
output_f = {}
for x in ('faa', 'ffn', 'ptt'):
output_f[x] = open(join(output_dir, 'id.' + x), 'w')
output_f['ptt'].write('locus001\n' + str(len(genes)) + ' proteins\n')
# a ptt file contains the following columns:
fields = ('Location', 'Strand', 'Length', 'PID', 'Gene', 'Synonym',
'Code', 'COG', 'Product')
output_f['ptt'].write('\t'.join(fields) + '\n')
gid = 1 # assign an incremental integer to the current gene
for (gene, l) in sorted(genes.items(), key=lambda x: x[1][1]):
output_f['faa'].write('>' + gene + '\n' + l[0] + '\n')
output_f['ptt'].write(str(l[1]) + '..' + str(l[2]) + '\t' +
l[3] + '\t' + str(len(l[0])) + '\t' +
str(gid) + '\t-\tgene' + str(gid) +
'\t-\t-\t-\n')
if l[3] == '+': # positive strand
output_f['ffn'].write('>locus001:' + str(l[1]) + '-' +
str(l[2]) + '\n' +
nucl_seq[l[1]-1:l[2]] + '\n')
else: # negative strand (reverse complement)
rc_seq = str(DNA(nucl_seq[l[1]-1:l[2]]).reverse_complement())
output_f['ffn'].write('>locus001:c' + str(l[2]) + '-' +
str(l[1]) + '\n' + rc_seq + '\n')
gid += 1
for x in output_f:
output_f[x].close()
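In the ffn output above, negative-strand genes are written as the reverse complement under a 'c'-prefixed coordinate header; a dependency-free sketch of that rule (the sequence and coordinates are made up):

```python
def reverse_complement(seq):
    comp = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(comp[base] for base in reversed(seq))

def ffn_entry(nucl_seq, start, end, strand):
    # start/end are 1-based inclusive; negative-strand genes are emitted
    # reverse-complemented under a 'c'-prefixed coordinate header
    sub = nucl_seq[start - 1:end]
    if strand == '+':
        return '>locus001:%d-%d\n%s\n' % (start, end, sub)
    return '>locus001:c%d-%d\n%s\n' % (end, start, reverse_complement(sub))

entry = ffn_entry('ATGCCCGGGTTT', 1, 3, '-')
```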
def reformat_genemark(genbank_fp,
output_dir):
""" Reformat input genome to the formats accepted by GeneMark.
Parameters
----------
genbank_fp: string
file path to genome in GenBank format
output_dir: string
output directory path
Notes
-----
GeneMark's acceptable input file format is FASTA (genome sequence).
"""
gb = _merge_genbank_seqs(genbank_fp)[0]
DNA.write(gb, join(output_dir, 'id.fna'), format='fasta')
DNA.write(gb, join(output_dir, 'id.gbk'), format='genbank')
@click.command()
@click.option('--gene-tree-fp', required=False,
type=click.Path(resolve_path=True, readable=True, exists=True,
file_okay=True),
help='Gene tree in Newick format')
@click.option('--species-tree-fp', required=False,
type=click.Path(resolve_path=True, readable=True, exists=True,
file_okay=True),
help='Species tree in Newick format')
@click.option('--gene-msa-fa-fp', required=False,
type=click.Path(resolve_path=True, readable=True, exists=True,
file_okay=True),
help='MSA of genes in FASTA format')
@click.option('--genbank-fp', required=False,
type=click.Path(resolve_path=True, readable=True, exists=True,
file_okay=True),
help='Genome in GenBank format')
@click.option('--output-tree-fp', required=False,
type=click.Path(resolve_path=True, readable=True, exists=False,
file_okay=True),
help='Output formatted species and gene tree')
@click.option('--output-msa-phy-fp', required=False,
type=click.Path(resolve_path=True, readable=True, exists=False,
file_okay=True),
help='Output MSA in PHYLIP format')
@click.option('--output-dir', required=False,
type=click.Path(resolve_path=True, readable=True, exists=False),
help='Output directory path')
@click.option('--method', required=True,
type=click.Choice(['trex', 'ranger-dtl',
'riata-hgt', 'consel',
'darkhorse', 'hgtector',
'genemark', 'egid',
'jane4', 'wn-svm',
'tree-puzzle']),
help='The method to be used for HGT detection')
def _main(gene_tree_fp,
species_tree_fp,
gene_msa_fa_fp,
genbank_fp,
output_tree_fp,
output_msa_phy_fp,
output_dir,
method):
""" Reformat input files to format accepted by various HGT tools.
For phylogenetic methods, a species tree and a gene tree are mandatory.
Species tree can be multifurcating, however will be converted to
bifurcating trees for software that require them. Leaf labels of
species tree and gene tree must match, however the label
SPECIES_GENE is acceptable for multiple genes in the gene
tree. Leaf labels must also be at most 10 characters long (for
PHYLIP manipulations).
For compositional methods, a GenBank file containing both the genome
sequence and the coordinates of its gene regions is required. Draft
genomes (multiple sequences) are acceptable.
"""
# TODO: add function to check whether tree is multifurcating and the
# labeling is correct
gene_tree = TreeNode.read(gene_tree_fp, format='newick') \
if gene_tree_fp is not None else None
species_tree = TreeNode.read(species_tree_fp, format='newick') \
if species_tree_fp is not None else None
if method in ('ranger-dtl', 'trex', 'riata-hgt', 'jane4'):
# dispatch via an explicit mapping instead of eval()
reformatters = {'ranger-dtl': reformat_rangerdtl,
'trex': reformat_trex,
'riata-hgt': reformat_riatahgt,
'jane4': reformat_jane4}
reformatters[method](
gene_tree=gene_tree,
species_tree=species_tree,
output_tree_fp=output_tree_fp)
elif method == 'tree-puzzle':
reformat_treepuzzle(
gene_tree=gene_tree,
species_tree=species_tree,
gene_msa_fa_fp=gene_msa_fa_fp,
output_tree_fp=output_tree_fp,
output_msa_phy_fp=output_msa_phy_fp)
elif method == 'egid':
reformat_egid(
genbank_fp=genbank_fp,
output_dir=output_dir)
elif method == 'genemark':
reformat_genemark(
genbank_fp=genbank_fp,
output_dir=output_dir)
if __name__ == "__main__":
_main()
| 33.834507 | 79 | 0.584972 | 2,374 | 19,218 | 4.5754 | 0.149115 | 0.04861 | 0.034524 | 0.034984 | 0.484994 | 0.431965 | 0.410606 | 0.381882 | 0.359602 | 0.32471 | 0 | 0.007543 | 0.296389 | 19,218 | 567 | 80 | 33.89418 | 0.79574 | 0.364242 | 0 | 0.328413 | 0 | 0 | 0.12122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051661 | false | 0 | 0.01476 | 0 | 0.077491 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3377a3bd3f3d919bab7ca9757f1c2cbb6e8c4e7b | 8,248 | py | Python | MCTS.py | arijitnoobstar/OnitamaDeepRL | e561b22fe7728f51c1f1a078dfb19aa008bf010e | [
"Apache-2.0"
] | 3 | 2021-05-16T08:43:09.000Z | 2021-05-31T16:23:43.000Z | MCTS.py | mion666459/OnitamaAI | e561b22fe7728f51c1f1a078dfb19aa008bf010e | [
"Apache-2.0"
] | null | null | null | MCTS.py | mion666459/OnitamaAI | e561b22fe7728f51c1f1a078dfb19aa008bf010e | [
"Apache-2.0"
] | 1 | 2021-05-28T10:07:50.000Z | 2021-05-28T10:07:50.000Z | from std_imports import *
# random policy function used for MCTS
def randomPolicy(state):
while not state.isTerminal():
try:
action = random.choice(state.getPossibleActions())
except IndexError:
state.show_game_state()
raise Exception("Non-terminal state has no possible actions: " + str(state))
state = state.takeAction(action)
return state.getReward()
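randomPolicy only relies on the four-method state interface (isTerminal, getPossibleActions, takeAction, getReward). A toy countdown state, written purely for illustration, shows the rollout contract in isolation:

```python
import random

class CountdownState:
    # toy state: the only action decrements a counter; terminal at zero
    def __init__(self, n, reward=1):
        self.n = n
        self.reward = reward

    def isTerminal(self):
        return self.n == 0

    def getPossibleActions(self):
        return ['tick']

    def takeAction(self, action):
        return CountdownState(self.n - 1, self.reward)

    def getReward(self):
        return self.reward

def random_rollout(state):
    # same loop as randomPolicy: play random moves until a terminal state
    while not state.isTerminal():
        action = random.choice(state.getPossibleActions())
        state = state.takeAction(action)
    return state.getReward()

result = random_rollout(CountdownState(5))
```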
# Space Efficient representation of the Tree node class for MCTS
# Does not save the state, instead it uses reference values from action
# to play the game whenever the state is "set". Traversing a tree using
# such nodes is slow, but space efficient
class treeNode_spaceEfficient():
def __init__(self, action = None, parent = None):
self.action = action
self.parent = parent
self.numVisits = 0
self.totalReward = 0
self.children = {}
self.num_blue_wins = 0
self.num_red_wins = 0
self.isFullyExpanded = False
def set_state(self, state = None):
""" set the state and variables associated """
if state is None:
self.state = self.get_state()
else:
self.state = state
self.tree_number_of_turns = self.state.number_of_turns
self.isTerminal = self.state.isTerminal()
if self.isTerminal:
self.isFullyExpanded = True
def get_state(self):
""" recursively obtain the state from root node """
# reset the state first
self.reset_state()
if self.parent is None:
return self.state
else:
state = self.parent.get_state()
return state.takeAction(self.action)
def reset_state(self):
""" reset the state back to that of root """
if self.parent is None:
self.state.load_history(self.tree_number_of_turns)
else:
self.parent.reset_state()
# A time efficient tree node class for MCTS, which saves the state of game
# at every node using deepcopy, taking up a lot of space, but it is fast,
# as the game need not be replayed when at a node
class treeNode():
def __init__(self, state, parent, action):
self.state = state
self.tree_number_of_turns = self.state.number_of_turns
self.isTerminal = state.isTerminal()
self.isFullyExpanded = self.isTerminal
self.parent = parent
self.numVisits = 0
self.totalReward = 0
self.children = {}
self.num_blue_wins = 0
self.num_red_wins = 0
self.action = action
# The mcts class that handles all operations of the monte carlo tree search
class mcts():
def __init__(self, timeLimit=None, iterationLimit=None, explorationConstant=1 / math.sqrt(2),
rolloutPolicy=randomPolicy, verbose = 1, efficiency = "normal"):
if timeLimit is not None:
if iterationLimit is not None:
raise ValueError("Cannot have both a time limit and an iteration limit")
# time taken for each MCTS search in seconds
self.timeLimit = timeLimit
self.limitType = 'time'
else:
if iterationLimit is None:
raise ValueError("Must have either a time limit or an iteration limit")
# number of iterations of the search
if iterationLimit < 1:
raise ValueError("Iteration limit must be at least one")
self.searchLimit = iterationLimit
self.limitType = 'iterations'
self.explorationConstant = explorationConstant
self.rollout = rolloutPolicy
self.verbose = verbose
self.efficiency = efficiency
def search(self, initialState):
if self.efficiency == "space":
self.root = treeNode_spaceEfficient(None, None)
self.root.set_state(initialState)
else:
self.root = treeNode(initialState, None, None)
if self.limitType == 'time':
iter = 0
timeLimit = time.time() + self.timeLimit
while time.time() < timeLimit:
self.executeRound()
iter += 1
if self.verbose:
print("{} Seconds: {} iterations ran".format(self.timeLimit, iter))
else:
if self.verbose:
for i in tqdm(range(self.searchLimit), position = 0, leave = True):
self.i = i+1
self.executeRound()
else:
for i in range(self.searchLimit):
self.executeRound()
bestChild = self.getBestChild(self.root, 0, final_selection = True)
# self.show_final_results()
return self.getAction(self.root, bestChild)
def executeRound(self):
node = self.selectNode(self.root)
# set the state for the space efficient method
if self.efficiency == "space":
node.set_state()
reward = self.rollout(node.state)
self.backpropogate(node, reward)
def selectNode(self, node):
""" If node is not fully expanded, it expands it, otherwise it selects best child and then checks for expansion again and etc."""
while not node.isTerminal:
if node.isFullyExpanded:
node = self.getBestChild(node, self.explorationConstant)
else:
return self.expand(node)
return node
def expand(self, node):
# need to set state to get the correct possible actions
actions = node.state.getPossibleActions()
for action in actions:
if action not in node.children:
if self.efficiency == "space":
newNode = treeNode_spaceEfficient(action, node)
else:
newNode = treeNode(copy.deepcopy(node.state.takeAction(action)), node, action)
node.children[action] = newNode
if len(actions) == len(node.children):
node.isFullyExpanded = True
return newNode
raise Exception("Should never reach here")
def backpropogate(self, node, reward):
while node is not None:
node.numVisits += 1
node.totalReward += reward
if reward == -1:
node.num_blue_wins += 1 # blue win
elif reward == 1:
node.num_red_wins += 1 # red win
node.state.load_history(node.tree_number_of_turns)
node = node.parent
def show_final_results(self):
""" Prints the output of the MCTS search """
for action, node in self.root.children.items():
print("{}: B:{}, R:{} / {}".format(action,node.num_blue_wins,node.num_red_wins,node.numVisits))
def getBestChild(self, node, explorationValue, final_selection = False):
# this check is needed because in MCTS the final choice of best child is based on number of visits, not on the selection heuristic
if not final_selection:
bestValue = float("-inf")
bestNodes = []
for child in node.children.values():
nodeValue = node.state.getCurrentPlayer() * child.totalReward / child.numVisits + explorationValue * math.sqrt(
2 * math.log(node.numVisits) / child.numVisits)
if nodeValue > bestValue:
bestValue = nodeValue
bestNodes = [child]
elif nodeValue == bestValue:
bestNodes.append(child)
selected_node = random.choice(bestNodes)
if self.efficiency == "space":
selected_node.set_state()
return selected_node
else:
# Check for final selection of child
bestValue = 0
bestNodes = []
for child in node.children.values():
nodeValue = child.numVisits
if nodeValue > bestValue:
bestValue = nodeValue
bestNodes = [child]
elif nodeValue == bestValue:
bestNodes.append(child)
return random.choice(bestNodes)
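The selection heuristic above is the UCT formula: a mean-reward exploitation term, sign-flipped by the player to move, plus an exploration bonus that grows for rarely visited children. A standalone sketch of the value being maximized (the counts below are invented):

```python
import math

def uct_value(player, child_total_reward, child_visits, parent_visits,
              exploration_constant):
    # player is +1 or -1 so the mean reward is viewed from the side to move
    exploit = player * child_total_reward / child_visits
    explore = exploration_constant * math.sqrt(
        2 * math.log(parent_visits) / child_visits)
    return exploit + explore

# a rarely visited child earns a larger exploration bonus than a heavily
# visited one with the same mean reward
rarely = uct_value(1, 1, 2, 100, 1 / math.sqrt(2))
often = uct_value(1, 25, 50, 100, 1 / math.sqrt(2))
```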
def getAction(self, root, bestChild):
for action, node in root.children.items():
if node is bestChild:
return action
# get a list of all actions, valid or not
action_list = self.root.state.getPossibleActions(valid = False)
# move index recorded for Deep RL training
self.move_index = action_list.index(action)
| 36.334802 | 148 | 0.615907 | 963 | 8,248 | 5.1973 | 0.235722 | 0.017982 | 0.015584 | 0.013586 | 0.153247 | 0.127073 | 0.127073 | 0.127073 | 0.108691 | 0.108691 | 0 | 0.004154 | 0.299467 | 8,248 | 227 | 149 | 36.334802 | 0.862063 | 0.161494 | 0 | 0.313609 | 0 | 0 | 0.044567 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088757 | false | 0 | 0.005917 | 0 | 0.171598 | 0.011834 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
3377ab4d402bece3c10bbb704ed335b82b661b18 | 614 | py | Python | plotting/plot_BPT.py | griffij/QuakeRates | 70069bb271a1987e72fcbdf3aa0c0a8a79591580 | [
"Apache-2.0"
] | null | null | null | plotting/plot_BPT.py | griffij/QuakeRates | 70069bb271a1987e72fcbdf3aa0c0a8a79591580 | [
"Apache-2.0"
] | null | null | null | plotting/plot_BPT.py | griffij/QuakeRates | 70069bb271a1987e72fcbdf3aa0c0a8a79591580 | [
"Apache-2.0"
] | null | null | null | """Plot some basic BPT distributions
"""
import os, sys
from matplotlib import pyplot as plt
import numpy as np
from scipy.stats import expon, invgauss
mu = 100
alphas = [ 0.5, 1., 2., 5., 10.]
#alpha = 1
x_vals = np.arange(0, 4*mu)
# Plot for a range of alpha values
for alpha in alphas:
bpt = invgauss(alpha, scale=mu)
pdf_vals = bpt.pdf(x_vals)
plt.plot(x_vals, pdf_vals, label=alpha)
# Now add exponential
exp_dist = expon(scale = mu)
pdf_vals = exp_dist.pdf(x_vals)
plt.plot(x_vals, pdf_vals, label='Exponential', color='k')
plt.legend()
plt.savefig('BPT_distribution.png')
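For reference, the Brownian Passage Time density being plotted via scipy's invgauss can also be written out directly; a sketch with mean mu and aperiodicity alpha, independent of scipy (note this textbook parametrization is not identical to the invgauss(alpha, scale=mu) call above):

```python
import math

def bpt_pdf(t, mu, alpha):
    # Brownian Passage Time (inverse Gaussian) density with mean mu and
    # aperiodicity alpha, i.e. IG shape parameter lambda = mu / alpha**2
    coeff = math.sqrt(mu / (2 * math.pi * alpha ** 2 * t ** 3))
    return coeff * math.exp(-((t - mu) ** 2) / (2 * alpha ** 2 * mu * t))

# crude left-Riemann check that the density integrates to ~1
mu, alpha = 100.0, 0.5
step = 0.1
total = sum(bpt_pdf(i * step, mu, alpha) * step for i in range(1, 20000))
```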
| 24.56 | 59 | 0.710098 | 107 | 614 | 3.953271 | 0.514019 | 0.059102 | 0.047281 | 0.066194 | 0.1513 | 0.1513 | 0.1513 | 0.1513 | 0.1513 | 0.1513 | 0 | 0.025243 | 0.161238 | 614 | 24 | 60 | 25.583333 | 0.796117 | 0.156352 | 0 | 0 | 0 | 0 | 0.062868 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3377d7cced7a6e1a455c13090ea445c75d6794d8 | 2,125 | py | Python | vang/tfs/create_release_definition.py | mattiasl/scripts | e9245ce432b0dd5743506654ada52e017d0b6be0 | [
"Apache-2.0"
] | 6 | 2018-01-31T09:59:18.000Z | 2020-06-09T08:55:22.000Z | vang/tfs/create_release_definition.py | mattiasl/scripts | e9245ce432b0dd5743506654ada52e017d0b6be0 | [
"Apache-2.0"
] | null | null | null | vang/tfs/create_release_definition.py | mattiasl/scripts | e9245ce432b0dd5743506654ada52e017d0b6be0 | [
"Apache-2.0"
] | 2 | 2018-11-19T09:56:46.000Z | 2020-06-08T10:53:11.000Z | #!/usr/bin/env python3
import argparse
from json import dumps
from sys import argv
from vang.tfs.api import call
from vang.tfs.definition_utils import get_definition, get_definition_name
def get_release_definition(template, project, repo, branch, comment=None):
return get_definition(template,
{
'name': get_definition_name(project, repo, branch),
'project': project,
'branch': branch
},
{'comment': comment} if comment else {})
def create_release_definition(organisation, project, release_definition):
return call(
f'/{organisation}/{project}/_apis/release/definitions?api-version=3.2-preview',
request_data=release_definition,
method='POST',
only_response_code=True
)
def main(project, repo, branch, template, comment=None):
organisation, project = project.split('/')
x = get_release_definition(template, project, repo, branch, comment)
print(dumps(x, indent=4))
response = create_release_definition(organisation, project,
get_release_definition(template, project, repo, branch, comment))
print(response)
def parse_args(args):
parser = argparse.ArgumentParser(description='Create TFS release definitions')
parser.add_argument(
'project',
help='TFS projects, e.g organisation/project')
parser.add_argument(
'repo',
help='The TFS git repo name, e.g. spam.eggs')
parser.add_argument(
'branch',
help='The TFS git repo branch, e.g. 1.0.x')
parser.add_argument(
'template',
help='The TFS release definition template file, e.g. release_definition_template.json')
parser.add_argument(
'-c',
'--comment',
help='The comment to add to the release definition, e.g. a commit sha of the release definition template')
return parser.parse_args(args)
if __name__ == '__main__': # pragma: no cover
main(**parse_args(argv[1:]).__dict__)
| 34.836066 | 114 | 0.632 | 244 | 2,125 | 5.319672 | 0.331967 | 0.144068 | 0.115562 | 0.064715 | 0.218798 | 0.127889 | 0.127889 | 0.127889 | 0.087827 | 0 | 0 | 0.004481 | 0.264941 | 2,125 | 60 | 115 | 35.416667 | 0.826504 | 0.017882 | 0 | 0.104167 | 0 | 0.020833 | 0.223022 | 0.051319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.104167 | 0.041667 | 0.25 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3379417d5fd88f05297c00144ec0950b728cbbcd | 949 | py | Python | data-prep/download_weather_data.py | DanteLore/tfl-cycles | 21a1be93824fbbe4cfadd182755b7979ffbf84f0 | [
"MIT"
] | 1 | 2020-03-25T11:18:22.000Z | 2020-03-25T11:18:22.000Z | data-prep/download_weather_data.py | DanteLore/tfl-cycles | 21a1be93824fbbe4cfadd182755b7979ffbf84f0 | [
"MIT"
] | null | null | null | data-prep/download_weather_data.py | DanteLore/tfl-cycles | 21a1be93824fbbe4cfadd182755b7979ffbf84f0 | [
"MIT"
] | null | null | null | import os
import requests
import re
HEATHROW_URL = "https://www.metoffice.gov.uk/pub/data/weather/uk/climate/stationdata/heathrowdata.txt"
WEATHER_DIR = "../data/weather"
HEATHROW_FILE = WEATHER_DIR + "/heathrow.csv"
HEADER = "year,month,maximum_temp,minimum_temp,days_of_air_frost,total_rainfall,total_sunshine\n"
if __name__ == "__main__":
if not os.path.exists(WEATHER_DIR):
os.makedirs(WEATHER_DIR)
print("Downloading file: " + HEATHROW_URL)
r = requests.get(HEATHROW_URL, allow_redirects=True, stream=True)
decoded = [l.decode('utf-8') for l in r.iter_lines()]
r = r"^[\s]+([0-9]+)[\s]+([0-9]+)[\s]+([0-9.-]+)[\s]+([0-9.-]+)[\s]+([0-9.-]+)[\s]+([0-9.-]+)[\s]+([0-9.-]+).*$"
matches = [re.match(r, l) for l in decoded]
fields = [",".join(m.groups()) for m in matches if m]
# '---' marks missing values; replacing every '-' would also strip the
# sign from negative temperatures
cleaned = [f.replace('---', '') for f in fields]
text = HEADER + "\n".join(cleaned)
open(HEATHROW_FILE, 'w').write(text)
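The row regex above can be checked against a sample station-data line; the values below are made up but follow the Met Office column layout (year, month, then five numeric fields):

```python
import re

ROW_RE = (r"^[\s]+([0-9]+)[\s]+([0-9]+)[\s]+([0-9.-]+)[\s]+([0-9.-]+)"
          r"[\s]+([0-9.-]+)[\s]+([0-9.-]+)[\s]+([0-9.-]+).*$")

sample = "   2019   1   8.8    3.4    5   72.6   51.4"
m = re.match(ROW_RE, sample)
# the seven captured groups become one CSV row
row = ",".join(m.groups())
```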
| 35.148148 | 116 | 0.635406 | 147 | 949 | 3.92517 | 0.510204 | 0.024263 | 0.036395 | 0.041594 | 0.036395 | 0.036395 | 0.036395 | 0.036395 | 0.036395 | 0.036395 | 0 | 0.018382 | 0.140148 | 949 | 26 | 117 | 36.5 | 0.688725 | 0 | 0 | 0 | 0 | 0.105263 | 0.358272 | 0.201264 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 0.157895 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
337bbc68170b4d83cd1dddafc374df6001cdf9ef | 2,776 | py | Python | server/receiver.py | ZeroxTM/IDEA-HMK-Cryptor-1 | 85dfa02aa1b941dd0245a27c46ddf02ef0fad1fa | [
"MIT"
] | null | null | null | server/receiver.py | ZeroxTM/IDEA-HMK-Cryptor-1 | 85dfa02aa1b941dd0245a27c46ddf02ef0fad1fa | [
"MIT"
] | null | null | null | server/receiver.py | ZeroxTM/IDEA-HMK-Cryptor-1 | 85dfa02aa1b941dd0245a27c46ddf02ef0fad1fa | [
"MIT"
] | null | null | null | __author__ = "Adam Mahameed"
__copyright__ = "2020 HMK-IDEA-Cryptor"
__credits__ = ["Adam Mahameed"]
__license__ = "MIT"
__email__ = "adam.mah315@gmail.com"
from Cryptors.DSA import DSA
from Cryptors.HMKnapsack import HMKnapsack
from pckgIDEA.IDEA import IDEA
KEY_SIZE = 128
class Receiver():
def __init__(self, socket):
self.rec_file = open("files/receiver_decrypted.txt", "w", encoding="utf-8")
self.socket = socket
print("\n------RECEIVER------")
print("Generating private and public keys...")
self.hmk_cryptor = HMKnapsack(KEY_SIZE)
def send_key(self):
"""
Returns HMK public key to exchange with the sender
:return: HMK Public key
"""
print("Sending {0}...] public key to sender...".format(str(self.hmk_cryptor.get_public_key())[:10]))
return self.hmk_cryptor.get_public_key()
def exchange_keys(self, ciphered_key, signed_idea, DSA_keys):
print("\n------RECEIVER------")
print("Received encrypted IDEA Key and signature, decrypting and verifying...")
self.DSA_keys = {'p': DSA_keys[0], 'q': DSA_keys[1], 'g': DSA_keys[2], 'pkey': DSA_keys[3]}
idea_key = self.hmk_cryptor.decrypt(int(ciphered_key))
#idea_key = idea_key.rstrip('\x00')
if self.verify_message(idea_key, signed_idea[0], signed_idea[1]):
self.idea_cryptor = IDEA(int(idea_key.rstrip('\x00')))
print("IDEA key was exchanged and verified successfully.")
print("Decrypted IDEA Key: " + hex(self.idea_cryptor.key))
print("Decryption keys were generated successfully")
else:
print("Incorrect received IDEA key value")
def receive(self, M, signature):
print("\n------RECEIVER------")
print("-> Message {0} and signature received from sender".format(M))
print("-> Decrypting and verifying received message".format(M))
r, s = signature
decrypted_text = self.idea_cryptor.decrypt(M)
if self.verify_message(decrypted_text, r, s):
print("-> Verified -> decrypted message: " + decrypted_text)
self.rec_file.write(decrypted_text)
else:
print("-> Verification failed! -> decrypted message: " + decrypted_text)
def verify_message(self, M, r, s):
"""
Verify message
:param M: Received message
:param r: signature
:param s: signature
:return: If the message is valid
"""
if DSA.verify(M, r, s, self.DSA_keys['p'], self.DSA_keys['q'], self.DSA_keys['g'], self.DSA_keys['pkey']):
# print('Result: Verified!')
return True
else:
# print("Result: Verification failed!")
return False
| 38.027397 | 114 | 0.615274 | 343 | 2,776 | 4.77551 | 0.311953 | 0.042735 | 0.033578 | 0.034799 | 0.031746 | 0.031746 | 0 | 0 | 0 | 0 | 0 | 0.01195 | 0.246398 | 2,776 | 72 | 115 | 38.555556 | 0.771033 | 0.104467 | 0 | 0.12766 | 0 | 0 | 0.272424 | 0.047977 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106383 | false | 0 | 0.06383 | 0 | 0.255319 | 0.297872 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
337f01f7fab004df576400a43f1c3869a2c046af | 557 | py | Python | rclpy/examples_rclpy_topics/examples_rclpy_topics/subscriber.py | AtsukiYokota/examples_rclpy | 51c8d10623672ab91c88be3fd699e92fc0c07c41 | [
"Apache-2.0"
] | null | null | null | rclpy/examples_rclpy_topics/examples_rclpy_topics/subscriber.py | AtsukiYokota/examples_rclpy | 51c8d10623672ab91c88be3fd699e92fc0c07c41 | [
"Apache-2.0"
] | null | null | null | rclpy/examples_rclpy_topics/examples_rclpy_topics/subscriber.py | AtsukiYokota/examples_rclpy | 51c8d10623672ab91c88be3fd699e92fc0c07c41 | [
"Apache-2.0"
] | null | null | null | import rclpy
from rclpy.node import Node
from std_msgs.msg import String
class MinimalSubscriber(Node):
def __init__(self):
super().__init__('minimal_subscriber')
self.subscription = self.create_subscription(
String, 'chatter', self.listener_callback)
def listener_callback(self, msg):
self.get_logger().info(msg.data)
def main(args=None):
rclpy.init(args=args)
minimal_subscriber = MinimalSubscriber()
rclpy.spin(minimal_subscriber)
rclpy.shutdown()
if __name__ == '__main__':
main()
| 21.423077 | 54 | 0.694794 | 65 | 557 | 5.584615 | 0.492308 | 0.140496 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197487 | 557 | 25 | 55 | 22.28 | 0.812081 | 0 | 0 | 0 | 0 | 0 | 0.059246 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.176471 | 0 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
337f1b5807ffa83c345e8dafa91086d6a8ed3380 | 719 | py | Python | py/2015/day10/aoc_day_10.py | cs-cordero/advent-of-code | 614b8f78b43c54ef180a7dc411a0d1366a62944f | [
"MIT"
] | null | null | null | py/2015/day10/aoc_day_10.py | cs-cordero/advent-of-code | 614b8f78b43c54ef180a7dc411a0d1366a62944f | [
"MIT"
] | null | null | null | py/2015/day10/aoc_day_10.py | cs-cordero/advent-of-code | 614b8f78b43c54ef180a7dc411a0d1366a62944f | [
"MIT"
] | 2 | 2019-12-01T15:33:27.000Z | 2020-12-14T05:37:23.000Z | from typing import Generator, Tuple
def break_string(string: str) -> Generator[Tuple[str, str], None, None]:
    i = 0
    while i < len(string):
        count = 0
        current_digit = string[i]
        while i < len(string) and string[i] == current_digit:
            count += 1
            i += 1
        yield (str(count), str(current_digit))


def solution(value: str, iterations: int) -> str:
    for i in range(iterations):
        value = "".join("".join(tup) for tup in break_string(value))
    return value


assert solution("1", 5) == "312211"

PUZZLE_INPUT = "1113222113"
part1 = len(solution(PUZZLE_INPUT, 40))
print(f"Part 1: {part1}")
part2 = len(solution(PUZZLE_INPUT, 50))
print(f"Part 2: {part2}")
| 26.62963 | 72 | 0.617524 | 101 | 719 | 4.316832 | 0.435644 | 0.082569 | 0.041284 | 0.068807 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058394 | 0.23783 | 719 | 26 | 73 | 27.653846 | 0.737226 | 0 | 0 | 0 | 0 | 0 | 0.065369 | 0 | 0 | 0 | 0 | 0 | 0.05 | 1 | 0.1 | false | 0 | 0.05 | 0 | 0.2 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
337f8be8a863ba3c83c574bea3199d4c04528548 | 2,526 | py | Python | russian_roulette.py | Study-Repos-Forks/Python-1 | 49353458404e5cb0e01deb0497c8d3cdc5e2e73f | [
"MIT"
] | null | null | null | russian_roulette.py | Study-Repos-Forks/Python-1 | 49353458404e5cb0e01deb0497c8d3cdc5e2e73f | [
"MIT"
] | null | null | null | russian_roulette.py | Study-Repos-Forks/Python-1 | 49353458404e5cb0e01deb0497c8d3cdc5e2e73f | [
"MIT"
] | null | null | null | """ author: Ataba29
the code is just a russian roulette game against
the computer
"""
from random import randrange
import time
def main():
    # create the gun and set the bullet
    numOfRounds = 6
    gun = [0, 0, 0, 0, 0, 0]
    bullet = randrange(0, 6)
    gun[bullet] = 1
    player = False  # is player dead
    pc = False  # is pc dead

    # menu
    print("/********************************/")
    print(" Welcome to russian roulette")
    print("/********************************/")
    time.sleep(2)
    print("you are going to play against the pc")
    time.sleep(2)
    print("there is one gun and one bullet")
    time.sleep(2)
    print("all you have to do is pick who starts first")
    time.sleep(2)

    # take input from the user
    answer = input(
        "please press 'm' if you want to start first or 'p' if you want the pc to start first: "
    )

    # check input
    while answer != "m" and answer != "p":
        answer = input("please enter again ('m' or 'p'): ")

    # set turn
    if answer == 'm':
        turn = "player"
    else:
        turn = "pc"

    # game starts
    while numOfRounds != 0 and (pc == False and player == False):
        print(f"\nRound number {numOfRounds}/6")
        time.sleep(1)
        print("the gun is being loaded")
        time.sleep(3)
        print("the gun is placed on " + ("your head" if turn ==
                                         "player" else "the cpu of the pc"))
        time.sleep(3)
        print("and...")
        time.sleep(1)
        print("...")
        time.sleep(2)
        print("...")
        time.sleep(2)
        print("...")
        time.sleep(2)

        # get the bullet in the chamber
        shot = gun.pop(numOfRounds - 1)

        if shot:
            print("THE GUN WENT OFF!!!")
            print("YOU DIED" if turn == "player" else "THE PC DIED")
            if turn == "player":  # set up who died
                player = True
            else:
                pc = True
        else:
            print("nothing happened phew!")
            if turn == "player":  # flip the turn
                turn = "pc"
            else:
                turn = "player"
        time.sleep(2)
        numOfRounds -= 1

    time.sleep(1)
    print("")
    if player:
        print("sorry man you died better luck next time")
        print("don't forget to send a pic from heaven :)")
    else:
        print("good job man you survived")
        print("you just got really lucky")
        print("anyways hope you had fun because i sure did")
main()
| 26.3125 | 96 | 0.511876 | 330 | 2,526 | 3.918182 | 0.363636 | 0.090487 | 0.061872 | 0.058005 | 0.0843 | 0.034803 | 0.034803 | 0.034803 | 0 | 0 | 0 | 0.017554 | 0.346002 | 2,526 | 95 | 97 | 26.589474 | 0.765133 | 0.103325 | 0 | 0.414286 | 0 | 0.014286 | 0.321572 | 0.030371 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014286 | false | 0 | 0.028571 | 0 | 0.042857 | 0.314286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
33812b3b1b19a16b3d3f2a404fb13f942417dd99 | 500 | py | Python | tests/page_template/test_page_template.py | Crown-Commercial-Service/govuk-frontend-jinja | ddbe208a976ffa4ca330881c506c5200dfa69851 | [
"MIT"
] | 7 | 2019-09-25T13:59:35.000Z | 2021-06-30T11:13:22.000Z | tests/page_template/test_page_template.py | Crown-Commercial-Service/govuk-frontend-jinja | ddbe208a976ffa4ca330881c506c5200dfa69851 | [
"MIT"
] | 23 | 2019-08-20T10:52:49.000Z | 2021-06-02T14:21:16.000Z | tests/page_template/test_page_template.py | Crown-Commercial-Service/govuk-frontend-jinja | ddbe208a976ffa4ca330881c506c5200dfa69851 | [
"MIT"
] | 6 | 2019-08-29T14:02:25.000Z | 2021-04-10T20:20:23.000Z | import pytest
import govuk_frontend_jinja
@pytest.fixture
def env(loader):
    return govuk_frontend_jinja.Environment(
        # for some reason the page_template tests only pass with trim_blocks=False
        loader=loader,
        autoescape=True,
        keep_trailing_newline=True,
        trim_blocks=False,
        lstrip_blocks=True,
    )


def test_page_template(env, template, expected, similar):
    template = env.from_string(template)
    assert similar(template.render(), expected)
| 23.809524 | 82 | 0.716 | 61 | 500 | 5.655738 | 0.606557 | 0.075362 | 0.104348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212 | 500 | 20 | 83 | 25 | 0.875635 | 0.144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.142857 | false | 0 | 0.142857 | 0.071429 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
33821cb4b0dc7fb87562a852e4ad10758b596f2a | 1,044 | py | Python | pip_services3_expressions-3.3.4/test/calculator/functions/test_FunctionCollection.py | pip-services3-python/pip-services3-expressions-python | 4ea237fbbba32e62f920e6be3bd48e6cc02184e5 | [
"MIT"
] | null | null | null | pip_services3_expressions-3.3.4/test/calculator/functions/test_FunctionCollection.py | pip-services3-python/pip-services3-expressions-python | 4ea237fbbba32e62f920e6be3bd48e6cc02184e5 | [
"MIT"
] | null | null | null | pip_services3_expressions-3.3.4/test/calculator/functions/test_FunctionCollection.py | pip-services3-python/pip-services3-expressions-python | 4ea237fbbba32e62f920e6be3bd48e6cc02184e5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from pip_services3_expressions.calculator.functions.DelegatedFunction import DelegatedFunction
from pip_services3_expressions.calculator.functions.FunctionCollection import FunctionCollection
from pip_services3_expressions.variants.Variant import Variant
class TestFunctionCollection:

    def tst_func(self, params, operations, callback):
        callback(None, Variant("ABC"))

    def test_add_remove_functions(self):
        collection = FunctionCollection()

        func1 = DelegatedFunction("ABC", self.tst_func)
        collection.add(func1)
        assert 1 == collection.length

        func2 = DelegatedFunction("XYZ", self.tst_func)
        collection.add(func2)
        assert 2 == collection.length

        index = collection.find_index_by_name('abc')
        assert 0 == index

        func = collection.find_by_name('Xyz')
        assert func2 == func

        collection.remove(0)
        assert 1 == collection.length

        collection.remove_by_name('XYZ')
        assert 0 == collection.length
| 29.828571 | 96 | 0.700192 | 111 | 1,044 | 6.414414 | 0.369369 | 0.078652 | 0.067416 | 0.113764 | 0.196629 | 0.129213 | 0 | 0 | 0 | 0 | 0 | 0.018204 | 0.210728 | 1,044 | 34 | 97 | 30.705882 | 0.845874 | 0.020115 | 0 | 0.090909 | 0 | 0 | 0.01763 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 1 | 0.090909 | false | 0 | 0.136364 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3389cc520bcd5c3160afa507c80f7f8499a2e5ea | 3,563 | py | Python | tutorials/mechanisms/tutorial_michaelismenten.py | AQ18/skimpy | 435fc50244f2ca815bbb39d525a82a4692f5c0ac | [
"Apache-2.0"
] | 13 | 2020-11-05T10:59:13.000Z | 2022-03-21T01:38:31.000Z | tutorials/mechanisms/tutorial_michaelismenten.py | AQ18/skimpy | 435fc50244f2ca815bbb39d525a82a4692f5c0ac | [
"Apache-2.0"
] | 4 | 2022-01-27T10:23:40.000Z | 2022-03-10T18:16:06.000Z | tutorials/mechanisms/tutorial_michaelismenten.py | AQ18/skimpy | 435fc50244f2ca815bbb39d525a82a4692f5c0ac | [
"Apache-2.0"
] | 6 | 2020-08-04T17:01:33.000Z | 2022-03-21T01:38:32.000Z | # -*- coding: utf-8 -*-
"""
.. module:: skimpy
:platform: Unix, Windows
:synopsis: Simple Kinetic Models in Python
.. moduleauthor:: SKiMPy team
[---------]
Copyright 2017 Laboratory of Computational Systems Biotechnology (LCSB),
Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# Test models
import numpy as np
from skimpy.core import *
from skimpy.mechanisms import *
name = 'pfk'
metabolites = ReversibleMichaelisMenten.Reactants(substrate='A',
                                                  product='B')

## QSSA Method
parameters = ReversibleMichaelisMenten.Parameters(
    vmax_forward=1.0,
    k_equilibrium=2.0,
    km_substrate=10.0,
    km_product=10.0,
    total_enzyme_concentration=1.0,
)

pfk = Reaction(name=name,
               mechanism=ReversibleMichaelisMenten,
               reactants=metabolites,
               )
this_model = KineticModel()
this_model.add_reaction(pfk)
this_model.parametrize_by_reaction({pfk.name:parameters})
this_model.compile_ode(sim_type = QSSA)
this_model.initial_conditions['A'] = 1.0
this_model.initial_conditions['B'] = 1.0
this_sol_qssa = this_model.solve_ode(np.linspace(0.0, 100.0, 1000), solver_type='cvode')
this_sol_qssa.plot('output/uni_uni_base_out_qssa.html')
## Full rate method
this_model.compile_ode(sim_type = ELEMENTARY)
this_model.initial_conditions['A'] = 1.0
this_model.initial_conditions['B'] = 1.0
this_model.initial_conditions['pfk'] = 0.8
this_model.initial_conditions['EC_pfk'] = 0.2
this_sol_full = this_model.solve_ode(np.linspace(0.0, 100.0, 1000), solver_type='cvode')
this_sol_full.plot('output/uni_uni_base_out_elemetary.html')
"""
BiBi Michaelis Menten Kinetics
"""
name = 'hxk'
metabolites = RandBiBiReversibleMichaelisMenten.Reactants(substrate1='A',
                                                          substrate2='C1',
                                                          product1='B',
                                                          product2='C2'
                                                          )

parameters = RandBiBiReversibleMichaelisMenten.Parameters(
    vmax_forward=1.0,
    k_equilibrium=5.0,
    ki_substrate1=1.0,
    ki_substrate2=1.0,
    km_substrate2=10,
    ki_product1=1.0,
    ki_product2=1.0,
    km_product1=10.0,
)

hxk = Reaction(name=name,
               mechanism=RandBiBiReversibleMichaelisMenten,
               reactants=metabolites,
               )
this_model = KineticModel()
this_model.add_reaction(hxk)
this_model.parametrize_by_reaction({hxk.name:parameters})
this_model.compile_ode(sim_type = QSSA)
this_model.initial_conditions['A'] = 100.0
this_model.initial_conditions['B'] = 1.0
this_model.initial_conditions['C1'] = 3.0
this_model.initial_conditions['C2'] = 5.0
this_sol_qssa = this_model.solve_ode(np.linspace(0.0, 10.0, 1000), solver_type='cvode')
this_sol_qssa.plot('output/bi_bi_base_out_qssa.html')
| 30.982609 | 89 | 0.657311 | 445 | 3,563 | 5.062921 | 0.361798 | 0.087883 | 0.071016 | 0.115402 | 0.390146 | 0.351531 | 0.324012 | 0.292943 | 0.292943 | 0.238793 | 0 | 0.038148 | 0.242212 | 3,563 | 114 | 90 | 31.254386 | 0.796296 | 0.240247 | 0 | 0.209677 | 0 | 0 | 0.059032 | 0.040142 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.048387 | 0 | 0.048387 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
338a7321f8c4550125a8a900551b7bd850eac6f3 | 981 | py | Python | WebScrapping/makePrelimTable.py | marcsze/pythonPrograms | b2fdc42f73a83948debdb4049f3b715854005579 | [
"MIT"
] | null | null | null | WebScrapping/makePrelimTable.py | marcsze/pythonPrograms | b2fdc42f73a83948debdb4049f3b715854005579 | [
"MIT"
] | null | null | null | WebScrapping/makePrelimTable.py | marcsze/pythonPrograms | b2fdc42f73a83948debdb4049f3b715854005579 | [
"MIT"
] | null | null | null | #! Python
def getInput(datafile):
    links = open(datafile, 'r')
    dataTable = []
    for line in links:
        goodLine = line.strip('\n')
        dataTable.append(goodLine)
    links.close()
    return dataTable


def createOutput(dataTable):
    finalTable = {}
    namesfile = []
    x = 1
    for i, data in enumerate(dataTable):
        temppoint = []
        if x % 2 == 0:
            for point in data.split("_"):
                pointGood = point.strip('\n')
                temppoint.append(pointGood)
            finalTable[dataTable[i - 1]] = temppoint
        x = x + 1
    outfile = open("test.txt", 'w')
    for j in finalTable:
        data2 = finalTable[j]
        print("{0}\t".format(j), end='', file=outfile)
        for k, jobInfo in enumerate(data2):
            if k != len(data2) - 1:
                print("{0}\t".format(jobInfo), end='', file=outfile)
            else:
                print("{0}".format(jobInfo), end='\n', file=outfile)
    outfile.close()


def main():
    dataTable = getInput("editedOuput2.txt")
    createOutput(dataTable)
if __name__ == '__main__': main() | 18.166667 | 59 | 0.611621 | 124 | 981 | 4.766129 | 0.403226 | 0.030457 | 0.023689 | 0.043993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016971 | 0.219164 | 981 | 54 | 60 | 18.166667 | 0.754569 | 0.008155 | 0 | 0 | 0 | 0 | 0.055498 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0 | 0 | 0.117647 | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
338ab460ecb80227bb2b6281bfef9dd2c57a9757 | 619 | py | Python | UVa Online Judge/v104/10433.py | mjenrungrot/algorithm | e0e8174eb133ba20931c2c7f5c67732e4cb2b703 | [
"MIT"
] | 1 | 2021-12-08T08:58:43.000Z | 2021-12-08T08:58:43.000Z | UVa Online Judge/v104/10433.py | mjenrungrot/algorithm | e0e8174eb133ba20931c2c7f5c67732e4cb2b703 | [
"MIT"
] | null | null | null | UVa Online Judge/v104/10433.py | mjenrungrot/algorithm | e0e8174eb133ba20931c2c7f5c67732e4cb2b703 | [
"MIT"
] | null | null | null | # =============================================================================
# Author: Teerapat Jenrungrot - https://github.com/mjenrungrot/
# FileName: 10433.py
# Description: UVa Online Judge - 10433
# =============================================================================
while True:
    try:
        str_N = input()
    except EOFError:
        break
    N = int(str_N)
    N2 = N * N
    str_N2 = str(N2)
    len_N = len(str_N)
    if str_N2[-len_N:] == str_N:
        print("Automorphic number of {}-digit.".format(len_N))
    else:
        print("Not an Automorphic number.")
| 28.136364 | 79 | 0.421648 | 61 | 619 | 4.131148 | 0.590164 | 0.063492 | 0.063492 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029598 | 0.235864 | 619 | 21 | 80 | 29.47619 | 0.503171 | 0.479806 | 0 | 0 | 0 | 0 | 0.18038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
338ae74dca21eed87e61216b0a5b0cf376536b58 | 1,317 | py | Python | api/movies.py | saequus/fastapi-movies | a19f1e08058905ea97f9d3a0141419e7859b44ad | [
"MIT"
] | null | null | null | api/movies.py | saequus/fastapi-movies | a19f1e08058905ea97f9d3a0141419e7859b44ad | [
"MIT"
] | null | null | null | api/movies.py | saequus/fastapi-movies | a19f1e08058905ea97f9d3a0141419e7859b44ad | [
"MIT"
] | null | null | null | from fastapi.exceptions import HTTPException
from typing import List
from api.models import MovieIn, MovieOut
from fastapi import APIRouter
from api.db import get_all_movies, get_movie, add_movie, update_movie, delete_movie
movies_router = APIRouter()
@movies_router.get('/', response_model=List[MovieOut])
async def api_get_movie():
return await get_all_movies()
@movies_router.post('/', response_model=MovieIn, status_code=201)
async def api_add_movie(payload: MovieIn):
movie_id = await add_movie(payload)
response = {
'id': movie_id,
**payload.dict()
}
return response
@movies_router.put('/{movie_id}', response_model=MovieIn)
async def api_update_movie(movie_id: int, payload: MovieIn):
movie = await get_movie(movie_id)
if not movie:
raise HTTPException(status_code=404, detail='Movie not found')
update_data = payload.dict(exclude_unset=True)
movie_in_db = MovieIn(**movie)
updated_movie = movie_in_db.copy(update=update_data)
return await update_movie(movie_id, updated_movie)
@movies_router.delete('/{movie_id}')
async def api_delete_movie(movie_id: int):
movie = await get_movie(movie_id)
if not movie:
return HTTPException(status_code=404, detail='Movie not found')
return await delete_movie(movie_id)
| 29.266667 | 83 | 0.740319 | 186 | 1,317 | 4.967742 | 0.263441 | 0.075758 | 0.077922 | 0.038961 | 0.17316 | 0.17316 | 0.17316 | 0.17316 | 0.075758 | 0 | 0 | 0.008152 | 0.161731 | 1,317 | 44 | 84 | 29.931818 | 0.828804 | 0 | 0 | 0.125 | 0 | 0 | 0.042521 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
338b47629bf7031021974abb8c0b90be4d32aca3 | 10,457 | py | Python | recursive_knn/evaluation.py | tseste/Recursive-K-Nearest-Neighbors | 5e35c643dc8c530102554492c56bfcf05b242298 | [
"Apache-2.0"
] | 1 | 2022-02-06T16:08:44.000Z | 2022-02-06T16:08:44.000Z | recursive_knn/evaluation.py | tseste/Recursive-K-Nearest-Neighbors | 5e35c643dc8c530102554492c56bfcf05b242298 | [
"Apache-2.0"
] | null | null | null | recursive_knn/evaluation.py | tseste/Recursive-K-Nearest-Neighbors | 5e35c643dc8c530102554492c56bfcf05b242298 | [
"Apache-2.0"
] | null | null | null | """Evaluators for the recommender model."""
import numpy as np
import pandas as pd


class Evaluator:
    """RMSE, RMSUE, MAE, MAUE evaluators wrapper."""

    def _preprocessing(self, test_file, predictions_file):
        test = pd.read_csv(test_file)
        predictions = pd.read_csv(predictions_file)
        test = test.set_index(['user', 'item']).sort_index()
        predictions = predictions.set_index(['user', 'item']).sort_index()
        test = test.loc[test.index.isin(predictions.index)]
        test_values = test.values
        return test_values, predictions

    @staticmethod
    def _predictions_counter(n_pred, n_r_pred, pred_file):
        pred_counter = {
            'knn': [n_pred],
            'r_knn': [n_r_pred],
            'total': [n_pred + n_r_pred]
        }
        pd.DataFrame(pred_counter).to_csv('counter_' + pred_file, index=False)

    @staticmethod
    def _rmse_func(y_true, y_pred, recursive=False):
        if recursive:
            y_pred = y_pred.drop('recursive_neighbors', axis=1)
        subtract = y_true - y_pred
        sq_sum_div = (subtract**2).sum(axis=0) / y_pred.shape[0]
        return np.sqrt(sq_sum_div)

    @staticmethod
    def _total_rmse(rmse_file, recursive_rmse_file, n1, n2):
        """Combine the rmse values of KNN and RecursiveKNN."""
        rmse = pd.read_csv(rmse_file)
        recursive_rmse = pd.read_csv(recursive_rmse_file)
        # loc the same columns of knn ['1', '3', ...]
        recursive_rmse_same = recursive_rmse.loc[:, rmse.columns]
        # (rmse^2)*number_of_predictions
        rmse_sq_n1 = (rmse**2) * n1
        # (recursive_rmse^2)*number_of_recursive_predictions
        recursive_rmse_sq_n2 = (recursive_rmse_same**2) * n2
        # duplicate rmse shape as recursive shape for vector computations
        rmse_sq_n1_shape_recursive = pd.concat(
            [rmse_sq_n1] * recursive_rmse_sq_n2.shape[0],
            ignore_index=True)
        total = np.sqrt(
            (rmse_sq_n1_shape_recursive + recursive_rmse_sq_n2) / (n1 + n2))
        f_total = total.join(recursive_rmse['recursive_neighbors'])
        f_total.to_csv('total_' + rmse_file, index=False)

    def rmse(self, test_file=None, pred_file=None,
             r_test_file=None, r_pred_file=None):
        """Compute and write the rmse and recursive rmse of the predictions."""
        if test_file and pred_file:
            test_values, predictions = self._preprocessing(test_file,
                                                           pred_file)
            pred_file = pred_file.split('/')[-1]
            n_pred = predictions.shape[0]
            rmse = self._rmse_func(test_values, predictions)
            rmse.to_frame().T.to_csv('rmse_{}'.format(pred_file),
                                     index=False)
        if r_test_file and r_pred_file:
            test_values, predictions = self._preprocessing(r_test_file,
                                                           r_pred_file)
            r_pred_file = r_pred_file.split('/')[-1]
            predictions = predictions.groupby('recursive_neighbors')
            get_first_group = list(predictions.groups.keys())[0]
            n_r_pred = predictions.get_group(get_first_group).shape[0]
            recursive_rmse = predictions.apply(
                lambda group_predictions: self._rmse_func(test_values,
                                                          group_predictions,
                                                          True))
            recursive_rmse.to_csv('rmse_{}'.format(r_pred_file))
        if test_file and pred_file and r_test_file and r_pred_file:
            self._predictions_counter(n_pred, n_r_pred, pred_file)
            rmse_file = 'rmse_' + pred_file
            r_rmse_file = 'rmse_' + r_pred_file
            self._total_rmse(rmse_file, r_rmse_file, n_pred, n_r_pred)

    @staticmethod
    def _rmsue_func(y_true, y_pred, group_by, recursive=False):
        if recursive:
            y_pred = y_pred.drop('recursive_neighbors', axis=1)
        subtract = y_true - y_pred
        group_predictions = subtract.reset_index().groupby(group_by)
        # calculate each user or item rmse
        user_rmse = group_predictions.apply(lambda user: np.sqrt(
            (user.drop(['user', 'item'], axis=1)**2).sum(axis=0) /
            user.shape[0]))
        # average rmses and return
        return user_rmse.sum(axis=0) / user_rmse.shape[0]

    def rmsue(self, group_by, test_file=None, pred_file=None,
              r_test_file=None, r_pred_file=None):
        """Compute and write rmsue and recursive rmsue of the predictions."""
        if test_file and pred_file:
            test_values, predictions = self._preprocessing(test_file,
                                                           pred_file)
            pred_file = pred_file.split('/')[-1]
            rmsue = self._rmsue_func(test_values, predictions, group_by)
            rmsue.to_frame().T.to_csv('rmsue_{}'.format(pred_file),
                                      index=False)
        if r_test_file and r_pred_file:
            test_values, predictions = self._preprocessing(r_test_file,
                                                           r_pred_file)
            r_pred_file = r_pred_file.split('/')[-1]
            predictions = predictions.groupby('recursive_neighbors')
            recursive_rmsue = predictions.apply(
                lambda group_predictions: self._rmsue_func(test_values,
                                                           group_predictions,
                                                           group_by,
                                                           True))
            recursive_rmsue.to_csv('rmsue_{}'.format(r_pred_file))

    @staticmethod
    def _mae_func(y_true, y_pred, recursive=False):
        if recursive:
            y_pred = y_pred.drop('recursive_neighbors', axis=1)
        subtract = y_true - y_pred
        return abs(subtract).sum(axis=0) / y_pred.shape[0]

    @staticmethod
    def _total_mae(mae_file, recursive_mae_file, n1, n2):
        """Combine the mae values of KNN and RecursiveKNN."""
        mae = pd.read_csv(mae_file)
        recursive_mae = pd.read_csv(recursive_mae_file)
        # loc the same columns of knn ['1', '3', ...]
        recursive_mae_same = recursive_mae.loc[:, mae.columns]
        # mae*number_of_predictions
        mae_n1 = mae * n1
        # (recursive_mae)*number_of_recursive_predictions
        recursive_mae_n2 = recursive_mae_same * n2
        # duplicate mae shape as recursive shape for vector computations
        mae_n1_shape_recursive = pd.concat(
            [mae_n1] * recursive_mae_n2.shape[0],
            ignore_index=True)
        total = (mae_n1_shape_recursive + recursive_mae_n2) / (n1 + n2)
        f_total = total.join(recursive_mae['recursive_neighbors'])
        f_total.to_csv('total_' + mae_file, index=False)

    def mae(self, test_file=None, pred_file=None,
            r_test_file=None, r_pred_file=None):
        """Compute and write the mae and recursive mae of the predictions."""
        if test_file and pred_file:
            test_values, predictions = self._preprocessing(test_file,
                                                           pred_file)
            pred_file = pred_file.split('/')[-1]
            n_pred = predictions.shape[0]
            mae = self._mae_func(test_values, predictions)
            mae.to_frame().T.to_csv('mae_{}'.format(pred_file), index=False)
        if r_test_file and r_pred_file:
            test_values, predictions = self._preprocessing(r_test_file,
                                                           r_pred_file)
            r_pred_file = r_pred_file.split('/')[-1]
            predictions = predictions.groupby('recursive_neighbors')
            get_first_group = list(predictions.groups.keys())[0]
            n_r_pred = predictions.get_group(get_first_group).shape[0]
            recursive_mae = predictions.apply(
                lambda group_predictions: self._mae_func(test_values,
                                                         group_predictions,
                                                         True))
            recursive_mae.to_csv('mae_{}'.format(r_pred_file))
        if test_file and pred_file and r_test_file and r_pred_file:
            self._predictions_counter(n_pred, n_r_pred, pred_file)
            mae_file = 'mae_' + pred_file
            r_mae_file = 'mae_' + r_pred_file
            self._total_mae(mae_file, r_mae_file, n_pred, n_r_pred)

    @staticmethod
    def _maue_func(y_true, y_pred, group_by, recursive=False):
        if recursive:
            y_pred = y_pred.drop('recursive_neighbors', axis=1)
        subtract = y_true - y_pred
        group_predictions = subtract.reset_index().groupby(group_by)
        # calculate each user or item mae
        user_mae = group_predictions.apply(lambda user: abs(
            user.drop(['user', 'item'], axis=1)).sum(axis=0) / user.shape[0])
        # average maes and return
        return user_mae.sum(axis=0) / user_mae.shape[0]

    def maue(self, group_by, test_file=None, pred_file=None,
             r_test_file=None, r_pred_file=None):
        """Compute and write the maue and recursive maue of the predictions."""
        if test_file and pred_file:
            test_values, predictions = self._preprocessing(test_file,
                                                           pred_file)
            pred_file = pred_file.split('/')[-1]
            maue = self._maue_func(test_values, predictions, group_by)
            maue.to_frame().T.to_csv('maue_{}'.format(pred_file), index=False)
        if r_test_file and r_pred_file:
            test_values, predictions = self._preprocessing(r_test_file,
                                                           r_pred_file)
            r_pred_file = r_pred_file.split('/')[-1]
            predictions = predictions.groupby('recursive_neighbors')
            recursive_maue = predictions.apply(
                lambda group_predictions: self._maue_func(test_values,
                                                          group_predictions,
                                                          group_by,
                                                          True))
            recursive_maue.to_csv('maue_{}'.format(r_pred_file))
| 49.093897 | 79 | 0.573396 | 1,254 | 10,457 | 4.422648 | 0.082935 | 0.086549 | 0.045438 | 0.028128 | 0.719978 | 0.650198 | 0.591778 | 0.538045 | 0.516408 | 0.491706 | 0 | 0.010042 | 0.333365 | 10,457 | 212 | 80 | 49.325472 | 0.78554 | 0.088553 | 0 | 0.5 | 0 | 0 | 0.035552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070588 | false | 0 | 0.011765 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
338c93743bd1e61375a780236ff4f85d60b88ab1 | 518 | py | Python | python/comTest.py | BYU-ELC/StalkerCar | a5681ea7e4f87ab24a70ac69d89c4f095fa4ab8c | [
"MIT"
] | null | null | null | python/comTest.py | BYU-ELC/StalkerCar | a5681ea7e4f87ab24a70ac69d89c4f095fa4ab8c | [
"MIT"
] | null | null | null | python/comTest.py | BYU-ELC/StalkerCar | a5681ea7e4f87ab24a70ac69d89c4f095fa4ab8c | [
"MIT"
] | 1 | 2021-05-20T22:46:34.000Z | 2021-05-20T22:46:34.000Z | import serial
import time
import struct
def pack(value):
    return struct.pack('>B', value)


def main():
    ser = serial.Serial("/dev/ttyUSB0", 9600)
    time.sleep(3)
    ser.reset_input_buffer()
    ser.reset_output_buffer()
    time.sleep(5)
    val = 100
    while True:
        val = val + 10
        ser.write(pack(1))
        ser.write(pack(100))
        ser.write(pack(0))
        ser.write(pack(val))
        time.sleep(1)
        if val == 150:
            val = 0


if __name__ == "__main__":
    main()
| 18.5 | 45 | 0.559846 | 71 | 518 | 3.915493 | 0.450704 | 0.115108 | 0.172662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060773 | 0.301158 | 518 | 27 | 46 | 19.185185 | 0.707182 | 0 | 0 | 0 | 0 | 0 | 0.042471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.130435 | 0.043478 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
338cf86ec7a4e1f1f89c0508d51c1f20e29168e9 | 6,865 | py | Python | basalt_utils/src/basalt_utils/sb3_compat/cnns.py | viniciusguigo/kairos_minerl_basalt | 8f76e1d293dbcf62653ed3f7f326bd090a0af6f0 | [
"MIT"
] | 26 | 2021-12-07T09:52:06.000Z | 2022-03-13T20:08:44.000Z | basalt_utils/src/basalt_utils/sb3_compat/cnns.py | viniciusguigo/kairos_minerl_basalt | 8f76e1d293dbcf62653ed3f7f326bd090a0af6f0 | [
"MIT"
] | null | null | null | basalt_utils/src/basalt_utils/sb3_compat/cnns.py | viniciusguigo/kairos_minerl_basalt | 8f76e1d293dbcf62653ed3f7f326bd090a0af6f0 | [
"MIT"
] | 2 | 2021-12-11T18:29:26.000Z | 2022-01-12T18:46:42.000Z | import torch
from torch import nn
from torchvision.models.resnet import BasicBlock as BasicResidualBlock
from stable_baselines3.common.preprocessing import preprocess_obs
NETWORK_ARCHITECTURE_DEFINITIONS = {
'BasicCNN': [
{'out_dim': 32, 'kernel_size': 8, 'stride': 4},
{'out_dim': 64, 'kernel_size': 4, 'stride': 2},
{'out_dim': 64, 'kernel_size': 3, 'stride': 1},
],
'MAGICALCNN': [
{'out_dim': 32, 'kernel_size': 5, 'stride': 1, 'padding': 2},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
],
'MAGICALCNN-resnet': [
{'out_dim': 64, 'stride': 4, 'residual': True},
{'out_dim': 128, 'stride': 2, 'residual': True},
],
'MAGICALCNN-resnet-128': [
{'out_dim': 64, 'stride': 4, 'residual': True},
{'out_dim': 128, 'stride': 2, 'residual': True},
{'out_dim': 128, 'stride': 2, 'residual': True},
],
'MAGICALCNN-resnet-256': [
{'out_dim': 64, 'stride': 4, 'residual': True},
{'out_dim': 128, 'stride': 2, 'residual': True},
{'out_dim': 256, 'stride': 2, 'residual': True},
],
'MAGICALCNN-small': [
{'out_dim': 32, 'kernel_size': 5, 'stride': 2, 'padding': 2},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
{'out_dim': 64, 'kernel_size': 3, 'stride': 2, 'padding': 1},
]
}
def compute_output_shape(observation_space, layers):
"""Compute the size of the output after passing an observation from
`observation_space` through the given `layers`."""
# [None] adds a batch dimension to the random observation
torch_obs = torch.tensor(observation_space.sample()[None])
with torch.no_grad():
sample = preprocess_obs(torch_obs, observation_space, normalize_images=True)
for layer in layers:
# forward prop to compute the right size
sample = layer(sample)
# make sure batch axis still matches
assert sample.shape[0] == torch_obs.shape[0]
# return everything else
return sample.shape[1:]
def magical_conv_block(in_chans, out_chans, kernel_size, stride, padding, use_bn, use_sn, dropout, activation_cls):
# We sometimes disable bias because batch norm has its own bias.
conv_layer = nn.Conv2d(
in_chans,
out_chans,
kernel_size=kernel_size,
stride=stride,
                               padding=padding,
                               bias=not use_bn,
                               padding_mode='zeros')
    if use_sn:
        # apply spectral norm if necessary
        conv_layer = nn.utils.spectral_norm(conv_layer)
    layers = [conv_layer]
    if dropout:
        # dropout after conv, but before activation
        # (doesn't matter for ReLU)
        layers.append(nn.Dropout2d(dropout))
    layers.append(activation_cls())
    if use_bn:
        # Insert BN layer after convolution (and optionally after
        # dropout). I doubt order matters much, but see here for
        # CONTROVERSY:
        # https://github.com/keras-team/keras/issues/1802#issuecomment-187966878
        layers.append(nn.BatchNorm2d(out_chans))
    return layers


class MAGICALCNN(nn.Module):
    """The CNN from the MAGICAL paper."""

    def __init__(self,
                 observation_space,
                 representation_dim=128,
                 use_bn=True,
                 use_ln=False,
                 dropout=None,
                 use_sn=False,
                 arch_str='MAGICALCNN-resnet-128',
                 ActivationCls=torch.nn.ReLU):
        super().__init__()
        # If block_type == resnet, use ResNet's basic block.
        # If block_type == magical, use MAGICAL block from its paper.
        assert arch_str in NETWORK_ARCHITECTURE_DEFINITIONS.keys()
        width = 1 if 'resnet' in arch_str else 2
        self.features_dim = representation_dim
        w = width
        self.architecture_definition = NETWORK_ARCHITECTURE_DEFINITIONS[arch_str]
        conv_layers = []
        in_dim = observation_space.shape[0]
        block = magical_conv_block
        if 'resnet' in arch_str:
            block = BasicResidualBlock
        for layer_definition in self.architecture_definition:
            if layer_definition.get('residual', False):
                block_kwargs = {
                    'stride': layer_definition['stride'],
                    'downsample': nn.Sequential(
                        nn.Conv2d(in_dim,
                                  layer_definition['out_dim'],
                                  kernel_size=1,
                                  stride=layer_definition['stride']),
                        nn.BatchNorm2d(layer_definition['out_dim'])),
                }
                conv_layers += [block(in_dim,
                                      layer_definition['out_dim'] * w,
                                      **block_kwargs)]
            else:
                block_kwargs = {
                    'stride': layer_definition['stride'],
                    'kernel_size': layer_definition['kernel_size'],
                    'padding': layer_definition['padding'],
                    'use_bn': use_bn,
                    'use_sn': use_sn,
                    'dropout': dropout,
                    'activation_cls': ActivationCls,
                }
                conv_layers += block(in_dim,
                                     layer_definition['out_dim'] * w,
                                     **block_kwargs)
            in_dim = layer_definition['out_dim'] * w
        if 'resnet' in arch_str:
            conv_layers.append(nn.Conv2d(in_dim, 32, 1))
        conv_layers.append(nn.Flatten())

        # another FC layer to make feature maps the right size
        fc_in_size, = compute_output_shape(observation_space, conv_layers)
        fc_layers = [
            nn.Linear(fc_in_size, 128 * w),
            ActivationCls(),
            nn.Linear(128 * w, representation_dim),
        ]
        if use_sn:
            # apply SN to linear layers too
            fc_layers = [
                nn.utils.spectral_norm(layer) if isinstance(layer, nn.Linear) else layer
                for layer in fc_layers
            ]
        all_layers = [*conv_layers, *fc_layers]
        self.shared_network = nn.Sequential(*all_layers)

    def forward(self, x):
        # warn_on_non_image_tensor(x)
        return self.shared_network(x)
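`magical_conv_block` returns a plain list of layers rather than a module, which is why `MAGICALCNN.__init__` can concatenate blocks with `+=` and wrap everything in a single `nn.Sequential` at the end. A torch-free sketch of that list-building pattern (every name here is illustrative, not from the original code):

```python
# Illustrative stand-in: each "block" returns a list, lists concatenate with +=,
# and the result is wrapped once at the end (tuple() plays the role of nn.Sequential).
def fake_block(name, use_bn=False):
    layers = [name + "-conv", name + "-act"]
    if use_bn:
        layers.append(name + "-bn")
    return layers

all_layers = []
for i, bn in enumerate([True, False]):
    all_layers += fake_block("b{}".format(i), use_bn=bn)

network = tuple(all_layers)
print(network)  # ('b0-conv', 'b0-act', 'b0-bn', 'b1-conv', 'b1-act')
```

Returning a list keeps block composition cheap and defers the module wrap to a single place.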
'''
Numbers with non-decreasing digits
'''
N = int(input().strip())
i = N
while i > 0:
    digits = list(str(i))
    if digits == sorted(digits):
        print(i)
        break
    i -= 1
else:
    print(i)
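The same search can be wrapped in a reusable helper (the function name is mine, not part of the original script), which makes it testable without stdin:

```python
def largest_nondecreasing_at_most(n):
    """Largest i <= n whose digits never decrease left to right."""
    for i in range(n, -1, -1):
        digits = list(str(i))
        if digits == sorted(digits):
            return i

print(largest_nondecreasing_at_most(120))  # -> 119 (digits 1 <= 1 <= 9)
print(largest_nondecreasing_at_most(135))  # -> 135, already non-decreasing
```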
import pytest
from functimer import Unit, get_unit


@pytest.mark.parametrize(
    "_input, expected",
    [
        ("0.2 ns", Unit.NANOSECOND),
        ("0.2 µs", Unit.MICROSECOND),
        ("0.2 ms", Unit.MILLISECOND),
        ("0.2 s", Unit.SECOND),
        ("0.2 m", Unit.MINUTE),
    ],
)
def test_get_unit(_input, expected):
    assert get_unit(_input) == expected


def test_get_unit_func(mock_timed):
    assert get_unit(mock_timed(lambda x: x, unit=Unit.NANOSECOND)()) == Unit.NANOSECOND
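Without `functimer` installed, the string format the parametrized test feeds in (`"<value> <unit>"`) can still be illustrated. `unit_token` below is a hypothetical helper of my own, not the library's `get_unit` (which returns a `Unit` enum member, not a string):

```python
def unit_token(timed_str):
    # Hypothetical helper: pull the trailing unit token from "0.2 ms" -> "ms".
    return timed_str.strip().rsplit(" ", 1)[-1]

print(unit_token("0.2 ms"))  # -> ms
```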
"""mysite URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.0/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path
from django.conf.urls import include, url
from django.conf import settings
from django.views import static as vstatic
from django.conf.urls.static import static
from .utils.upload import upload_image
# API
from rest_framework.routers import DefaultRouter
from api import views
# Create a router and register our viewsets with it.
# Registering viewsets with a router class automatically generates the URL conf for the API.
router = DefaultRouter()
router.register(r'Details', views.MonitorViewSet)
urlpatterns = [
    url(r'^admin/upload/(?P<dir_name>[^/]+)$', upload_image, name='upload_image'),
    url(r'^uploads/(?P<path>.*)$', vstatic.serve, {'document_root': settings.MEDIA_ROOT, }),
    path('admin/', admin.site.urls),
    url(r'^$', include('cmdb.urls')),
    url(r'^index', include('cmdb.urls')),
    url(r'^cmdb/', include('cmdb.urls')),
    url(r'^repo/', include('repo.urls')),
    url(r'^taskdo/', include('taskdo.urls')),
    url(r'^hosts/', include('hosts.urls')),
    url(r'^users/', include('users.urls')),
    url(r'^users/', include('django.contrib.auth.urls')),
    url(r'mdeditor/', include('mdeditor.urls')),
    url(r'^wssh/', include('wssh.urls')),
    # API
    url(r'^api', include(router.urls)),
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
    url(r'^chart/', include('api.urls')),
]

if settings.DEBUG:
    # static files (images, css, javascript, etc.)
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
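Each `url()` entry is a regular-expression route; named groups such as `(?P<path>...)` become keyword arguments passed to the view. The capture behaviour of the uploads rule, demonstrated with plain `re` (the example path is made up):

```python
import re

# Same pattern as the uploads route above.
pattern = re.compile(r'^uploads/(?P<path>.*)$')
match = pattern.match('uploads/img/logo.png')
print(match.group('path'))  # -> img/logo.png
```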
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):

    dependencies = [
        ('commons', '0016_auto_20170223_1149'),
    ]

    operations = [
        migrations.AddField(
            model_name='oer',
            name='content',
            field=models.TextField(help_text='formal description of a problem or other original content', null=True, verbose_name='content', blank=True),
        ),
        migrations.AddField(
            model_name='project',
            name='allow_external_mentors',
            field=models.BooleanField(default=False, verbose_name='allow external mentors'),
        ),
        migrations.AlterField(
            model_name='project',
            name='mentoring_model',
            field=models.PositiveIntegerField(help_text='once mentoring projects exist, you can only move from model A or B to A+B.', null=True, verbose_name='mentoring setup model', choices=[(0, 'mentoring is not available'), (1, 'A - The community administrator chooses the mentor'), (2, 'B - The mentee chooses the mentor'), (3, 'B+A - The mentee or the administrator choose the mentor')]),
        ),
    ]
| 40.566667 | 393 | 0.646672 | 140 | 1,217 | 5.485714 | 0.55 | 0.035156 | 0.059896 | 0.070313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022678 | 0.239113 | 1,217 | 29 | 394 | 41.965517 | 0.806695 | 0.017256 | 0 | 0.304348 | 0 | 0 | 0.365159 | 0.037688 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
33970f006ffccfa842f4d3fd1b3f034367fb3e03 | 1,827 | py | Python | tools/kv_seek_account_history.py | torquem-ch/silksnake | 68794838e8c2be036f158b2a842ba9201be610a3 | [
"Apache-2.0"
] | 3 | 2020-09-16T14:47:58.000Z | 2021-03-08T13:26:40.000Z | tools/kv_seek_account_history.py | torquem-ch/silksnake | 68794838e8c2be036f158b2a842ba9201be610a3 | [
"Apache-2.0"
] | null | null | null | tools/kv_seek_account_history.py | torquem-ch/silksnake | 68794838e8c2be036f158b2a842ba9201be610a3 | [
"Apache-2.0"
] | 1 | 2021-03-15T11:02:08.000Z | 2021-03-15T11:02:08.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""The kv_seek_account_history command allows to query the KV 'History of Accounts' table."""
import argparse
import context # pylint: disable=unused-import
from silksnake.helpers.dbutils import tables
from silksnake.remote import kv_metadata
from silksnake.remote import kv_utils
from silksnake.remote.kv_remote import DEFAULT_TARGET
def kv_seek_account_history(account_address: str, block_number: int, target: str = DEFAULT_TARGET):
    """ Search for the provided account address in KV 'History of Accounts' table.
    """
    account_history_key = kv_metadata.encode_account_history_key(account_address, block_number)

    print('REQ1 account_address:', account_address, '(key: ' + str(account_history_key.hex()) + ')')
    print('RSP1 account history: [')
    walker = lambda key, value: print('key:', key.hex(), 'value:', value.hex())
    kv_utils.kv_walk(target, tables.ACCOUNTS_HISTORY_LABEL, account_history_key, walker)
    print(']')

    print('REQ2 account_address:', account_address, '(key: ' + str(account_history_key.hex()) + ')')
    print('RSP2 storage history: [')
    walker = lambda key, value: print('key:', key.hex(), 'value:', value.hex())
    kv_utils.kv_walk(target, tables.STORAGE_HISTORY_LABEL, account_history_key, walker)
    print(']')


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('account_address', help='the account address as hex string (w or w/o 0x prefix)')
    parser.add_argument('block_number', help='the block number as integer')
    parser.add_argument('-t', '--target', default=DEFAULT_TARGET, help='the server location as string <address>:<port>')
    args = parser.parse_args()
    kv_seek_account_history(args.account_address, int(args.block_number), args.target)
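`kv_utils.kv_walk` streams key/value pairs starting from the seek key and invokes the walker callback for each one. A minimal in-memory analogue of that callback pattern (this dict-backed `kv_walk` is a sketch under the assumption of ordered iteration from the start key, not silksnake's actual implementation):

```python
def kv_walk(store, start_key, walker):
    # In-memory sketch: visit keys >= start_key in sorted order, invoking the callback.
    for key in sorted(k for k in store if k >= start_key):
        walker(key, store[key])

collected = []
kv_walk({b'\x01': b'a', b'\x03': b'c', b'\x02': b'b'}, b'\x02',
        lambda key, value: collected.append((key.hex(), value.hex())))
print(collected)  # -> [('02', '62'), ('03', '63')]
```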
#! /usr/bin/env python
# Sample Demo program to interface with the DHT11
# Created by Etech-SW
import adafruit_dht
import board
import time
dhtSensor = adafruit_dht.DHT11(board.D4)
try:
    while True:
        try:
            humidity = dhtSensor.humidity
            temp_c = dhtSensor.temperature
            temp_f = temp_c * (9 / 5) + 32
            print(
                "Temp: {:.1f} F / {:.1f} C Humidity: {}% ".format(
                    temp_f, temp_c, humidity
                )
            )
            time.sleep(2.0)
        except RuntimeError as error:
            print(error.args[0])
            time.sleep(2.0)
            continue
        except Exception as error:
            dhtSensor.exit()
            raise error
except KeyboardInterrupt:
    # If there is a KeyboardInterrupt (when you press ctrl+c), exit the program and cleanup
    print("Cleaning up!")
    dhtSensor.exit()
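The Celsius-to-Fahrenheit conversion used inside the loop, factored into a standalone function (the helper name is mine, not part of the original sketch):

```python
def c_to_f(temp_c):
    # Same formula as in the loop above: F = C * 9/5 + 32.
    return temp_c * (9 / 5) + 32

print(c_to_f(0))    # -> 32.0
print(c_to_f(100))  # -> 212.0
```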
import ipywidgets
import numpy as np
import pandas as pd
import pathlib
from scipy.stats import linregress
from bokeh.io import push_notebook, show, output_notebook
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, RangeTool, Circle, Slope, Label, Legend, LegendItem, LinearColorMapper
from bokeh.layouts import gridplot, column, row
from bokeh.transform import transform
class view_files:
    def __init__(self):
        self.ep_columns_filtered = ['date', 'time', 'H', 'qc_H', 'LE', 'qc_LE', 'sonic_temperature', 'air_temperature',
                                    'air_pressure', 'air_density', 'ET', 'e', 'es', 'RH', 'VPD', 'Tdew', 'u_unrot',
                                    'v_unrot', 'w_unrot', 'u_rot', 'v_rot', 'w_rot', 'wind_speed', 'max_wind_speed',
                                    'wind_dir', 'u*', '(z-d)/L', 'un_H', 'H_scf', 'un_LE', 'LE_scf', 'u_var', 'v_var',
                                    'w_var', 'ts_var', 'H_strg', 'LE_strg']

        self.lf_columns_filtered = ['TIMESTAMP', 'Hs', 'u_star', 'Ts_stdev', 'Ux_stdev', 'Uy_stdev', 'Uz_stdev',
                                    'Ux_Avg', 'Uy_Avg', 'Uz_Avg', 'Ts_Avg', 'LE_wpl', 'Hc', 'H2O_mean', 'amb_tmpr_Avg',
                                    'amb_press_mean', 'Tc_mean', 'rho_a_mean', 'CO2_sig_strgth_mean',
                                    'H2O_sig_strgth_mean', 'T_tmpr_rh_mean', 'e_tmpr_rh_mean', 'e_sat_tmpr_rh_mean',
                                    'H2O_tmpr_rh_mean', 'RH_tmpr_rh_mean', 'Rn_Avg', 'albedo_Avg', 'Rs_incoming_Avg',
                                    'Rs_outgoing_Avg', 'Rl_incoming_Avg', 'Rl_outgoing_Avg', 'Rl_incoming_meas_Avg',
                                    'Rl_outgoing_meas_Avg', 'shf_Avg(1)', 'shf_Avg(2)', 'precip_Tot', 'panel_tmpr_Avg']

        self.TOOLS = "pan,wheel_zoom,box_zoom,box_select,lasso_select,reset"

        output_notebook()

        self.tabs = ipywidgets.Tab([self.tab00(), self.tab01(), self.tab02()])
        self.tabs.set_title(0, 'EP - Master Folder')
        self.tabs.set_title(1, 'LowFreq - Master Folder')
        self.tabs.set_title(2, 'Plot')

        self.source_ep = ColumnDataSource(data=dict(x=[], y=[], y2=[], date=[], time=[], et=[]))

        self.fig_01 = figure(title='EP', plot_height=250, plot_width=700, x_axis_type='datetime', tools=self.TOOLS)
        circle_ep = self.fig_01.circle(x='x', y='y', source=self.source_ep)

        self.fig_02 = figure(title='LF', plot_height=250, plot_width=700, x_axis_type='datetime',
                             x_range=self.fig_01.x_range)
        circle_lf = self.fig_02.circle(x='x', y='y2', source=self.source_ep, color='red')

        self.fig_03 = figure(title='EP x LF', plot_height=500, plot_width=500)
        circle_teste = self.fig_03.circle(x='y2', y='y', source=self.source_ep, color='green',
                                          selection_color="green", selection_fill_alpha=0.3, selection_line_alpha=0.3,
                                          nonselection_fill_alpha=0.1, nonselection_fill_color="grey",
                                          nonselection_line_color="grey", nonselection_line_alpha=0.1)

        # self.fig_04 = figure(title='ET', plot_width=1200, plot_height=600)
        # colors = ['#440154', '#404387', '#29788E', '#22A784', '#79D151', '#FDE724']
        # self.colorMapper = LinearColorMapper(palette=colors)
        # self.fig_04.rect(source=self.source_ep, x='date', y='time', fill_color=transform('et', self.colorMapper), line_color=None, width=1, height=1)
        # self.hm = self.fig_04.rect(source=self.source_ep, x='date', y='time', line_color=None, width=1, height=1)

        self.label = Label(x=1.1, y=18, text='teste', text_color='black')
        self.label2 = Label(x=1.1, y=10, text='teste2', text_color='black')
        self.label3 = Label(x=1.2, y=11, text='teste3', text_color='black')
        self.label4 = Label(x=1, y=11, text='teste4', text_color='black')
        # self.label5 = Label(x=1, y=11, text='teste5', text_color='black')

        self.fig_03.add_layout(self.label)
        self.fig_03.add_layout(self.label2)
        self.fig_03.add_layout(self.label3)
        self.fig_03.add_layout(self.label4)
        # self.fig_03.add_layout(self.label5)

        # self.label_teste = Label(x=0, y=0, text='fasdfasdfasdfasdfas', text_color='black')
        # self.fig_03.add_layout(self.label_teste)

        # self.source_ep.selected.on_change('indices', self.selection_change)

        # slope11_l = self.fig_03.line(color='orange', line_dash='dashed')
        slope_11 = Slope(gradient=1, y_intercept=0, line_color='orange', line_dash='dashed', line_width=3)
        self.fig_03.add_layout(slope_11)

        # self.slope_lin_label = self.fig_03.line(color='red', line_width=3)
        self.slope_linregress = Slope(gradient=1.3, y_intercept=0, line_color='red', line_width=3)
        self.fig_03.add_layout(self.slope_linregress)

        c = column([self.fig_01, self.fig_02])

        display(self.tabs)
        show(row(c, self.fig_03), notebook_handle=True)

    # def teste_apagar(self, attr, old, new):
    #     print(new)

    def tab00(self):
        self.out_00 = ipywidgets.Output()
        with self.out_00:
            self.path_EP = ipywidgets.Text(placeholder='Path EP output',
                                           layout=ipywidgets.Layout(width='90%'))
            self.button_path_ep = ipywidgets.Button(description='Show EP')
            self.button_path_ep.on_click(self._button_Path)

            self.select_meta = ipywidgets.Select(description='Configs:',
                                                 layout=ipywidgets.Layout(width='90%'),
                                                 style={'description_width': 'initial'})
            self.select_meta.observe(self._select_config, 'value')
        return ipywidgets.VBox([ipywidgets.HBox([self.path_EP, self.button_path_ep]),
                                self.select_meta,
                                self.out_00])

    def tab01(self):
        self.out_01 = ipywidgets.Output()
        with self.out_01:
            self.path_LF = ipywidgets.Text(placeholder='Path LF output',
                                           layout=ipywidgets.Layout(width='90%'))
            self.button_path_lf = ipywidgets.Button(description='Show LF')
            self.button_path_lf.on_click(self._button_Path)
            self.html_lf = ipywidgets.HTML()
        return ipywidgets.VBox([self.out_01,
                                ipywidgets.HBox([self.path_LF, self.button_path_lf]),
                                self.html_lf])

    def tab02(self):
        self.out_02 = ipywidgets.Output()
        with self.out_02:
            self.dropdown_yAxis_ep = ipywidgets.Dropdown(description='EP Y-Axis', options=self.ep_columns_filtered)
            self.dropdown_yAxis_lf = ipywidgets.Dropdown(description='LF Y-Axis', options=self.lf_columns_filtered)
            self.checkBox_EnergyBalance = ipywidgets.Checkbox(value=False, description='Energy Balance')
            # self.intSlider_flagFilter = ipywidgets.IntSlider(value=2, min=0, max=2, step=1, description='Flag Filter')
            self.selectionSlider_flagFilter = ipywidgets.SelectionSlider(options=[0, 1, 2, 'All'], value='All', description='Flag Filter')
            self.checkBox_rainfallFilter = ipywidgets.Checkbox(value=False, description='Rainfall Filter')
            self.floatSlider_signalStrFilter = ipywidgets.FloatSlider(value=0, min=0, max=1, step=0.01, description='Signal Str Filter')
            self.selectionRangeSlider_date = ipywidgets.SelectionRangeSlider(options=[0, 1], description='Date Range', layout=ipywidgets.Layout(width='500px'))
            self.selectionRangeSlider_hour = ipywidgets.SelectionRangeSlider(options=[0, 1], description='Hour Range', layout=ipywidgets.Layout(width='500px'))

            self.button_plot = ipywidgets.Button(description='Plot')
            # self.button_plot.on_click(self.update_ep)
            self.button_plot.on_click(self._button_plot)

            controls_ep = [self.dropdown_yAxis_ep,
                           self.selectionSlider_flagFilter,
                           self.checkBox_rainfallFilter,
                           self.floatSlider_signalStrFilter,
                           self.checkBox_EnergyBalance,
                           self.selectionRangeSlider_date,
                           self.selectionRangeSlider_hour]
            for control in controls_ep:
                control.observe(self.update_ep, 'value')

            controls_lf = [self.dropdown_yAxis_lf]
            for control in controls_lf:
                # control.observe(self.update_lf, 'value')
                control.observe(self.update_ep, 'value')
        return ipywidgets.VBox([ipywidgets.HBox([self.dropdown_yAxis_ep, self.dropdown_yAxis_lf, self.checkBox_EnergyBalance]),
                                ipywidgets.HBox([self.selectionSlider_flagFilter, self.checkBox_rainfallFilter, self.floatSlider_signalStrFilter]),
                                self.selectionRangeSlider_date,
                                self.selectionRangeSlider_hour,
                                self.button_plot])

    def _button_Path(self, *args):
        if self.tabs.selected_index == 0:
            with self.out_00:
                try:
                    self.folder_path_ep = pathlib.Path(self.path_EP.value)
                    readme = self.folder_path_ep.rglob('Readme.txt')
                    readme_df = pd.read_csv(list(readme)[0], delimiter=',')
                    temp_list = [row.to_list() for i, row in readme_df[['rotation', 'lowfrequency', 'highfrequency', 'wpl', 'flagging', 'name']].iterrows()]
                    a = []
                    self.config_name = []
                    for i in temp_list:
                        self.config_name.append(i[5])
                        a.append('Rotation:{} |LF:{} |HF:{} |WPL:{} |Flag:{}'.format(i[0], i[1], i[2], i[3], i[4]))
                    self.select_meta.options = a
                except:
                    print('Erro')

        if self.tabs.selected_index == 1:
            with self.out_01:
                try:
                    self.folder_path_lf = pathlib.Path(self.path_LF.value)
                    lf_files = self.folder_path_lf.rglob('TOA5*.flux.dat')
                    self.dfs_02_01 = []
                    for file in lf_files:
                        # print(file)
                        self.dfs_02_01.append(pd.read_csv(file, skiprows=[0, 2, 3], parse_dates=['TIMESTAMP'], na_values='NAN', usecols=self.lf_columns_filtered))
                    self.dfs_concat_02_01 = pd.concat(self.dfs_02_01)
                    # self.dropdown_yAxis_lf.options = self.lf_columns_filtered
                    self.html_lf.value = "<table> <tr><td><span style='font-weight:bold'>Number of Files:</spam></td> <td>{}</td></tr><tr><td><span style='font-weight:bold'>Begin:</span></td> <td>{}</td></tr> <tr> <td><span style='font-weight:bold'>End:</span></td><td>{}</td> </tr>".format(len(self.dfs_02_01), self.dfs_concat_02_01['TIMESTAMP'].min(), self.dfs_concat_02_01['TIMESTAMP'].max())
                except:
                    print('erro')

    def _select_config(self, *args):
        with self.out_00:
            # self.dfs_01_01 = []
            # for i in self.select_meta.index:
            full_output_files = self.folder_path_ep.rglob('*{}*_full_output*.csv'.format(self.config_name[self.select_meta.index]))
            dfs_single_config = []
            for file in full_output_files:
                dfs_single_config.append(pd.read_csv(file, skiprows=[0, 2], na_values=-9999, parse_dates={'TIMESTAMP': ['date', 'time']}, keep_date_col=True, usecols=self.ep_columns_filtered))
                # self.df_ep = pd.read_csv(file, skiprows=[0,2], na_values=-9999, parse_dates={'TIMESTAMP':['date', 'time']}, usecols=self.ep_columns_filtered)
            self.df_ep = pd.concat(dfs_single_config)
            # try:
            #     self.dropdown_yAxis_ep.options = self.ep_columns_filtered
            #     self.dropdown_yAxis_ep.value = 'H'
            # except:
            #     pass

    def filter_flag_ep(self):
        try:
            flag = self.dfs_compare[
                (self.dfs_compare['H2O_sig_strgth_mean'] >= self.floatSlider_signalStrFilter.value) &
                (self.dfs_compare['TIMESTAMP'].dt.date >= self.selectionRangeSlider_date.value[0]) &
                (self.dfs_compare['TIMESTAMP'].dt.date <= self.selectionRangeSlider_date.value[1]) &
                (self.dfs_compare['TIMESTAMP'].dt.time >= self.selectionRangeSlider_hour.value[0]) &
                (self.dfs_compare['TIMESTAMP'].dt.time <= self.selectionRangeSlider_hour.value[1])
            ]
        except:
            flag = self.dfs_compare[
                (self.dfs_compare['H2O_sig_strgth_mean'] >= self.floatSlider_signalStrFilter.value)
            ]

        if self.checkBox_rainfallFilter.value == True:
            flag = flag[flag['precip_Tot'] == 0]

        if self.checkBox_EnergyBalance.value == True:
            if self.selectionSlider_flagFilter.value in [0, 1, 2]:
                flag = flag[flag[['qc_H', 'qc_LE']].isin([self.selectionSlider_flagFilter.value]).sum(axis=1) == 2]
            if self.selectionSlider_flagFilter.value == 'All':
                pass

        if self.checkBox_EnergyBalance.value == False:
            if self.selectionSlider_flagFilter.value in [0, 1, 2]:
                flag = flag[flag['qc_{}'.format(self.dropdown_yAxis_ep.value)] == self.selectionSlider_flagFilter.value]
            if self.selectionSlider_flagFilter.value == 'All':
                pass
        return flag

    def _button_plot(self, *args):
        with self.out_02:
            self.dfs_compare = pd.merge(left=self.dfs_concat_02_01, right=self.df_ep, how='outer', on='TIMESTAMP', suffixes=("_lf", "_ep"))
            self.selectionRangeSlider_date.options = self.dfs_compare['TIMESTAMP'].dt.date.unique()
            self.selectionRangeSlider_hour.options = sorted(list(self.dfs_compare['TIMESTAMP'].dt.time.unique()))
            # print(self.dfs_compare)
            # self.update_lf()
            # self.slope_linregress.gradient = 5
            self.update_ep()

    def update_ep(self, *args):
        self.df_filter_ep = self.filter_flag_ep()
        # self.source_ep.data = dict(x=self.df_filter_ep['TIMESTAMP'], y=self.df_filter_ep['{}'.format(self.dropdown_yAxis_ep.value)], y2=self.df_filter_ep['{}'.format(self.dropdown_yAxis_lf.value)])
        # self.fig_01.xaxis.axis_label = 'TIMESTAMP'
        # self.fig_01.yaxis.axis_label = '{}'.format(self.dropdown_yAxis_ep.value)

        if self.checkBox_EnergyBalance.value == True:
            self.source_ep.data = dict(x=self.df_filter_ep['TIMESTAMP'],
                                       y=self.df_filter_ep[['H', 'LE', 'H_strg', 'LE_strg']].sum(axis=1, min_count=1),
                                       y2=self.df_filter_ep['Rn_Avg'] - self.df_filter_ep[['shf_Avg(1)', 'shf_Avg(2)']].mean(axis=1))
            # self.hm.fill_color = transform('et', self.colorMapper)
            # self.df_filter_ep[['Rn_Avg', 'shf_Avg(1)']].sum(axis=1, min_count=1)
            # self.df_filter_ep[['H', 'LE']].sum(axis=1, min_count=1)

            self.fig_01.xaxis.axis_label = 'TIMESTAMP'
            self.fig_01.yaxis.axis_label = 'H + LE'
            self.fig_02.xaxis.axis_label = 'TIMESTAMP'
            self.fig_02.yaxis.axis_label = 'Rn - G'
            self.fig_03.yaxis.axis_label = 'H + LE'
            self.fig_03.xaxis.axis_label = 'Rn - G'
            # self.fig_04.x_range.factors = self.df_filter_ep['date'].unique()
            # self.fig_04.y_range.factors = self.df_filter_ep['time'].unique()

            self.df_corr = pd.DataFrame()
            self.df_corr['EP'] = self.df_filter_ep[['H', 'LE']].sum(axis=1, min_count=1)
            self.df_corr['EP'] = self.df_filter_ep[['H', 'LE', 'H_strg', 'LE_strg']].sum(axis=1, min_count=1)
            # self.df_corr['LF'] = self.df_filter_ep[['Rn_Avg', 'shf_Avg(2)']].sum(axis=1, min_count=1)
            self.df_corr['LF'] = self.df_filter_ep['Rn_Avg'] - self.df_filter_ep[['shf_Avg(1)', 'shf_Avg(2)']].mean(axis=1)
            self.label.text = 'Pearson: {:.4f}'.format(self.df_corr.corr(method='pearson')['LF'][0])
            self.df_corr.dropna(inplace=True)

            linear_regression = linregress(x=self.df_corr['LF'], y=self.df_corr['EP'])
            x = np.array(self.df_corr['LF'].to_list())
            x1 = x[:, np.newaxis]
            fit_linear = np.linalg.lstsq(x1, self.df_corr['EP'], rcond=None)
            # pbias = 100 * (self.df_corr['EP'] - self.df_corr['LF']).sum() / self.df_corr['LF'].sum()
            # self.label5.text = 'PBIAS: {:.4f}'.format(pbias)

            self.slope_linregress.gradient = fit_linear[0][0]
            self.label2.text = 'Slope: {:.4f}'.format(fit_linear[0][0])
            # self.label2.text = 'R: {:.4f} '.format(linear_regression[2])

            self.label.x = np.nanmin(self.df_corr['LF'])
            self.label.y = np.nanmax(self.df_corr['EP'])
            self.label2.x = np.nanmin(self.df_corr['LF'])
            self.label2.y = np.nanmax(self.df_corr['EP'] - 0.1 * np.nanmax(self.df_corr['EP']))
            self.label3.x = np.nanmin(self.df_corr['LF'])
            self.label3.y = np.nanmax(self.df_corr['EP'] - 0.2 * np.nanmax(self.df_corr['EP']))
            self.label4.x = np.nanmin(self.df_corr['LF'])
            self.label4.y = np.nanmax(self.df_corr['EP'] - 0.3 * np.nanmax(self.df_corr['EP']))
            # self.label5.x = np.nanmin(self.df_corr['LF'])
            # self.label5.y = np.nanmax(self.df_corr['EP'] - 0.4 * np.nanmax(self.df_corr['EP']))

            # self.slope_linregress.gradient = linear_regression[0]
            # self.slope_linregress.y_intercept = linear_regression[1]
            self.label3.text = 'ET: {:.2f}'.format(self.df_filter_ep['ET'].sum() / 2)
            self.label4.text = 'y = {:.4f}x + {:.4f}'.format(linear_regression[0], linear_regression[1])
            # self.slope_lin_label.legend_label = 'ok'
            # self.legend_fig03[0].label = 'ok'

        if self.checkBox_EnergyBalance.value == False:
            self.source_ep.data = dict(x=self.df_filter_ep['TIMESTAMP'],
                                       y=self.df_filter_ep['{}'.format(self.dropdown_yAxis_ep.value)],
                                       y2=self.df_filter_ep['{}'.format(self.dropdown_yAxis_lf.value)])
            self.fig_01.xaxis.axis_label = 'TIMESTAMP'
            self.fig_01.yaxis.axis_label = '{}'.format(self.dropdown_yAxis_ep.value)

        push_notebook()
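The energy-balance branch of `filter_flag_ep` keeps a row only when both quality flags (`qc_H` and `qc_LE`) equal the selected value. The same row-selection logic without pandas (the sample rows are invented for illustration):

```python
# Each dict stands in for one half-hourly record with its two quality flags.
rows = [{'qc_H': 0, 'qc_LE': 0}, {'qc_H': 0, 'qc_LE': 1}, {'qc_H': 2, 'qc_LE': 2}]
selected_flag = 0
kept = [r for r in rows if r['qc_H'] == selected_flag and r['qc_LE'] == selected_flag]
print(kept)  # -> [{'qc_H': 0, 'qc_LE': 0}]
```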
import numpy as np
import matplotlib.pyplot as plt
DATA_14 = np.loadtxt("Poincare_1.400000.dat")
DATA_144 = np.loadtxt("Poincare_1.440000.dat")
DATA_1465 = np.loadtxt("Poincare_1.465000.dat")
Omega_14 = DATA_14[:, 0]
Theta_14 = DATA_14[:, 1]
time_14 = DATA_14[:, 2]

Omega_144 = DATA_144[:, 0]
Theta_144 = DATA_144[:, 1]
time_144 = DATA_144[:, 2]

Omega_1465 = DATA_1465[:, 0]
Theta_1465 = DATA_1465[:, 1]
time_1465 = DATA_1465[:, 2]
plt.figure()
plt.scatter(Omega_14,Theta_14,s=2)
plt.xlabel("Theta")
plt.ylabel("Omega")
plt.savefig("Poincare_14.png")
plt.figure()
plt.scatter(Omega_144,Theta_144,s=2)
plt.xlabel("Theta")
plt.ylabel("Omega")
plt.savefig("Poincare_144.png")
plt.figure()
plt.scatter(Omega_1465,Theta_1465,s=2)
plt.xlabel("Theta")
plt.ylabel("Omega")
plt.savefig("Poincare_1465.png")
DATA_Bifur = np.loadtxt("Bifurcation.dat")
Omega_Bifur = DATA_Bifur[:, 0]
Theta_Bifur = DATA_Bifur[:, 1]
plt.figure()
plt.scatter(Omega_Bifur,Theta_Bifur,s=2)
plt.xlabel("FD")
plt.ylabel("Theta")
plt.savefig("Bifurcation.png")
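The three load/plot blocks above differ only in the drive amplitude encoded in the file name, so they could be driven by one loop. A small helper for the file-name convention (a refactoring sketch, not part of the original script):

```python
def poincare_filename(amplitude):
    # File names follow the pattern Poincare_<amplitude to 6 decimals>.dat.
    return "Poincare_{:.6f}.dat".format(amplitude)

for fd in (1.4, 1.44, 1.465):
    print(poincare_filename(fd))
# -> Poincare_1.400000.dat, Poincare_1.440000.dat, Poincare_1.465000.dat
```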
#!/usr/local/bin/python
import os
import sys
import signal
global version, command_list
version = "0.0.1"
command_list = {
'exit': {'run': 'cl_exit', 'info': 'Exit from ZXC-shell'},
'help': {'run': 'cl_help', 'info': 'Show help page'}
}
def cl_exit():
print("ZXC shutdown")
sys.exit(0)
def cl_help():
zxc_info()
    print("Available commands")
for key in command_list:
print("\t" + key+"\t"+command_list[key]['info'])
print("")
def zxc_info():
print("ZXC v" + version + " Copyright 2020 AlexMcArrow Licensed by MIT")
def execute_zxc():
print("")
zxc_info()
print("")
    while True:
        user_input = input('zxc-shell$ ')  # Python 3: input() replaces raw_input()
        parts = user_input.split()
        if 0 < len(parts):
            if parts[0].lower() in command_list:
                globals()[command_list[parts[0].lower()]['run']]()
            else:
                print("unknown command [" + parts[0].lower() + "]")
def signal_handler(signal, frame):
cl_exit()
if __name__ == '__main__':
original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGINT, signal_handler)
execute_zxc()
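The shell above maps command names to function-name strings and resolves them through `globals()`; storing the function objects themselves in the table avoids that indirection. A sketch of the same dispatch pattern (hypothetical commands, not zxc's):

```python
def cmd_hello():
    return "hello"

def cmd_version():
    return "0.0.1"

# Map command names directly to callables instead of to name strings.
commands = {
    'hello': {'run': cmd_hello, 'info': 'Say hello'},
    'version': {'run': cmd_version, 'info': 'Show version'},
}

def dispatch(line):
    parts = line.split()
    if parts and parts[0].lower() in commands:
        return commands[parts[0].lower()]['run']()
    return "unknown command"

print(dispatch("hello"))  # hello
print(dispatch("bogus"))  # unknown command
```

Direct references also let a linter verify that every entry in the table actually exists.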
| 20.963636 | 80 | 0.585429 | 147 | 1,153 | 4.401361 | 0.401361 | 0.102009 | 0.055641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013809 | 0.246314 | 1,153 | 54 | 81 | 21.351852 | 0.730725 | 0.019081 | 0 | 0.131579 | 0 | 0 | 0.179646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.131579 | false | 0 | 0.078947 | 0 | 0.210526 | 0.210526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
339ba2d3a11f75dfeb2cbc077b4331d86e729ee7 | 2,292 | py | Python | pubsub/google/cloud/pubsub_v1/subscriber/_helper_threads.py | MatiasApi/CourseBV11 | 7a291238d5b0879e97acdabe037afaf1b0b7d60b | [
"Apache-2.0"
] | 2 | 2018-02-01T06:30:24.000Z | 2018-04-12T15:39:56.000Z | pubsub/google/cloud/pubsub_v1/subscriber/_helper_threads.py | MatiasApi/CourseBV11 | 7a291238d5b0879e97acdabe037afaf1b0b7d60b | [
"Apache-2.0"
] | 7 | 2020-03-24T15:50:06.000Z | 2021-06-08T19:57:39.000Z | pubsub/google/cloud/pubsub_v1/subscriber/_helper_threads.py | MatiasApi/CourseBV11 | 7a291238d5b0879e97acdabe037afaf1b0b7d60b | [
"Apache-2.0"
] | 1 | 2018-09-19T05:55:27.000Z | 2018-09-19T05:55:27.000Z | # Copyright 2017, Google LLC All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import uuid
__all__ = (
'QueueCallbackWorker',
'STOP',
)
_LOGGER = logging.getLogger(__name__)
# Helper thread stop indicator. This could be a sentinel object or None,
# but the sentinel object's ID can change if the process is forked, and
# None has the possibility of a user accidentally killing the helper
# thread.
STOP = uuid.uuid4()
class QueueCallbackWorker(object):
"""A helper that executes a callback for every item in the queue.
Calls a blocking ``get()`` on the ``queue`` until it encounters
:attr:`STOP`.
Args:
queue (~queue.Queue): A Queue instance, appropriate for crossing the
concurrency boundary implemented by ``executor``. Items will
be popped off (with a blocking ``get()``) until :attr:`STOP`
is encountered.
callback (Callable[[str, Dict], Any]): A callback that can process
items pulled off of the queue. Items are assumed to be a pair
of a method name to be invoked and a dictionary of keyword
arguments for that method.
"""
def __init__(self, queue, callback):
self.queue = queue
self._callback = callback
def __call__(self):
while True:
item = self.queue.get()
if item == STOP:
_LOGGER.debug('Exiting the QueueCallbackWorker.')
return
# Run the callback. If any exceptions occur, log them and
# continue.
try:
action, kwargs = item
self._callback(action, kwargs)
except Exception as exc:
_LOGGER.error('%s: %s', exc.__class__.__name__, exc)
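The class above is exercised by pushing `(method, kwargs)` pairs onto a queue and terminating with the sentinel; the same pattern in a self-contained sketch (plain functions and a bare thread rather than the pub/sub machinery):

```python
import queue
import threading
import uuid

STOP = uuid.uuid4()  # unique sentinel, as in the module above
results = []

def worker(q, callback):
    # Block on get() until the sentinel arrives, invoking the callback per item.
    while True:
        item = q.get()
        if item == STOP:
            return
        action, kwargs = item
        callback(action, kwargs)

q = queue.Queue()
q.put(('ack', {'id': 1}))
q.put(('nack', {'id': 2}))
q.put(STOP)

t = threading.Thread(target=worker, args=(q, lambda a, kw: results.append((a, kw))))
t.start()
t.join()
print(results)  # [('ack', {'id': 1}), ('nack', {'id': 2})]
```

Using a `uuid4()` sentinel rather than `None` means a stray `None` on the queue cannot silently kill the worker, which is exactly the rationale stated in the module's comment.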
| 33.217391 | 76 | 0.653141 | 301 | 2,292 | 4.877076 | 0.518272 | 0.040872 | 0.017711 | 0.021798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005373 | 0.269197 | 2,292 | 68 | 77 | 33.705882 | 0.871045 | 0.645288 | 0 | 0 | 0 | 0 | 0.082432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.086957 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
339c1f600d7f4334f4115d74c3d7300b5e97c507 | 2,918 | py | Python | Aulas/Módulos_Python/Pillow-Redimensionando_varias_imagens_automaticamente/app.py | edersonhs/Python3_Basico_Ao_Avancado | a15754a6fbca407d5a7a7ed4116c2710b4635594 | [
"MIT"
] | null | null | null | Aulas/Módulos_Python/Pillow-Redimensionando_varias_imagens_automaticamente/app.py | edersonhs/Python3_Basico_Ao_Avancado | a15754a6fbca407d5a7a7ed4116c2710b4635594 | [
"MIT"
] | null | null | null | Aulas/Módulos_Python/Pillow-Redimensionando_varias_imagens_automaticamente/app.py | edersonhs/Python3_Basico_Ao_Avancado | a15754a6fbca407d5a7a7ed4116c2710b4635594 | [
"MIT"
] | null | null | null | """
Pillow - Python module with a variety of functions for manipulating images
"""
import os
from PIL import Image  # Importing Pillow
def main(main_images_folder, new_width=800):
    if not os.path.isdir(main_images_folder):  # Raise an exception if the directory does not exist
        raise NotADirectoryError(f'{main_images_folder} does not exist.')
    for root, dirs, files in os.walk(main_images_folder):
        for file in files:
            file_full_path = os.path.join(root, file)  # Full path
            file_name, extension = os.path.splitext(file)  # Returns the file name and the extension
            converted_tag = '_CONVERTED'
            new_file = file_name + converted_tag + extension  # Building the new name
            new_file_full_path = os.path.join(root, new_file)  # Full path of the new file
            if converted_tag in file_full_path:
                continue  # Files that already carry the converted tag are skipped.
            img_pillow = Image.open(file_full_path)  # Opening the image with Pillow
            # Reading the image metadata (shutter speed, date, time, location, etc.)
            exif = img_pillow.getexif()  # Returns a dictionary-like object with the EXIF tags
            # print(exif.get(36867))  # shows the date the photo was taken
            width, height = img_pillow.size  # Original width and height of the image
            new_height = round(new_width * height / width)
            """
            Computing the new dimensions of the image:
            new_width * height / width == new_height
            """
            new_image = img_pillow.resize(
                (new_width, new_height),
                Image.LANCZOS  # Resampling filter used for the resize
            )
            try:
                new_image.save(
                    new_file_full_path,
                    optimize=True,
                    quality=70,  # 1 to 100 (reference: 70 shrinks the file a lot while keeping good quality)
                    exif=img_pillow.info['exif']  # Copy the original EXIF into the new image
                )
            except Exception:  # The image may have no EXIF data
                try:
                    new_image.save(
                        new_file_full_path,
                        optimize=True,
                        quality=70,  # 1 to 100 (reference: 70 shrinks the file a lot while keeping good quality)
                    )
                except Exception:
                    raise RuntimeError(f'Could not convert "{file_full_path}".')
            print(f'{file_full_path} converted successfully!')
            new_image.close()  # Closing the new image (save probably closes it as well)
            img_pillow.close()  # Closing the original image
if __name__ == '__main__':
    main_images_folder = r'C:\Users\eders\OneDrive\Desktop\Pictures'
    main(main_images_folder, 640)
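The resize above derives the new height from `round(new_width * height / width)`, which preserves the aspect ratio. The arithmetic in isolation:

```python
def scaled_height(width, height, new_width):
    # Preserve aspect ratio: new_width / width == new_height / height.
    return round(new_width * height / width)

print(scaled_height(1600, 900, 800))   # 450
print(scaled_height(3000, 2000, 640))  # 427
```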
| 42.911765 | 120 | 0.594928 | 359 | 2,918 | 4.657382 | 0.415042 | 0.038278 | 0.057416 | 0.035885 | 0.166268 | 0.166268 | 0.166268 | 0.135167 | 0.135167 | 0.135167 | 0 | 0.013946 | 0.336532 | 2,918 | 67 | 121 | 43.552239 | 0.84969 | 0.309801 | 0 | 0.227273 | 0 | 0 | 0.091738 | 0.021459 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.045455 | 0 | 0.068182 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
339f1367bb18969ae77f9bd1b8128ef049276813 | 11,456 | py | Python | healthSuggestionsServer/healthSuggestions/migrations/0001_initial.py | tiagoaf5/HealthSuggestions | 37fb892e2d568e36b62f41b1898329852e3e3d81 | [
"Apache-2.0"
] | null | null | null | healthSuggestionsServer/healthSuggestions/migrations/0001_initial.py | tiagoaf5/HealthSuggestions | 37fb892e2d568e36b62f41b1898329852e3e3d81 | [
"Apache-2.0"
] | null | null | null | healthSuggestionsServer/healthSuggestions/migrations/0001_initial.py | tiagoaf5/HealthSuggestions | 37fb892e2d568e36b62f41b1898329852e3e3d81 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='CHVConcept',
fields=[
('CUI', models.TextField(serialize=False, primary_key=True)),
('CHV_Pref_EN', models.TextField(blank=True)),
('CHV_Pref_PT', models.TextField(blank=True)),
('UMLS_Pref_EN', models.TextField(blank=True)),
('UMLS_Pref_PT', models.TextField(blank=True)),
],
),
migrations.CreateModel(
name='CHVStemmedIndexEN',
fields=[
('term', models.TextField(serialize=False, primary_key=True)),
('idf', models.FloatField()),
('stringlist', models.TextField()),
],
),
migrations.CreateModel(
name='CHVStemmedIndexPT',
fields=[
('term', models.TextField(serialize=False, primary_key=True)),
('idf', models.FloatField()),
('stringlist', models.TextField()),
],
),
migrations.CreateModel(
name='CHVString',
fields=[
('id', models.PositiveIntegerField(serialize=False, primary_key=True)),
('en', models.TextField(blank=True)),
('pt', models.TextField(blank=True)),
('en_stemmed', models.TextField(blank=True)),
('pt_stemmed', models.TextField(blank=True)),
('en_count', models.PositiveSmallIntegerField(blank=True)),
('pt_count', models.PositiveSmallIntegerField(blank=True)),
('en_stemmed_count', models.PositiveSmallIntegerField(blank=True)),
('pt_stemmed_count', models.PositiveSmallIntegerField(blank=True)),
('cui', models.ForeignKey(to='healthSuggestions.CHVConcept')),
],
),
migrations.CreateModel(
name='Event',
fields=[
('id', models.AutoField(serialize=False, primary_key=True)),
('eventTimestamp', models.DateTimeField(default=django.utils.timezone.now)),
],
),
migrations.CreateModel(
name='EventType',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('type', models.CharField(unique=True, max_length=20)),
],
),
migrations.CreateModel(
name='Search',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('query', models.CharField(max_length=120)),
('queryInputTimestamp', models.DateTimeField(default=django.utils.timezone.now)),
('hash', models.CharField(unique=True, max_length=40)),
('totalNoResults', models.CharField(max_length=16)),
('answerTime', models.FloatField(null=True, blank=True)),
],
),
migrations.CreateModel(
name='SearchEngine',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(unique=True, max_length=10)),
('url', models.URLField()),
],
),
migrations.CreateModel(
name='SearchPage',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('SERPOrder', models.PositiveSmallIntegerField()),
('totalTimeOverSearchPage', models.FloatField(null=True, blank=True)),
('totalTimeOverSuggestionBoard', models.FloatField(null=True, blank=True)),
('timestamp', models.DateTimeField(default=django.utils.timezone.now)),
('url', models.URLField()),
('search', models.ForeignKey(related_name='searchPages', to='healthSuggestions.Search')),
],
),
migrations.CreateModel(
name='SearchResult',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('rank', models.PositiveSmallIntegerField()),
('url', models.URLField()),
('title', models.CharField(max_length=100)),
('snippet', models.TextField(null=True, blank=True)),
('searchPage', models.ForeignKey(related_name='searchResults', to='healthSuggestions.SearchPage')),
],
),
migrations.CreateModel(
name='SERelatedSearch',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('suggestion', models.CharField(max_length=50)),
],
),
migrations.CreateModel(
name='Session',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('ip', models.GenericIPAddressField()),
('startTimestamp', models.DateTimeField(default=django.utils.timezone.now)),
('browser', models.CharField(max_length=50)),
('os', models.CharField(max_length=50)),
],
),
migrations.CreateModel(
name='Suggestion',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('suggestion', models.CharField(max_length=100)),
],
),
migrations.CreateModel(
name='SuggestionLanguage',
fields=[
('iso6391', models.CharField(max_length=2, serialize=False, primary_key=True)),
('language', models.CharField(unique=True, max_length=20)),
],
),
migrations.CreateModel(
name='SuggestionType',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('type', models.CharField(unique=True, max_length=20)),
],
),
migrations.CreateModel(
name='TestUser',
fields=[
('guid', models.UUIDField(serialize=False, primary_key=True)),
('registerDate', models.DateTimeField(auto_now_add=True)),
],
),
migrations.CreateModel(
name='WebPage',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('pageLoadTimestamp', models.DateTimeField(default=django.utils.timezone.now, null=True, blank=True)),
('timeOnPage', models.FloatField(null=True, blank=True)),
('numScrollEvents', models.PositiveSmallIntegerField(null=True, blank=True)),
('url', models.URLField(max_length=400)),
('searchResults', models.ManyToManyField(related_name='webPages', to='healthSuggestions.SearchResult')),
],
),
migrations.CreateModel(
name='Click',
fields=[
('id', models.OneToOneField(primary_key=True, serialize=False, to='healthSuggestions.Event')),
('linkText', models.CharField(max_length=200, null=True, blank=True)),
('seRelatedSearch', models.ForeignKey(blank=True, to='healthSuggestions.SERelatedSearch', null=True)),
('searchResult', models.ForeignKey(blank=True, to='healthSuggestions.SearchResult', null=True)),
],
),
migrations.CreateModel(
name='Copy',
fields=[
('id', models.OneToOneField(primary_key=True, serialize=False, to='healthSuggestions.Event')),
('copyText', models.TextField()),
],
),
migrations.CreateModel(
name='Find',
fields=[
('id', models.OneToOneField(primary_key=True, serialize=False, to='healthSuggestions.Event')),
('findText', models.CharField(max_length=50, null=True, blank=True)),
],
),
migrations.CreateModel(
name='SwitchSE',
fields=[
('id', models.OneToOneField(primary_key=True, serialize=False, to='healthSuggestions.Event')),
('destination', models.ForeignKey(related_name='engine_destination', to='healthSuggestions.SearchEngine')),
('origin', models.ForeignKey(related_name='engine_origin', to='healthSuggestions.SearchEngine')),
],
),
migrations.AddField(
model_name='suggestion',
name='suggestionLanguage',
field=models.ForeignKey(to='healthSuggestions.SuggestionLanguage'),
),
migrations.AddField(
model_name='suggestion',
name='suggestionType',
field=models.ForeignKey(to='healthSuggestions.SuggestionType'),
),
migrations.AddField(
model_name='session',
name='guid',
field=models.ForeignKey(related_name='sessions', to='healthSuggestions.TestUser'),
),
migrations.AddField(
model_name='search',
name='seRelatedSearches',
field=models.ManyToManyField(related_name='searches', to='healthSuggestions.SERelatedSearch'),
),
migrations.AddField(
model_name='search',
name='searchEngine',
field=models.ForeignKey(to='healthSuggestions.SearchEngine'),
),
migrations.AddField(
model_name='search',
name='session',
field=models.ForeignKey(related_name='searches', to='healthSuggestions.Session'),
),
migrations.AddField(
model_name='search',
name='suggestions',
field=models.ManyToManyField(related_name='searches', to='healthSuggestions.Suggestion'),
),
migrations.AddField(
model_name='event',
name='searchPage',
field=models.ForeignKey(related_name='events', blank=True, to='healthSuggestions.SearchPage', null=True),
),
migrations.AddField(
model_name='event',
name='type',
field=models.ForeignKey(related_name='events', to='healthSuggestions.EventType'),
),
migrations.AddField(
model_name='event',
name='webPage',
field=models.ForeignKey(related_name='events', blank=True, to='healthSuggestions.WebPage', null=True),
),
migrations.AddField(
model_name='click',
name='suggestion',
field=models.ForeignKey(blank=True, to='healthSuggestions.Suggestion', null=True),
),
migrations.AddField(
model_name='click',
name='webPage',
field=models.ForeignKey(blank=True, to='healthSuggestions.WebPage', null=True),
),
]
| 44.23166 | 123 | 0.5625 | 948 | 11,456 | 6.685654 | 0.146624 | 0.03834 | 0.082834 | 0.05112 | 0.665194 | 0.55538 | 0.41622 | 0.321079 | 0.274692 | 0.274692 | 0 | 0.005113 | 0.300017 | 11,456 | 258 | 124 | 44.403101 | 0.78526 | 0.001833 | 0 | 0.579365 | 0 | 0 | 0.156127 | 0.062888 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.011905 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
339fba9fb9d40d9ccf925b4d2e2cfb5e809276fb | 2,115 | py | Python | ni_python_styleguide/_utils/string_helpers.py | ni/ni-python-styleguide | abad986d4fe4c0fbcdce957c5cbc50abcc88937d | [
"MIT"
] | 4 | 2020-07-13T16:10:37.000Z | 2020-07-24T01:59:30.000Z | ni_python_styleguide/_utils/string_helpers.py | ni/ni-python-styleguide | abad986d4fe4c0fbcdce957c5cbc50abcc88937d | [
"MIT"
] | 6 | 2020-07-07T21:04:21.000Z | 2020-07-27T20:51:13.000Z | ni_python_styleguide/_utils/string_helpers.py | ni/ni-python-styleguide | abad986d4fe4c0fbcdce957c5cbc50abcc88937d | [
"MIT"
] | 2 | 2020-07-07T20:56:43.000Z | 2020-07-27T12:24:00.000Z | import pathlib
from typing import List, Optional
import ni_python_styleguide._utils
class InMultiLineStringChecker:
"""Provide utility methods to decide if line is within a multiline string."""
def __init__(self, error_file: Optional[str] = None, *_, lines: Optional[List[str]] = None):
"""Cache off whether each line is in a multiline string or not."""
self._values = []
if error_file:
self._error_file = pathlib.Path(error_file)
self._load_lines()
else:
self._error_file = None
if not lines:
                raise ValueError(
                    "Must provide either a path to `error_file` or `lines`"
                )
self._set_lines(lines)
@property
def values(self):
"""Return the values for the file."""
return self._values
def in_multiline_string(self, lineno):
"""Check if lineno is in a multiline string."""
return self._values[lineno - 1] # 0 indexed, but we number files 1 indexed
@staticmethod
def _count_multiline_string_endings_in_line(line):
return line.count('"""'), line.count("'''")
def _set_lines(self, lines):
current_count = [0, 0]
for line in lines:
type1, type2 = InMultiLineStringChecker._count_multiline_string_endings_in_line(line)
current_count[0] += type1
current_count[1] += type2
code_part_of_line = line
if "#" in line:
code_part_of_line = line.split("#", maxsplit=1)[0]
            # If the running count of multiline string markers is odd, this line must
            # be inside a multiline string; likewise, a trailing line-continuation
            # token means the statement continues onto the next line.
self._values.append(
any([part % 2 == 1 for part in current_count])
or code_part_of_line.strip().endswith("\\")
)
def _load_lines(self):
in_file = self._error_file.read_text(encoding=ni_python_styleguide._utils.DEFAULT_ENCODING)
self._set_lines(in_file.splitlines())
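The checker rests on a parity argument: a line sits inside (or opens) a triple-quoted string exactly when the running count of `"""` or `'''` markers seen so far is odd. A stripped-down sketch of that heuristic, without the comment-stripping and line-continuation handling done above:

```python
def in_multiline_flags(lines):
    """Return a per-line flag: is this line inside/opening a multiline string?"""
    flags = []
    count = [0, 0]
    for line in lines:
        count[0] += line.count('"""')
        count[1] += line.count("'''")
        # An odd running count of either marker means an unclosed multiline string.
        flags.append(any(c % 2 == 1 for c in count))
    return flags

src = ['x = 1', 'doc = """start', 'middle', 'end"""', 'y = 2']
print(in_multiline_flags(src))  # [False, True, True, False, False]
```

As in the class above, the line that closes the string is flagged `False`, because its marker makes the running count even again.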
| 36.465517 | 99 | 0.615603 | 264 | 2,115 | 4.681818 | 0.352273 | 0.084951 | 0.042071 | 0.033981 | 0.121359 | 0.059871 | 0.059871 | 0 | 0 | 0 | 0 | 0.010027 | 0.292671 | 2,115 | 57 | 100 | 37.105263 | 0.816176 | 0.193381 | 0 | 0 | 0 | 0 | 0.045157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.075 | 0.025 | 0.325 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
33a03729db5cacd804492efa6480f629af44b442 | 2,454 | py | Python | tests/unit/test_format_control.py | rogerhil/pip | 7616583dbb2dcbda5a19d78873642d6751fbf017 | [
"MIT"
] | 7,089 | 2015-01-01T10:48:04.000Z | 2022-03-31T08:47:02.000Z | tests/unit/test_format_control.py | rogerhil/pip | 7616583dbb2dcbda5a19d78873642d6751fbf017 | [
"MIT"
] | 8,417 | 2015-01-01T13:03:16.000Z | 2022-03-31T17:40:27.000Z | tests/unit/test_format_control.py | rogerhil/pip | 7616583dbb2dcbda5a19d78873642d6751fbf017 | [
"MIT"
] | 2,663 | 2015-01-02T04:02:12.000Z | 2022-03-30T02:30:46.000Z | from optparse import Values
from typing import FrozenSet, List, Set
import pytest
from pip._internal.cli import cmdoptions
from pip._internal.cli.base_command import Command
from pip._internal.cli.status_codes import SUCCESS
from pip._internal.models.format_control import FormatControl
class SimpleCommand(Command):
def __init__(self) -> None:
super().__init__("fake", "fake summary")
def add_options(self) -> None:
self.cmd_opts.add_option(cmdoptions.no_binary())
self.cmd_opts.add_option(cmdoptions.only_binary())
def run(self, options: Values, args: List[str]) -> int:
self.options = options
return SUCCESS
def test_no_binary_overrides() -> None:
cmd = SimpleCommand()
cmd.main(["fake", "--only-binary=:all:", "--no-binary=fred"])
format_control = FormatControl({"fred"}, {":all:"})
assert cmd.options.format_control == format_control
def test_only_binary_overrides() -> None:
cmd = SimpleCommand()
cmd.main(["fake", "--no-binary=:all:", "--only-binary=fred"])
format_control = FormatControl({":all:"}, {"fred"})
assert cmd.options.format_control == format_control
def test_none_resets() -> None:
cmd = SimpleCommand()
cmd.main(["fake", "--no-binary=:all:", "--no-binary=:none:"])
format_control = FormatControl(set(), set())
assert cmd.options.format_control == format_control
def test_none_preserves_other_side() -> None:
cmd = SimpleCommand()
cmd.main(["fake", "--no-binary=:all:", "--only-binary=fred", "--no-binary=:none:"])
format_control = FormatControl(set(), {"fred"})
assert cmd.options.format_control == format_control
def test_comma_separated_values() -> None:
cmd = SimpleCommand()
cmd.main(["fake", "--no-binary=1,2,3"])
format_control = FormatControl({"1", "2", "3"}, set())
assert cmd.options.format_control == format_control
@pytest.mark.parametrize(
"no_binary,only_binary,argument,expected",
[
({"fred"}, set(), "fred", frozenset(["source"])),
({"fred"}, {":all:"}, "fred", frozenset(["source"])),
(set(), {"fred"}, "fred", frozenset(["binary"])),
({":all:"}, {"fred"}, "fred", frozenset(["binary"])),
],
)
def test_fmt_ctl_matches(
no_binary: Set[str], only_binary: Set[str], argument: str, expected: FrozenSet[str]
) -> None:
fmt = FormatControl(no_binary, only_binary)
assert fmt.get_allowed_formats(argument) == expected
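The parametrized cases encode a precedence rule: an exact package name beats the `:all:` wildcard on the opposite list. A simplified reimplementation of that resolution (an illustrative sketch, not pip's actual `FormatControl` code) makes the logic explicit:

```python
def allowed_formats(no_binary, only_binary, name):
    # Most specific match wins: an exact name beats the :all: wildcard.
    if name in no_binary:
        return frozenset(["source"])
    if name in only_binary:
        return frozenset(["binary"])
    if ":all:" in no_binary:
        return frozenset(["source"])
    if ":all:" in only_binary:
        return frozenset(["binary"])
    return frozenset(["source", "binary"])

print(allowed_formats({"fred"}, {":all:"}, "fred"))  # frozenset({'source'})
print(allowed_formats({":all:"}, {"fred"}, "fred"))  # frozenset({'binary'})
```

This reproduces all four expectations in the `test_fmt_ctl_matches` table above.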
| 33.162162 | 87 | 0.662184 | 299 | 2,454 | 5.22408 | 0.234114 | 0.133163 | 0.06402 | 0.073624 | 0.459667 | 0.419974 | 0.381562 | 0.329065 | 0.234315 | 0.176056 | 0 | 0.002915 | 0.161369 | 2,454 | 73 | 88 | 33.616438 | 0.756074 | 0 | 0 | 0.181818 | 0 | 0 | 0.138957 | 0.015892 | 0 | 0 | 0 | 0 | 0.109091 | 1 | 0.163636 | false | 0 | 0.127273 | 0 | 0.327273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
33a046627ebe9b3fc3a1aedef6dc7d98597fd36f | 1,742 | py | Python | test/test_files/function_test.py | Accruent/robotframework-historic-parser | 07b0c551167a9e4cd8a0450e42d59865075f288e | [
"MIT"
] | null | null | null | test/test_files/function_test.py | Accruent/robotframework-historic-parser | 07b0c551167a9e4cd8a0450e42d59865075f288e | [
"MIT"
] | 31 | 2021-04-12T11:23:52.000Z | 2022-03-31T13:51:36.000Z | test/test_files/function_test.py | Accruent/robotframework-historic-parser | 07b0c551167a9e4cd8a0450e42d59865075f288e | [
"MIT"
] | null | null | null | """Unit tests for functions used in Robot Framework Historic Parser"""
import os
import sys
import unittest
from unittest.mock import patch
from robotframework_historic_parser.parserargs import parse_options
from robotframework_historic_parser.rfhistoricparser import get_time_in_min, rfhistoric_parser
ROOT_PATH = os.path.abspath(os.path.dirname(__file__))
class TestFunctions(unittest.TestCase):
"""Unit Tests for functions"""
def test_get_time_in_min(self):
"""This test verifies that get_time_in_min returns a time in minutes. """
test_time = '01:02:03'
expected_result = 62.05
result_in_minutes = get_time_in_min(test_time)
self.assertEqual(result_in_minutes, expected_result)
def test_get_time_in_min_bad_input(self):
"""This test verifies that get_time_in_min returns error if invalid input is passed. """
self.assertRaisesRegex(ValueError, 'not enough values to unpack', get_time_in_min, 'a')
@patch('builtins.print')
def test_rfhistoric_parser_ignore_result(self, mock_print):
"""This test verifies that the parser ignores any results if the ignore result argument
is set to True. """
sys.argv[1:] = ['-g', 'True']
test_opts = parse_options()
result = rfhistoric_parser(test_opts)
mock_print.assert_called_with("Ignoring execution results...")
self.assertEqual(result, None)
# def test_rfhistoric_parser(self):
# """This test verifies that the rfhistoric parser function. """
# file_path = ROOT_PATH + "/" + "empty.xml"
# sys.argv[1:] = ['-o', file_path]
# test_opts = parse_options()
# result = rfhistoric_parser(test_opts)
# print(result)
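The first test expects `'01:02:03'` to become 62.05 minutes; the conversion behind that expectation (a sketch of the arithmetic, not the parser's implementation):

```python
def time_in_min(hhmmss):
    # hours contribute 60 minutes each, seconds contribute 1/60 of a minute
    h, m, s = (int(part) for part in hhmmss.split(':'))
    return h * 60 + m + s / 60

print(time_in_min('01:02:03'))  # 1*60 + 2 + 3/60 = 62.05
```

Passing a string without two `:` separators makes the unpacking raise `ValueError` with "not enough values to unpack", which is what `test_get_time_in_min_bad_input` asserts.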
| 39.590909 | 96 | 0.703215 | 232 | 1,742 | 5 | 0.383621 | 0.041379 | 0.05431 | 0.072414 | 0.236207 | 0.193103 | 0.160345 | 0.160345 | 0.160345 | 0.074138 | 0 | 0.008627 | 0.201493 | 1,742 | 43 | 97 | 40.511628 | 0.825306 | 0.355339 | 0 | 0 | 0 | 0 | 0.078486 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.136364 | false | 0 | 0.272727 | 0 | 0.454545 | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
33a083ea109985fd4ee17df5e384f1dd2fed888a | 7,984 | py | Python | 8-puzzle_Solver.py | koyalbhartia/8-Puzzle-Solver | 40a95a879278a8e5a673dbe36fe27121b639079c | [
"MIT"
] | null | null | null | 8-puzzle_Solver.py | koyalbhartia/8-Puzzle-Solver | 40a95a879278a8e5a673dbe36fe27121b639079c | [
"MIT"
] | null | null | null | 8-puzzle_Solver.py | koyalbhartia/8-Puzzle-Solver | 40a95a879278a8e5a673dbe36fe27121b639079c | [
"MIT"
] | null | null | null | #
# Copyright 2019 Koyal Bhartia
# @file 8-puzzle solver.py
# @author Koyal Bhartia
# @date 26/02/2019
# @version 1.0
#
# @brief This is the code to solve the 8-puzzle
#
# @Description This code has functions which returns all the possible paths that can be
# traversed given any goal matrix. Given any input matrix it gives the path to the goal matrix
#
import numpy as np
from collections import Counter
# @brief Compares each element of any two 3*3 matrices
#
# @param The matrices A and B
#
# @return Flag indicating if the matrices A and B are equal or not
def compare(A,B):
for row in range(3):
for col in range(3):
if(A[row,col]!=B[row,col]):
return 0
return 1
# @brief Checks if the new node created is present in the nodes created earlier
#
# @param The 3D matrix of all nodes and the new node generated
#
# @return Flag indicating if the nodes are equal or not
'''
def checkrepeated(Nodes,new_mat):
for position in range(Nodes.shape[2]):
if(compare(Nodes[:,:,position],new_mat)):
return 1
return 0
'''
# @brief Generates all the possible nodes that can be traversed along with their directions in
# nodeInfo
#
# @param The desired goal node and the format of the nodeinfo
#
# @return The updated 3D node matrix and the complete NodeInfo table
def createAllNodes(Nodes,NodesInfo):
def countDistinct(arr):
return len(Counter(arr).keys())
# @brief Returns the position of the 0 in any given matrix
#
# @param The matrix
#
# @return The 0 positions
def zero_position(matrix):
for i in range(0,len(matrix)):
for j in range(0,len(matrix)):
if(matrix[i,j]==0):
zx=i
zy=j
return zx , zy
# @brief Swaps the 0 with its adjacent cells (if possible) and keeps updating the node matrix and nodeinfo
#
# @param The matrix in which the 0 has to be swiped, the current parent and child status and the position of the "0" in the matrix
#
# @return The current child position
def Swipe(matrix,parent,child,row,col):
checkrepeat=0
        if row != 0:  # top
new_mat = matrix.copy()
new_mat[row,col]=matrix[row-1,col]
new_mat[row-1,col]=0
#checkrepeat=checkrepeated(Nodes,new_mat)
#if(checkrepeat==0):
Nodes[:,:,child]=new_mat
print(Nodes[:,:,child],"top")
NodesInfo[child,:,:]=[parent, child, 0, 2] # 2-> top
child+=1
        if row != 2:  # down
new_mat = matrix.copy()
new_mat[row,col]=matrix[row+1,col]
new_mat[row+1,col]=0
#checkrepeat=checkrepeated(Nodes,new_mat)
#if(checkrepeat==0):
Nodes[:,:,child]=new_mat
            print(Nodes[:,:,child],"down")
NodesInfo[child,:,:]=[parent, child, 0, 8] #8-> down
child+=1
        if col != 0:  # left
new_mat = matrix.copy()
new_mat[row,col]=matrix[row,col-1]
new_mat[row,col-1]=0
#checkrepeat=checkrepeated(Nodes,new_mat)
#if(checkrepeat==0):
Nodes[:,:,child]=new_mat
print(Nodes[:,:,child],"left")
NodesInfo[child,:,:]=[parent, child, 0, 4] #4 -> left
child+=1
        if col != 2:  # right
new_mat = matrix.copy()
new_mat[row,col]=matrix[row,col+1]
new_mat[row,col+1]=0
#checkrepeat=checkrepeated(Nodes,new_mat)
#if(checkrepeat==0):
Nodes[:,:,child]=new_mat
print(Nodes[:,:,child],"right")
NodesInfo[child,:,:]=[parent, child, 0, 6] #6 -> right
child+=1
return child
parent=0
child=1
    while child < Total - 4:
print("Child No",child)
New_Parent=Nodes[:,:,parent]
print("New_Parent",New_Parent)
zx, zy = zero_position(New_Parent)
child=Swipe(New_Parent,parent, child,zx, zy)
parent+=1
return Nodes, NodesInfo
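The `Swipe` helper above enumerates the blank tile's legal moves by checking the row/column bounds before each swap. The same idea in a compact, self-contained form (tuple states instead of the script's 3×3 numpy slices, so this is an illustrative rewrite):

```python
def neighbors(state):
    """Yield all states reachable by sliding a tile into the blank (0)."""
    z = state.index(0)
    row, col = divmod(z, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = 3 * r + c
            s = list(state)
            s[z], s[swap] = s[swap], s[z]  # slide the adjacent tile into the blank
            yield tuple(s)

start = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # blank in the bottom-right corner
print(len(list(neighbors(start))))   # 2 legal moves
```

A corner blank has 2 neighbors, an edge blank 3, and a centre blank 4, matching the four `if` branches above.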
# @brief Searches if the input node is present in the list of all possible nodes created earlier
#
# @param The node list, the input node and the flag as an indicator of a found node
#
# @return The position where the node is found and the flag indicating a successful or an unsuccessful find
def search(Nodes, Input,Found):
print(Nodes.shape[2],"Nodes.shape[2]")
for position in range(Nodes.shape[2]):
current =Nodes[:,:,position]
if(compare(current,Input)):
Found=True
            break
print(Found,"search pos")
return position, Found
# @brief Generates the node path from the input node to get to the goal node
#
# @param The desired goal node, the nodeInfo indicating all the parent-child relationships and the position where the node is found
#
# @return The path that the node takes to reach the goal node
def NodePath(Nodes,NodesInfo,node_position):
node_path=[]
node_path.append(node_position)
while(node_position!=0):
print(node_position,"node_position")
node_position=int(NodesInfo[node_position,0,0])
node_path.append(node_position)
nodePathMat=np.zeros((3,3,len(node_path)))
for i in range(len(node_path)):
nodePathMat[:,:,i]=Nodes[:,:,node_path[i]]
return nodePathMat
# @brief Prints the List of nodes, the node info table and the node path to the goal node in 3 different text files
#
# @param The updated matrices of all the possible nodes, the nodeinfo matrix and the node path from input to the goal
#
# @return The 3 output text files
def TextOutput(Nodes,NodesInfo,nodePathMat):
def NodesTransform(Nodes):
NodesTf=np.zeros((Nodes.shape[2],1,9))
for i in range(Nodes.shape[2]):
counter=0
for col in range(3):
for row in range(3):
NodesTf[i,0,counter]=Nodes[row,col,i].copy()
counter+=1
return NodesTf
Nodes=NodesTransform(Nodes)
with open('Nodes.txt', 'w') as file:
for data in Nodes:
np.savetxt(file, data, fmt='%-2.0f')
with open('NodesInfo.txt', 'w') as file:
for data in NodesInfo:
np.savetxt(file, data, fmt='%-2.0f')
with open('NodePath.txt', 'w') as file:
if(len(nodePathMat)==0):
data=[]
file.write("The above combination does not exist")
np.savetxt(file, data)
else:
NodePath=NodesTransform(nodePathMat)
for data in NodePath:
np.savetxt(file, data, fmt='%-2.0f')
if __name__ == '__main__':
    # Number of solvable 8-puzzle states: 9!/2 = 181440
    Total = 181441
    # The desired goal
    Goal = np.mat([[1, 2, 3], [4, 5, 6], [7, 8, 0]])
    # Backup input in case the user enters a wrong matrix
    Input = np.mat([[1, 2, 3], [4, 5, 6], [7, 0, 8]])
    # Initializing the node and node-info matrices
    Nodes = np.zeros((3, 3, Total))
    NodesInfo = np.zeros((Total, 1, 4))
    Nodes[:, :, 0] = Goal
    NodesInfo[0, :, :] = [0, 0, 0, 0]
    # Indicates if the input node is found
    Found = False
    NodePathMat = []
    flag = 0
    while flag != 1:
        input("Please enter the numbers between 0-8 row-wise to start the 8-puzzle after pressing enter")
        try:
            Input = np.zeros((3, 3))
            for i in range(3):
                for j in range(3):
                    Input[i][j] = int(input('Element [%d][%d]=' % (i, j)))
                    if Input[i][j] > 8:
                        raise ValueError
            flag = 1
        except ValueError:
            print("Not valid! Please enter an integer between 0 and 8 ...")
    print("Input matrix", Input)
    Nodes, NodesInfo = createAllNodes(Nodes, NodesInfo)
    position, Found = search(Nodes, Input, Found)
    if Found:
        NodePathMat = NodePath(Nodes, NodesInfo, position)
    TextOutput(Nodes, NodesInfo, NodePathMat)
# --- interface_legacy.py (repo andersonic/cs232, MIT license) ---
] | null | null | null | from selenium import webdriver, common
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import math
import time
import os
import demjson
driver = None
all_pokemon_data = demjson.decode(open('pokemon_data.txt', 'r').read())
own_team = []
opponent_team = [None, None, None, None, None, None]
game_state = {'rocks':False, 'spikes': 0, 'tspikes': 0, 'weather':'none', 'trickroom':False, 'terrain':'none'}
turn = 0
own_mon_out = None
opponent_mon_out = None
def open_window(url):
    """Opens window"""
    my_dir = os.path.dirname(__file__)
    chrome_path = os.path.join(my_dir, 'chromedriver')
    new_driver = webdriver.Chrome(chrome_path)
    new_driver.get(url)
    global driver
    driver = new_driver
    driver.implicitly_wait(3)
def log_out():
    driver.find_element_by_name("openOptions").click()
    try:
        driver.find_element_by_name("logout").click()
        driver.find_element_by_tag_name("strong").click()
    except common.exceptions.NoSuchElementException:
        pass
    driver.refresh()
def log_in(username, password):
    log_out()
    logged_in = False
    driver.find_element_by_name("login").click()
    username_field = driver.find_element_by_name("username")
    username_field.send_keys(username)
    username_field.send_keys(Keys.RETURN)
    try:
        password_field = driver.find_element_by_name("password")
        password_field.send_keys(password)
        password_field.send_keys(Keys.RETURN)
    except common.exceptions.NoSuchElementException:
        logged_in = True
    try:
        driver.find_element_by_name("input")
        logged_in = False
    except common.exceptions.NoSuchElementException:
        logged_in = True
    return logged_in
def start():
    open_window("https://play.pokemonshowdown.com")
    time.sleep(2)
    log_in("cs232-test-5", "cs232")

def find_randbat():
    driver.find_element_by_name("search").click()
def act(action, switch=False):
    """Take an action (a move name or a Pokémon name) as a parameter and whether the action is a switch."""
    if switch:
        pokemon_buttons = driver.find_elements_by_name("chooseSwitch")
        for pokemon in pokemon_buttons:
            if pokemon.text == action:
                pokemon.click()
                return True
    else:
        move_buttons = driver.find_elements_by_name("chooseMove")
        for move in move_buttons:
            if move.text.split('\n')[0] == action:
                move.click()
                return True
    return False
def send_out_team_preview(pokemon_name):
    pokemon_buttons = driver.find_elements_by_name("chooseTeamPreview")
    for pokemon in pokemon_buttons:
        if pokemon.text == pokemon_name:
            pokemon.click()
            return True
    return False

def send_out_after_KO(pokemon_name):
    act(pokemon_name, True)

def mega_evolve():
    try:
        # find_element_by_class does not exist; the legacy Selenium API is find_element_by_class_name
        driver.find_element_by_class_name("megaevo").click()
        return True
    except common.exceptions.NoSuchElementException:
        return False
def get_preview_options():
    options = []
    pokemon_buttons = driver.find_elements_by_name("chooseTeamPreview")
    for pokemon in pokemon_buttons:
        options.append(pokemon.text)
    return options

def get_move_options():
    moves = []
    move_buttons = driver.find_elements_by_name("chooseMove")
    for move in move_buttons:
        moves.append(move.text.split('\n')[0])
    return moves

def get_switch_options():
    pokemon_list = []
    pokemon_buttons = driver.find_elements_by_name("chooseSwitch")
    for pokemon in pokemon_buttons:
        pokemon_list.append(pokemon.text)
    return pokemon_list
def get_own_team():
    """Precondition: first turn. Returns all Pokémon, including stats, moves and items."""
    pokemon_list = []
    current_mon = driver.find_element_by_name("chooseDisabled")
    hover = ActionChains(driver).move_to_element(current_mon)
    hover.perform()
    pokemon = driver.find_element_by_id("tooltipwrapper")
    pokemon_list.append(parse_own_team(pokemon))
    benched_mons = driver.find_elements_by_name("chooseSwitch")
    for mon in benched_mons:
        hover = ActionChains(driver).move_to_element(mon)
        hover.perform()
        pokemon_list.append(parse_own_team(driver.find_element_by_id("tooltipwrapper")))
    global own_team
    own_team = pokemon_list
    return pokemon_list
def parse_own_team(element):
    text = element.text
    # Get name, level and health
    text = text.split("\n")
    name = " ".join(text[0].split(" ")[:len(text[0].split(" ")) - 1])
    level = int(text[0].split(" ")[len(text[0].split(" ")) - 1][1:])
    current_health = int(text[1].split(" ")[2].split("/")[0][1:])
    total_health_text = text[1].split(" ")[2].split("/")[1]
    total_health = int(total_health_text[0:len(total_health_text) - 1])
    # Get ability and item
    temp = text[2].split(" / ")
    item = None  # stays None if the Pokémon holds no item
    try:
        ability = " ".join(temp[0].split(" ")[1:])
        item = " ".join(temp[1].split(" ")[1:])
    except IndexError:
        pass
    # Get stats
    stats = text[3].split("/")
    temp = []
    for i in range(0, 5):
        pieces = stats[i].split(" ")
        for piece in pieces:
            if piece != "":
                temp.append(int(piece))
                break
    stats = temp
    # Get moves
    moves = []
    try:
        for i in range(4, 8):
            moves.append(text[i][2:])
    except IndexError:
        pass
    for move in moves:
        query_data(move)
    time.sleep(2)
    moves = [parse_move_text(i) for i in moves]
    images = element.find_elements_by_tag_name("img")
    types = []
    for image in images:
        # Skip the gender icons; every other image alt is a type name
        if image.get_attribute("alt") != "M" and image.get_attribute("alt") != "F":
            types.append(image.get_attribute("alt"))
    if len(types) == 1:
        types.append('none')
    return Pokemon(name, level, types, moves, item, ability, current_health, total_health, stats)
def query_data(data):
    textbox = driver.find_element_by_class_name("battle-log-add").find_elements_by_class_name("textbox")[1]
    textbox.send_keys("/data " + data)
    textbox.send_keys(Keys.ENTER)

def retrieve_data():
    return driver.find_elements_by_class_name("utilichart")

def calc_stats(base_stats, level):
    stats = []
    stats.append(math.floor((31 + 2 * base_stats[0] + 21) * level / 100 + 10 + level))
    for i in range(1, 6):
        stats.append(math.floor((31 + 2 * base_stats[i] + 21) * level / 100 + 5))
    return stats
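`calc_stats` applies the standard stat formula with fixed terms (31 presumably being maxed IVs and the +21 an EVs/4 contribution — an assumption, since the author does not say). A quick sanity check of the arithmetic for a hypothetical level-100 Pokémon with base 100 in every stat:

```python
import math

level = 100
base = 100
# Same expressions as calc_stats above, evaluated for one stat each.
hp = math.floor((31 + 2 * base + 21) * level / 100 + 10 + level)   # → 362
other = math.floor((31 + 2 * base + 21) * level / 100 + 5)         # → 257
```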
def get_base_stats(mon):
    query_data(mon)
    time.sleep(1)
    all_mons = retrieve_data()
    base_stats = []
    for pokemon in all_mons:
        try:
            if pokemon.text.split('\n')[1] == mon:
                stat_list = pokemon.find_elements_by_class_name("statcol")
                for stat in stat_list:
                    base_stats.append(int(stat.text.split("\n")[1]))
                break
        except IndexError:
            pass
    try:
        assert len(base_stats) != 0
    except AssertionError:
        # The chart may not have rendered yet; wait and retry
        time.sleep(2)
        base_stats = get_base_stats(mon)
    return base_stats

def get_possible_moves(name):
    return all_pokemon_data[name.replace(" ", "").lower()]['randomBattleMoves']

def handle_list_moves(moves):
    for move in moves:
        query_data(move)
    time.sleep(2)
    parsed_moves = [parse_move_text(i) for i in moves]
    return parsed_moves
def parse_opposing_mon():
    # Get element with data
    enemy_mon = driver.find_element_by_class_name("foehint").find_elements_by_tag_name("div")[2]
    hover = ActionChains(driver).move_to_element(enemy_mon)
    hover.perform()
    tooltip = driver.find_element_by_id("tooltipwrapper")
    help_text = tooltip.text.split("\n")
    name_temp = help_text[0].split(" ")
    name = " ".join(name_temp[:len(name_temp) - 1])
    level = int(name_temp[len(name_temp) - 1][1:])
    base_stats = get_base_stats(name)
    stats = calc_stats(base_stats, level)
    images = tooltip.find_elements_by_tag_name("img")
    types = []
    for image in images:
        # Skip the gender icons; every other image alt is a type name
        if image.get_attribute("alt") != "M" and image.get_attribute("alt") != "F":
            types.append(image.get_attribute("alt"))
    if len(types) == 1:
        types.append('none')
    moves = handle_list_moves(get_possible_moves(name))
    new_mon = Pokemon(name, level, types, moves, None, None, stats[0], stats[0], stats[1:])
    if new_mon not in opponent_team:
        for i in range(0, len(opponent_team)):
            if opponent_team[i] is None:
                opponent_team[i] = new_mon
                break
    return new_mon
class Pokemon:
    def __init__(self, name=None, level=None, type=None, moves=None, item=None, ability=None, presenthealth=None,
                 totalhealth=None, stats=None, statuses={}, mon=None):
        if mon is None:
            self.name = name
            self.level = level
            self.type = type
            self.moves = moves
            self.item = item
            self.ability = ability
            self.stats = stats
            self.present_health = presenthealth
            self.total_health = totalhealth
            self.health_percent = presenthealth / totalhealth
            self.statuses = statuses
        else:
            # For form changes: copy over what carries across from the old form
            self.name = name
            self.level = mon.level
            self.type = type
            self.moves = mon.moves
            self.item = mon.item
            self.ability = ability
            self.present_health = mon.present_health
            self.total_health = mon.total_health
            self.stats = stats
            self.statuses = mon.statuses

    def get_health_percent(self):
        self.health_percent = self.present_health / self.total_health
        return self.health_percent
    def __eq__(self, other):
        """Note that this definition of equality breaks down when comparing Pokémon on opposite teams"""
        if self is None:
            return False
        elif other is None:
            return False
        else:
            return self.name == other.name

    def __str__(self):
        return self.name

    def damage_calc(self, enemy_move, enemy_mon):
        enemy_stats = enemy_mon.calc_effective_stats()
        my_stats = self.calc_effective_stats()
        damage = 0
        if enemy_move.category == 'Physical':
            damage = \
                (((2 * enemy_mon.level / 5 + 2) * enemy_stats[0] * enemy_move.power / my_stats[1]) / 50 + 2) * 93 / 100
        elif enemy_move.category == 'Special':
            damage = \
                (((2 * enemy_mon.level / 5 + 2) * enemy_stats[2] * enemy_move.power / my_stats[3]) / 50 + 2) * 93 / 100
        if enemy_move.type in enemy_mon.type:
            damage *= 1.5
        damage *= self.calculate_type_multiplier(enemy_move.type)
        return damage
    def calc_effective_stats(self):
        real_stats = []
        for i in range(0, len(self.stats)):
            if i == 0:
                # dealing with attack (burn halves it)
                atk_mod = 1
                if "BRN" in self.statuses:
                    atk_mod *= 0.5
                if "Atk" in self.statuses:
                    atk_mod *= self.statuses["Atk"]
                real_stats.append(self.stats[i] * atk_mod)
            elif i == 1:
                # dealing with defense
                try:
                    real_stats.append(self.stats[i] * self.statuses["Def"])
                except KeyError:
                    real_stats.append(self.stats[i])
            elif i == 2:
                # dealing with special attack
                try:
                    real_stats.append(self.stats[i] * self.statuses["SpA"])
                except KeyError:
                    real_stats.append(self.stats[i])
            elif i == 3:
                # dealing with special defense
                try:
                    real_stats.append(self.stats[i] * self.statuses["SpD"])
                except KeyError:
                    real_stats.append(self.stats[i])
            elif i == 4:
                # dealing with speed (paralysis quarters it)
                spe_mod = 1
                if "PAR" in self.statuses:
                    spe_mod *= 0.25
                if "Spe" in self.statuses:
                    spe_mod *= self.statuses["Spe"]
                real_stats.append(self.stats[i] * spe_mod)
        return real_stats
    def calculate_type_multiplier(self, move_type):
        type_chart = {
            "Normal": {"Rock": .5, "Steel": .5, "Ghost": 0},
            "Fighting": {"Normal": 2, "Rock": 2, "Steel": 2, "Ice": 2, "Dark": 2, "Psychic": .5,
                         "Flying": .5, "Poison": .5, "Bug": .5, "Fairy": .5, "Ghost": 0},
            "Dragon": {"Dragon": 2, "Steel": .5, "Fairy": 0},
            "Fairy": {"Dragon": 2, "Fighting": 2, "Dark": 2, "Poison": .5, "Steel": .5, "Fire": .5},
            "Steel": {"Fairy": 2, "Rock": 2, "Ice": 2, "Steel": .5, "Fire": .5, "Water": .5, "Electric": .5},
            "Fire": {"Grass": 2, "Bug": 2, "Steel": 2, "Ice": 2, "Water": .5, "Rock": .5, "Fire": .5, "Dragon": .5},
            "Water": {"Fire": 2, "Rock": 2, "Ground": 2, "Grass": .5, "Water": .5, "Dragon": .5},
            "Grass": {"Water": 2, "Rock": 2, "Ground": 2, "Flying": .5, "Fire": .5, "Grass": .5, "Bug": .5,
                      "Poison": .5, "Steel": .5, "Dragon": .5},
            "Bug": {"Grass": 2, "Psychic": 2, "Dark": 2, "Fighting": .5, "Flying": .5, "Poison": .5, "Ghost": .5,
                    "Steel": .5, "Fire": .5, "Fairy": .5},
            "Rock": {"Ice": 2, "Fire": 2, "Flying": 2, "Bug": 2, "Steel": .5, "Fighting": .5, "Ground": .5},
            "Ground": {"Fire": 2, "Electric": 2, "Rock": 2, "Steel": 2, "Poison": 2, "Grass": .5, "Bug": .5,
                       "Flying": 0},
            "Electric": {"Water": 2, "Flying": 2, "Grass": .5, "Electric": .5, "Dragon": .5, "Ground": 0},
            "Dark": {"Psychic": 2, "Ghost": 2, "Fighting": .5, "Dark": .5, "Fairy": .5},
            "Ghost": {"Ghost": 2, "Psychic": 2, "Dark": .5, "Normal": 0},
            "Flying": {"Bug": 2, "Grass": 2, "Fighting": 2, "Rock": .5, "Steel": .5, "Electric": .5},
            "Poison": {"Grass": 2, "Fairy": 2, "Poison": .5, "Ground": .5, "Rock": .5, "Ghost": .5, "Steel": 0},
            "Psychic": {"Fighting": 2, "Poison": 2, "Psychic": .5, "Steel": .5, "Dark": 0},
            "Ice": {"Dragon": 2, "Flying": 2, "Ground": 2, "Grass": 2, "Steel": .5, "Fire": .5,
                    "Water": .5, "Ice": .5}
        }
        multiplier = 1
        if self.type[0] in type_chart[move_type]:
            multiplier *= type_chart[move_type][self.type[0]]
        if self.type[1] in type_chart[move_type]:
            multiplier *= type_chart[move_type][self.type[1]]
        return multiplier
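Taken together, `damage_calc` and `calculate_type_multiplier` implement a simplified damage formula: the `93/100` factor appears to stand in for the random 85-100% damage roll, and one chart factor is multiplied in per defending type, so dual types stack. A self-contained sketch of both pieces with purely hypothetical numbers (not taken from the bot):

```python
def stacked_multiplier(defender_types, chart):
    # One factor per defending type, as in calculate_type_multiplier above.
    m = 1.0
    for t in defender_types:
        m *= chart.get(t, 1)
    return m

def simple_damage(level, attack, defense, power, stab, effectiveness):
    # Same base expression as damage_calc, before STAB and effectiveness.
    base = (((2 * level / 5 + 2) * attack * power / defense) / 50 + 2) * 93 / 100
    return base * stab * effectiveness

# Hypothetical: a level-100 STAB Fire move (power 100) into a Grass/Steel target,
# attacker and defender both at 100 in the relevant stats.
fire_vs = {"Grass": 2, "Bug": 2, "Steel": 2, "Ice": 2,
           "Water": .5, "Rock": .5, "Fire": .5, "Dragon": .5}
eff = stacked_multiplier(["Grass", "Steel"], fire_vs)   # → 4.0 (doubly super effective)
dmg = simple_damage(100, 100, 100, 100, 1.5, eff)       # 86 * 0.93 * 1.5 * 4
```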
class Move:
    def __init__(self, type, power, category, text=None, name=None):
        self.type = type
        self.power = power
        self.category = category
        self.text = text
        self.name = name

    def __eq__(self, other):
        return self.type == other.type and self.power == other.power and self.category == other.category
def parse_move_text(move):
    all_stuff = retrieve_data()
    move_data = None
    move_name = None
    for item in all_stuff:
        move_name = item.text.split('\n')[0]
        if move_name == move or move_name.replace(" ", "").replace("-", "").lower() == move:
            move_data = item
            break
    try:
        assert move_data is not None
    except AssertionError:
        # The chart may not have rendered yet; wait and retry once
        time.sleep(2)
        all_stuff = retrieve_data()
        for item in all_stuff:
            move_name = item.text.split('\n')[0]
            if move_name == move or move_name.replace(" ", "").replace("-", "").lower() == move:
                move_data = item
                break
        assert move_data is not None
    images = move_data.find_element_by_class_name("typecol").find_elements_by_tag_name("img")
    type = images[0].get_attribute("alt")
    category = images[1].get_attribute("alt")
    power = 0
    if category != "Status":
        try:
            power = int(move_data.text.split("\n")[2])
        except ValueError:
            pass
    return Move(type, power, category, name=move_name)
def update():
    """Pre-condition: battle state is up to date until turn_to_parse - 1.
    Post-condition: battle state is up to date. Except it probably misses loads of stuff"""
    # Below is the (unfinished) log-reading method, kept for reference
    """first_line = 0
    last_line = 0
    logs = driver.find_elements_by_class_name("battle-history")
    logs = [log.text for log in logs]
    for i in range(0, len(logs)):
        if logs[i] == "Turn " + str(turn_to_parse):
            first_line = i
        if logs[i] == "Turn " + str(turn_to_parse + 1):
            last_line = i
    relevant_logs = logs[first_line:last_line]
    for log in relevant_logs:
        if " used " in log:
            # someone used a move
            # Do I care?
            pass
        elif " lost " in log:
            # someone lost health, due to being hit or life orb
            percent = extract_percent(log)
            if " opposing " in log:
                # opponent lost health
                opponent_mon_out.current_health *= percent
            else:
                # own pokemon lost health. find percent and multiply
                own_mon_out.current_health *= percent
        elif " restored " in log:
            # someone recovered health
        elif "Pointed stones dug into " in log:
            # someone took stealth rocks damage
            if "the opposing" in log:
                # opposing mon took rocks damage
        elif " had its energy drained!" in log:
            # someone recovered health through draining
        elif " fainted " in log:
            # someone fainted
            # should be detected elsewhere
            pass
        elif "Go! " in log:
            # player send someone out. switch out pokemon
        elif " sent out " in log:
            # opponent sent someone out. see if they need to be added to opponent team. switch out mon"""
    first_line = 0
    logs = [log.text for log in driver.find_elements_by_class_name("battle-history")]
    turns = [log for log in logs if "Turn " in log]
    most_recent_turn = turns[len(turns) - 1]
    for i in range(0, len(logs)):
        if logs[i] == most_recent_turn:
            first_line = i
    logs = logs[first_line:]
    my_fainted_mon = None
    your_fainted_mon = None
    for log in logs:
        if " fainted!" in log and " opposing " in log:
            # An opposing Pokémon has fainted
            name = log.split(" ")[2]
            for mon in opponent_team:
                # Unrevealed slots are still None, so guard the attribute access
                if mon is not None and mon.name == name:
                    your_fainted_mon = mon
            # Harder because you might send an unrevealed mon in to die right away
        elif " fainted!" in log:
            # One of your Pokémon has fainted
            name = log.split(" ")[0]
            for mon in own_team:
                if mon.name == name:
                    my_fainted_mon = mon
            assert my_fainted_mon is not None
    if your_fainted_mon is not None:
        your_fainted_mon.present_health = 0
    if my_fainted_mon is not None:
        my_fainted_mon.present_health = 0
    else:
        update_own_mon()
    # Not perfect for now but life goes on
    update_opponent()
def update_own_mon():
    try:
        statbar = driver.find_element_by_class_name("rstatbar")
        firstline = " ".join(statbar.text.split("\n")[0].split(" "))
        mon = " ".join(firstline.split(" ")[:len(firstline.split(" ")) - 1])
        global own_mon_out
        try:
            if own_mon_out.name != mon:
                for pokemon in own_team:
                    if pokemon.name == mon:
                        own_mon_out = pokemon
        except AttributeError:
            for pokemon in own_team:
                if pokemon.name == mon:
                    own_mon_out = pokemon
        hptext = statbar.find_element_by_class_name("hptext").text
        health_percent = int(hptext[:len(hptext) - 1]) / 100
        own_mon_out.present_health = own_mon_out.total_health * health_percent
        update_status(own_mon_out, statbar)
    except common.exceptions.NoSuchElementException:
        # Your Pokémon is not there, either because it fainted or because you have used a switching move
        # For now, assume it is due to fainting
        own_mon_out.present_health = 0
def update_opponent():
    statbar = driver.find_element_by_class_name("lstatbar")
    mon = " ".join(statbar.text.split(" ")[:len(statbar.text.split(" ")) - 1])
    already_parsed = False
    opp_mon_out = None
    for pokemon in opponent_team:
        try:
            if mon == pokemon.name:
                already_parsed = True
                opp_mon_out = pokemon
        except AttributeError:
            # Unrevealed slots are None
            pass
    global opponent_mon_out
    if not already_parsed:
        opponent_mon_out = parse_opposing_mon()
    elif opponent_mon_out is None or opponent_mon_out.name != mon:
        opponent_mon_out = opp_mon_out
    hptext = statbar.find_element_by_class_name("hptext").text
    health_percent = int(hptext[:len(hptext) - 1]) / 100
    opponent_mon_out.present_health = opponent_mon_out.total_health * health_percent
    update_status(opponent_mon_out, statbar)
def update_status(pokemon, statbar):
    status = statbar.find_element_by_class_name("status")
    statuses = status.find_elements_by_tag_name("span")
    statuses = [i.text for i in statuses]
    stat_dict = {}
    for s in statuses:
        try:
            # Stat boosts look like "1.5x Atk"; bare entries like "BRN" are flags
            text = s.split(" ")
            stat_dict[text[1]] = float(text[0][:len(text[0]) - 1])
        except (ValueError, IndexError):
            stat_dict[s] = True
    pokemon.statuses = stat_dict
def extract_percent(text):
    percent_as_int = 0
    percent_index = 0
    for i in range(0, len(text)):
        if text[i] == "%":
            percent_index = i
    for i in range(percent_index - 1, 0, -1):
        try:
            percent_as_int += int(text[i]) * 10 ** (percent_index - i - 1)
        except ValueError:
            break
    return percent_as_int / 100
# --- qemu/scripts/simplebench/bench-backup.py (repo hyunjoy/scripts, Apache-2.0 license) ---
#!/usr/bin/env python3
#
# Bench backup block-job
#
# Copyright (c) 2020 Virtuozzo International GmbH.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import argparse
import json
import simplebench
from results_to_text import results_to_text
from bench_block_job import bench_block_copy, drv_file, drv_nbd
def bench_func(env, case):
    """ Handle one "cell" of benchmarking table. """
    cmd_options = env['cmd-options'] if 'cmd-options' in env else {}
    return bench_block_copy(env['qemu-binary'], env['cmd'],
                            cmd_options,
                            case['source'], case['target'])
def bench(args):
    test_cases = []

    sources = {}
    targets = {}
    for d in args.dir:
        label, path = d.split(':')  # paths with colon not supported
        sources[label] = drv_file(path + '/test-source')
        targets[label] = drv_file(path + '/test-target')

    if args.nbd:
        nbd = args.nbd.split(':')
        host = nbd[0]
        port = '10809' if len(nbd) == 1 else nbd[1]
        drv = drv_nbd(host, port)
        sources['nbd'] = drv
        targets['nbd'] = drv

    for t in args.test:
        src, dst = t.split(':')
        test_cases.append({
            'id': t,
            'source': sources[src],
            'target': targets[dst]
        })

    binaries = []  # list of (<label>, <path>, [<options>])
    for i, q in enumerate(args.env):
        name_path = q.split(':')
        if len(name_path) == 1:
            label = f'q{i}'
            path_opts = name_path[0].split(',')
        else:
            assert len(name_path) == 2  # paths with colon not supported
            label = name_path[0]
            path_opts = name_path[1].split(',')
        binaries.append((label, path_opts[0], path_opts[1:]))

    test_envs = []
    bin_paths = {}
    for i, q in enumerate(args.env):
        opts = q.split(',')
        label_path = opts[0]
        opts = opts[1:]

        if ':' in label_path:
            # path with colon inside is not supported
            label, path = label_path.split(':')
            bin_paths[label] = path
        elif label_path in bin_paths:
            label = label_path
            path = bin_paths[label]
        else:
            path = label_path
            label = f'q{i}'
            bin_paths[label] = path

        x_perf = {}
        is_mirror = False
        for opt in opts:
            if opt == 'mirror':
                is_mirror = True
            elif opt == 'copy-range=on':
                x_perf['use-copy-range'] = True
            elif opt == 'copy-range=off':
                x_perf['use-copy-range'] = False
            elif opt.startswith('max-workers='):
                x_perf['max-workers'] = int(opt.split('=')[1])

        if is_mirror:
            assert not x_perf
            test_envs.append({
                'id': f'mirror({label})',
                'cmd': 'blockdev-mirror',
                'qemu-binary': path
            })
        else:
            test_envs.append({
                'id': f'backup({label})\n' + '\n'.join(opts),
                'cmd': 'blockdev-backup',
                'cmd-options': {'x-perf': x_perf} if x_perf else {},
                'qemu-binary': path
            })

    result = simplebench.bench(bench_func, test_envs, test_cases, count=3)
    with open('results.json', 'w') as f:
        json.dump(result, f, indent=4)
    print(results_to_text(result))
class ExtendAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        items = getattr(namespace, self.dest) or []
        items.extend(values)
        setattr(namespace, self.dest, items)
if __name__ == '__main__':
    p = argparse.ArgumentParser('Backup benchmark', epilog='''
ENV format

(LABEL:PATH|LABEL|PATH)[,max-workers=N][,copy-range=(on|off)][,mirror]

LABEL                short name for the binary
PATH                 path to the binary
max-workers          set x-perf.max-workers of backup job
copy-range           set x-perf.use-copy-range of backup job
mirror               use mirror job instead of backup''',
                               formatter_class=argparse.RawTextHelpFormatter)
    p.add_argument('--env', nargs='+', help='''\
Qemu binaries with labels and options, see below
"ENV format" section''',
                   action=ExtendAction)
    p.add_argument('--dir', nargs='+', help='''\
Directories, each containing "test-source" and/or
"test-target" files, raw images to be used in
benchmarking. File path with label, like
label:/path/to/directory''',
                   action=ExtendAction)
    p.add_argument('--nbd', help='''\
host:port for remote NBD image (or just host, for
default port 10809). Use it in tests; label is "nbd"
(but you cannot create test nbd:nbd).''')
    p.add_argument('--test', nargs='+', help='''\
Tests, in form source-dir-label:target-dir-label''',
                   action=ExtendAction)

    bench(p.parse_args())
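The way `bench()` decomposes one `--env` entry can be sketched in isolation; the entry below is hypothetical, but the splitting mirrors the second parsing loop in `bench()`:

```python
# Hypothetical --env entry: label "new", binary path, two job options.
q = "new:path/to/qemu,max-workers=4,copy-range=off"

opts = q.split(',')
label_path, opts = opts[0], opts[1:]
label, path = label_path.split(':')
# label == 'new', path == 'path/to/qemu',
# opts == ['max-workers=4', 'copy-range=off']
```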