hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9f06512f5b8444b37252ffc2be6038a53e52748e | 47 | py | Python | akebono/formatter.py | OTA2000/akebono | 11f88f3605a66989ac5cf11cb6af7b93987bcf59 | [
"MIT"
] | 3 | 2018-09-28T01:35:41.000Z | 2020-06-22T07:09:14.000Z | akebono/formatter.py | OTA2000/akebono | 11f88f3605a66989ac5cf11cb6af7b93987bcf59 | [
"MIT"
] | 1 | 2020-01-06T08:15:10.000Z | 2020-01-06T08:15:10.000Z | akebono/formatter.py | OTA2000/akebono | 11f88f3605a66989ac5cf11cb6af7b93987bcf59 | [
"MIT"
] | 6 | 2018-08-10T03:04:28.000Z | 2020-02-03T02:28:08.000Z | def get_values(pdobj):
return pdobj.values
| 15.666667 | 23 | 0.744681 | 7 | 47 | 4.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 24 | 23.5 | 0.871795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
9f2e6388759e75bd386c38e16b810fe613a4a835 | 5,549 | py | Python | tests/test_pipeline_manager/test_itemview.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | 2 | 2020-11-29T19:09:14.000Z | 2021-09-11T19:21:21.000Z | tests/test_pipeline_manager/test_itemview.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | 47 | 2020-02-01T03:54:07.000Z | 2022-01-13T02:24:45.000Z | tests/test_pipeline_manager/test_itemview.py | nickderobertis/py-file-conf | 100773b86373035a5b485a1ed96d8f5a1d69d066 | [
"MIT"
] | null | null | null | import os
from copy import deepcopy
from pyfileconf import Selector, PipelineManager, context
from pyfileconf.sectionpath.sectionpath import SectionPath
from pyfileconf.selector.models.itemview import ItemView
from tests.input_files.amodule import SecondExampleClass, a_function
from tests.input_files.mypackage.cmodule import ExampleClass, ExampleClassProtocol
from tests.test_pipeline_manager.base import PipelineManagerTestBase, CLASS_CONFIG_DICT_LIST, SAME_CLASS_CONFIG_DICT_LIST, \
DIFFERENT_CLASS_CONFIG_DICT_LIST
class TestItemView(PipelineManagerTestBase):
def assert_valid_function_iv(self, iv: ItemView, pipeline_manager: PipelineManager):
assert isinstance(iv, ItemView)
assert iv() == pipeline_manager.run(iv)
assert iv.item() == pipeline_manager.get(iv)()
iv2 = ItemView.from_section_path_str(iv._section_path_str)
assert iv == iv2
assert iv2 in [iv]
assert iv == list({iv})[0]
assert {iv: 5}[iv] == 5
assert iv.type == type(a_function)
assert isinstance(iv, type(a_function))
assert hash(iv) == hash(iv.section_path_str)
def assert_valid_class_iv(self, iv: ItemView, pipeline_manager: PipelineManager):
assert isinstance(iv, ItemView)
assert iv() == pipeline_manager.get(iv)
assert iv()() == pipeline_manager.run(iv)
assert iv.item() == pipeline_manager.get(iv)()
iv2 = ItemView.from_section_path_str(iv._section_path_str)
assert iv == iv2
assert iv2 in [iv]
assert iv == list({iv})[0]
assert {iv: 5}[iv] == 5
assert iv.type == ExampleClass
assert isinstance(iv, ExampleClass)
assert hash(iv) == hash(iv.section_path_str)
def assert_valid_specific_class_iv(self, iv: ItemView, pipeline_manager: PipelineManager):
assert isinstance(iv, ItemView)
assert iv() == pipeline_manager.get(iv)()
assert iv() == pipeline_manager.run(iv)
assert iv.a == pipeline_manager.get(iv).a
assert iv.item() == pipeline_manager.get(iv)()
iv2 = ItemView.from_section_path_str(iv._section_path_str)
assert iv == iv2
assert iv2 in [iv]
assert iv == list({iv})[0]
assert {iv: 5}[iv] == 5
assert hash(iv) == hash(iv.section_path_str)
assert iv.type == ExampleClass
assert isinstance(iv, ExampleClass)
assert isinstance(iv, ExampleClassProtocol)
def test_function_iv_from_selector(self):
self.write_a_function_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff.a_function
self.assert_valid_function_iv(iv, pipeline_manager)
def test_function_iv_from_str(self):
self.write_a_function_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
iv = ItemView.from_section_path_str('test_pipeline_manager.stuff.a_function')
self.assert_valid_function_iv(iv, pipeline_manager)
def test_class_iv_from_selector(self):
self.write_example_class_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.stuff.ExampleClass
self.assert_valid_class_iv(iv, pipeline_manager)
def test_class_iv_from_str(self):
self.write_example_class_to_pipeline_dict_file()
pipeline_manager = self.create_pm()
pipeline_manager.load()
iv = ItemView.from_section_path_str('test_pipeline_manager.stuff.ExampleClass')
self.assert_valid_class_iv(iv, pipeline_manager)
def test_specific_class_iv_from_selector(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff.data
self.assert_valid_specific_class_iv(iv, pipeline_manager)
def test_specific_class_iv_attribute_is_specific_class_from_selector(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
sel = Selector()
pipeline_manager.create('example_class.stuff.data2')
pipeline_manager.update(
section_path_str='example_class.stuff.data',
a=sel.test_pipeline_manager.example_class.stuff.data2
)
sel = Selector()
iv = sel.test_pipeline_manager.example_class.stuff.data.a
self.assert_valid_specific_class_iv(iv, pipeline_manager)
# Accessing a second time was causing a new ItemView to be
# created for the attribute itself, which is not expected.
# Add this second check to ensure it doesn't happen again.
iv = sel.test_pipeline_manager.example_class.stuff.data.a
self.assert_valid_specific_class_iv(iv, pipeline_manager)
def test_specific_class_iv_from_str(self):
self.write_example_class_dict_to_file()
pipeline_manager = self.create_pm(
specific_class_config_dicts=CLASS_CONFIG_DICT_LIST
)
pipeline_manager.load()
iv = ItemView.from_section_path_str('test_pipeline_manager.example_class.stuff.data')
self.assert_valid_specific_class_iv(iv, pipeline_manager)
| 43.015504 | 124 | 0.705172 | 714 | 5,549 | 5.12605 | 0.128852 | 0.188525 | 0.060383 | 0.04153 | 0.76776 | 0.753279 | 0.746995 | 0.736339 | 0.729235 | 0.69153 | 0 | 0.004567 | 0.210849 | 5,549 | 128 | 125 | 43.351563 | 0.83124 | 0.030636 | 0 | 0.627273 | 0 | 0 | 0.032198 | 0.032198 | 0 | 0 | 0 | 0 | 0.409091 | 1 | 0.090909 | false | 0 | 0.072727 | 0 | 0.172727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f67c2447734685be428341ad9304eb72de77af0 | 43 | py | Python | qflow/wavefunctions/__init__.py | johanere/qflow | 5453cd5c3230ad7f082adf9ec1aea63ab0a4312a | [
"MIT"
] | 5 | 2019-07-24T21:46:24.000Z | 2021-06-11T18:18:24.000Z | qflow/wavefunctions/__init__.py | johanere/qflow | 5453cd5c3230ad7f082adf9ec1aea63ab0a4312a | [
"MIT"
] | 22 | 2019-02-19T10:49:26.000Z | 2019-07-18T09:42:13.000Z | qflow/wavefunctions/__init__.py | bsamseth/FYS4411 | 72b879e7978364498c48fc855b5df676c205f211 | [
"MIT"
] | 2 | 2020-11-04T15:17:24.000Z | 2021-11-03T16:37:38.000Z | from _qflow_backend.wavefunctions import *
| 21.5 | 42 | 0.860465 | 5 | 43 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4c9b50badb6f163da25a863ecbde910075b10c46 | 19 | py | Python | python/testData/completion/relativeImport/pkg/name.after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/completion/relativeImport/pkg/name.after.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/completion/relativeImport/pkg/name.after.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | from . import xyzzy | 19 | 19 | 0.789474 | 3 | 19 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4ccf6b59509741b679dcc8ed9b75257c8e6e75a2 | 740 | py | Python | test/tests/import_test.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | 1 | 2015-11-06T03:39:51.000Z | 2015-11-06T03:39:51.000Z | test/tests/import_test.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | null | null | null | test/tests/import_test.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | null | null | null | # expected: fail
# (doesn't collapse '..' in module paths)
# imports test
import sys
print sys.path[0]
import import_target
print import_target.x
import import_target
import_target.foo()
c = import_target.C()
print import_target.import_nested_target.y
import_target.import_nested_target.bar()
d = import_target.import_nested_target.D()
print "testing importfrom:"
from import_target import x as z
print z
import_nested_target = 15
from import_nested_target import y
print "This should still be 15:",import_nested_target
import import_nested_target
print import_nested_target
print import_nested_target.y
import_target.import_nested_target.y = import_nested_target.y + 1
print import_nested_target.y
print z
print y
print __name__
| 19.473684 | 65 | 0.818919 | 119 | 740 | 4.773109 | 0.294118 | 0.253521 | 0.380282 | 0.167254 | 0.382042 | 0.286972 | 0.286972 | 0.172535 | 0.172535 | 0 | 0 | 0.009174 | 0.116216 | 740 | 37 | 66 | 20 | 0.859327 | 0.090541 | 0 | 0.25 | 0 | 0 | 0.064275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.791667 | null | null | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
4cf2962ed2321140a263150091cf0692c22b9a31 | 39 | py | Python | bunnychat/__init__.py | mlmarius/bunnychat | 62597d65fdfebd41fe9f911c33f2733d7b230d67 | [
"MIT"
] | null | null | null | bunnychat/__init__.py | mlmarius/bunnychat | 62597d65fdfebd41fe9f911c33f2733d7b230d67 | [
"MIT"
] | null | null | null | bunnychat/__init__.py | mlmarius/bunnychat | 62597d65fdfebd41fe9f911c33f2733d7b230d67 | [
"MIT"
] | null | null | null | from bunnychat.client import BunnyChat
| 19.5 | 38 | 0.871795 | 5 | 39 | 6.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9809ce717e0b6e47f72915397820e1d8853b148c | 197 | py | Python | sqlalchemy_monetdb/provision.py | MitchellWeg/sqlalchemy-monetdb | f627345b92ce632c18b3031049a331444f9a37a2 | [
"MIT"
] | null | null | null | sqlalchemy_monetdb/provision.py | MitchellWeg/sqlalchemy-monetdb | f627345b92ce632c18b3031049a331444f9a37a2 | [
"MIT"
] | null | null | null | sqlalchemy_monetdb/provision.py | MitchellWeg/sqlalchemy-monetdb | f627345b92ce632c18b3031049a331444f9a37a2 | [
"MIT"
] | null | null | null | from sqlalchemy.testing.provision import temp_table_keyword_args
@temp_table_keyword_args.for_db("monetdb")
def _monetdb_temp_table_keyword_args(cfg, eng):
return {"prefixes": ["TEMPORARY"]}
| 28.142857 | 64 | 0.807107 | 27 | 197 | 5.444444 | 0.666667 | 0.183673 | 0.326531 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086294 | 197 | 6 | 65 | 32.833333 | 0.816667 | 0 | 0 | 0 | 0 | 0 | 0.121827 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
e2236e507bfec29e099397d37e287fb00a61fbf2 | 35,711 | py | Python | venv/lib/python3.6/site-packages/ansible_collections/f5networks/f5_modules/tests/unit/modules/network/f5/test_bigip_virtual_server.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | 1 | 2020-01-22T13:11:23.000Z | 2020-01-22T13:11:23.000Z | venv/lib/python3.6/site-packages/ansible_collections/f5networks/f5_modules/tests/unit/modules/network/f5/test_bigip_virtual_server.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | 12 | 2020-02-21T07:24:52.000Z | 2020-04-14T09:54:32.000Z | venv/lib/python3.6/site-packages/ansible_collections/f5networks/f5_modules/tests/unit/modules/network/f5/test_bigip_virtual_server.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (c) 2017 F5 Networks Inc.
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
import sys
if sys.version_info < (2, 7):
pytestmark = pytest.mark.skip("F5 Ansible modules require Python >= 2.7")
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server import (
ApiParameters, ModuleParameters, ModuleManager, ArgumentSpec
)
from ansible_collections.f5networks.f5_modules.tests.unit.compat import unittest
from ansible_collections.f5networks.f5_modules.tests.unit.compat.mock import Mock, patch
from ansible_collections.f5networks.f5_modules.tests.unit.modules.utils import set_module_args
fixture_path = os.path.join(os.path.dirname(__file__), 'fixtures')
fixture_data = {}
def load_fixture(name):
path = os.path.join(fixture_path, name)
if path in fixture_data:
return fixture_data[path]
with open(path) as f:
data = f.read()
try:
data = json.loads(data)
except Exception:
pass
fixture_data[path] = data
return data
class TestParameters(unittest.TestCase):
def test_destination_mutex_1(self):
args = dict(
destination='1.1.1.1'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
def test_destination_mutex_2(self):
args = dict(
destination='1.1.1.1%2'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
assert p.destination_tuple.route_domain == 2
def test_destination_mutex_3(self):
args = dict(
destination='1.1.1.1:80'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
assert p.destination_tuple.port == 80
def test_destination_mutex_4(self):
args = dict(
destination='1.1.1.1%2:80'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
assert p.destination_tuple.port == 80
assert p.destination_tuple.route_domain == 2
def test_api_destination_mutex_5(self):
args = dict(
destination='/Common/1.1.1.1'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
def test_api_destination_mutex_6(self):
args = dict(
destination='/Common/1.1.1.1%2'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
assert p.destination_tuple.route_domain == 2
def test_api_destination_mutex_7(self):
args = dict(
destination='/Common/1.1.1.1:80'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
assert p.destination_tuple.port == 80
def test_api_destination_mutex_8(self):
args = dict(
destination='/Common/1.1.1.1%2:80'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '1.1.1.1'
assert p.destination_tuple.port == 80
assert p.destination_tuple.route_domain == 2
def test_destination_mutex_9(self):
args = dict(
destination='2700:bc00:1f10:101::6'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '2700:bc00:1f10:101::6'
def test_destination_mutex_10(self):
args = dict(
destination='2700:bc00:1f10:101::6%2'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '2700:bc00:1f10:101::6'
assert p.destination_tuple.route_domain == 2
def test_destination_mutex_11(self):
args = dict(
destination='2700:bc00:1f10:101::6.80'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '2700:bc00:1f10:101::6'
assert p.destination_tuple.port == 80
def test_destination_mutex_12(self):
args = dict(
destination='2700:bc00:1f10:101::6%2.80'
)
p = ApiParameters(params=args)
assert p.destination_tuple.ip == '2700:bc00:1f10:101::6'
assert p.destination_tuple.port == 80
assert p.destination_tuple.route_domain == 2
def test_module_no_partition_prefix_parameters(self):
args = dict(
state='present',
partition='Common',
name='my-virtual-server',
destination='10.10.10.10',
port=443,
pool='my-pool',
snat='Automap',
description='Test Virtual Server',
profiles=[
dict(
name='fix',
context='all'
)
],
enabled_vlans=['vlan2']
)
p = ModuleParameters(params=args)
assert p.name == 'my-virtual-server'
assert p.partition == 'Common'
assert p.port == 443
assert p.destination == '/Common/10.10.10.10:443'
assert p.pool == '/Common/my-pool'
assert p.snat == {'type': 'automap'}
assert p.description == 'Test Virtual Server'
assert len(p.profiles) == 1
assert 'context' in p.profiles[0]
assert 'name' in p.profiles[0]
assert '/Common/vlan2' in p.enabled_vlans
def test_module_partition_prefix_parameters(self):
args = dict(
state='present',
partition='Common',
name='my-virtual-server',
destination='10.10.10.10',
port=443,
pool='/Common/my-pool',
snat='Automap',
description='Test Virtual Server',
profiles=[
dict(
name='fix',
context='all'
)
],
enabled_vlans=['/Common/vlan2']
)
p = ModuleParameters(params=args)
assert p.name == 'my-virtual-server'
assert p.partition == 'Common'
assert p.port == 443
assert p.destination == '/Common/10.10.10.10:443'
assert p.pool == '/Common/my-pool'
assert p.snat == {'type': 'automap'}
assert p.description == 'Test Virtual Server'
assert len(p.profiles) == 1
assert 'context' in p.profiles[0]
assert 'name' in p.profiles[0]
assert '/Common/vlan2' in p.enabled_vlans
def test_api_parameters_variables(self):
args = {
"kind": "tm:ltm:virtual:virtualstate",
"name": "my-virtual-server",
"partition": "Common",
"fullPath": "/Common/my-virtual-server",
"generation": 54,
"selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~my-virtual-server?expandSubcollections=true&ver=12.1.2",
"addressStatus": "yes",
"autoLasthop": "default",
"cmpEnabled": "yes",
"connectionLimit": 0,
"description": "Test Virtual Server",
"destination": "/Common/10.10.10.10:443",
"enabled": True,
"gtmScore": 0,
"ipProtocol": "tcp",
"mask": "255.255.255.255",
"mirror": "disabled",
"mobileAppTunnel": "disabled",
"nat64": "disabled",
"rateLimit": "disabled",
"rateLimitDstMask": 0,
"rateLimitMode": "object",
"rateLimitSrcMask": 0,
"serviceDownImmediateAction": "none",
"source": "0.0.0.0/0",
"sourceAddressTranslation": {
"type": "automap"
},
"sourcePort": "preserve",
"synCookieStatus": "not-activated",
"translateAddress": "enabled",
"translatePort": "enabled",
"vlansEnabled": True,
"vsIndex": 3,
"vlans": [
"/Common/net1"
],
"vlansReference": [
{
"link": "https://localhost/mgmt/tm/net/vlan/~Common~net1?ver=12.1.2"
}
],
"policiesReference": {
"link": "https://localhost/mgmt/tm/ltm/virtual/~Common~my-virtual-server/policies?ver=12.1.2",
"isSubcollection": True
},
"profilesReference": {
"link": "https://localhost/mgmt/tm/ltm/virtual/~Common~my-virtual-server/profiles?ver=12.1.2",
"isSubcollection": True,
"items": [
{
"kind": "tm:ltm:virtual:profiles:profilesstate",
"name": "http",
"partition": "Common",
"fullPath": "/Common/http",
"generation": 54,
"selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~my-virtual-server/profiles/~Common~http?ver=12.1.2",
"context": "all",
"nameReference": {
"link": "https://localhost/mgmt/tm/ltm/profile/http/~Common~http?ver=12.1.2"
}
},
{
"kind": "tm:ltm:virtual:profiles:profilesstate",
"name": "serverssl",
"partition": "Common",
"fullPath": "/Common/serverssl",
"generation": 54,
"selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~my-virtual-server/profiles/~Common~serverssl?ver=12.1.2",
"context": "serverside",
"nameReference": {
"link": "https://localhost/mgmt/tm/ltm/profile/server-ssl/~Common~serverssl?ver=12.1.2"
}
},
{
"kind": "tm:ltm:virtual:profiles:profilesstate",
"name": "tcp",
"partition": "Common",
"fullPath": "/Common/tcp",
"generation": 54,
"selfLink": "https://localhost/mgmt/tm/ltm/virtual/~Common~my-virtual-server/profiles/~Common~tcp?ver=12.1.2",
"context": "all",
"nameReference": {
"link": "https://localhost/mgmt/tm/ltm/profile/tcp/~Common~tcp?ver=12.1.2"
}
}
]
}
}
p = ApiParameters(params=args)
assert p.name == 'my-virtual-server'
assert p.partition == 'Common'
assert p.port == 443
assert p.destination == '/Common/10.10.10.10:443'
assert p.snat == {'type': 'automap'}
assert p.description == 'Test Virtual Server'
assert 'context' in p.profiles[0]
assert 'name' in p.profiles[0]
assert 'fullPath' in p.profiles[0]
assert p.profiles[0]['context'] == 'all'
assert p.profiles[0]['name'] == 'http'
assert p.profiles[0]['fullPath'] == '/Common/http'
assert '/Common/net1' in p.vlans
def test_module_address_translation_enabled(self):
args = dict(
address_translation=True
)
p = ModuleParameters(params=args)
assert p.address_translation == 'enabled'
def test_module_address_translation_disabled(self):
args = dict(
address_translation=False
)
p = ModuleParameters(params=args)
assert p.address_translation == 'disabled'
class TestManager(unittest.TestCase):
def setUp(self):
self.spec = ArgumentSpec()
self.p1 = patch('ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server.modules_provisioned')
self.m1 = self.p1.start()
self.m1.return_value = ['ltm', 'gtm', 'asm']
self.p2 = patch(
'ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server.Parameters._read_current_clientssl_profiles_from_device'
)
self.p3 = patch(
'ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server.Parameters._read_current_serverssl_profiles_from_device'
)
self.p4 = patch(
'ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server.VirtualServerValidator.check_create'
)
self.p5 = patch(
'ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server.VirtualServerValidator.check_update'
)
self.m2 = self.p2.start()
self.m3 = self.p3.start()
self.m4 = self.p4.start()
self.m5 = self.p5.start()
self.m2.return_value = ['asda', 'clientssl', 'cs_foobar.star.local']
self.m3.return_value = ['baz', 'serverssl', 'ss_foobar.star.local']
self.m4.return_value = Mock(return_value=True)
self.m5.return_value = Mock(return_value=True)
self.p6 = patch('ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server.tmos_version')
self.p7 = patch('ansible_collections.f5networks.f5_modules.plugins.modules.bigip_virtual_server.send_teem')
self.m6 = self.p6.start()
self.m6.return_value = '14.1.0'
self.m7 = self.p7.start()
self.m7.return_value = True
def tearDown(self):
self.p1.stop()
self.p2.stop()
self.p3.stop()
self.p4.stop()
self.p5.stop()
def test_create_virtual_server(self, *args):
set_module_args(dict(
all_profiles=[
dict(
name='http'
),
dict(
name='clientssl'
)
],
description="Test Virtual Server",
destination="10.10.10.10",
name="my-snat-pool",
partition="Common",
port="443",
snat="Automap",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
def test_delete_virtual_server(self, *args):
set_module_args(dict(
all_profiles=[
'http', 'clientssl'
],
description="Test Virtual Server",
destination="10.10.10.10",
name="my-snat-pool",
partition="Common",
port="443",
snat="Automap",
state="absent",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
results = mm.exec_module()
assert results['changed'] is False
def test_enable_vs_that_is_already_enabled(self, *args):
set_module_args(dict(
all_profiles=[
'http', 'clientssl'
],
description="Test Virtual Server",
destination="10.10.10.10",
name="my-snat-pool",
partition="Common",
port="443",
snat="Automap",
state="absent",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
# Configure the parameters that would be returned by querying the
# remote device
current = ApiParameters(
dict(
agent_status_traps='disabled'
)
)
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.update_on_device = Mock(return_value=True)
mm.read_current_from_device = Mock(return_value=current)
results = mm.exec_module()
assert results['changed'] is False
def test_modify_port(self, *args):
set_module_args(dict(
name="my-virtual-server",
partition="Common",
port="10443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
# Configure the parameters that would be returned by querying the
# remote device
current = ApiParameters(params=load_fixture('load_ltm_virtual_1.json'))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=True)
mm.read_current_from_device = Mock(return_value=current)
mm.update_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
def test_modify_port_idempotent(self, *args):
set_module_args(dict(
name="my-virtual-server",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
# Configure the parameters that would be returned by querying the
# remote device
current = ApiParameters(params=load_fixture('load_ltm_virtual_1.json'))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=True)
mm.read_current_from_device = Mock(return_value=current)
results = mm.exec_module()
assert results['changed'] is False
def test_modify_vlans_idempotent(self, *args):
set_module_args(dict(
name="my-virtual-server",
partition="Common",
disabled_vlans=[
"net1"
],
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
# Configure the parameters that would be returned by querying the
# remote device
current = ApiParameters(params=load_fixture('load_ltm_virtual_2.json'))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=True)
mm.read_current_from_device = Mock(return_value=current)
results = mm.exec_module()
assert results['changed'] is False
def test_modify_profiles(self, *args):
set_module_args(dict(
name="my-virtual-server",
partition="Common",
profiles=[
'http', 'clientssl'
],
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
# Configure the parameters that would be returned by querying the
# remote device
current = ApiParameters(params=load_fixture('load_ltm_virtual_2.json'))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=True)
mm.read_current_from_device = Mock(return_value=current)
mm.update_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert len(results['profiles']) == 2
assert 'name' in results['profiles'][0]
assert 'context' in results['profiles'][0]
assert results['profiles'][0]['name'] == 'http'
assert results['profiles'][0]['context'] == 'all'
assert 'name' in results['profiles'][1]
assert 'context' in results['profiles'][1]
assert results['profiles'][1]['name'] == 'clientssl'
assert results['profiles'][1]['context'] == 'clientside'
def test_update_virtual_server(self, *args):
set_module_args(dict(
profiles=[
dict(
name='http'
),
dict(
name='clientssl'
)
],
description="foo virtual",
destination="1.1.1.1",
name="my-virtual-server",
partition="Common",
port="8443",
snat="snat-pool1",
state="disabled",
source='1.2.3.4/32',
irules=[
'irule1',
'irule2'
],
policies=[
'policy1',
'policy2'
],
enabled_vlans=[
'vlan1',
'vlan2'
],
pool='my-pool',
default_persistence_profile='source_addr',
fallback_persistence_profile='dest_addr',
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
# Configure the parameters that would be returned by querying the
# remote device
current = ApiParameters(params=load_fixture('load_ltm_virtual_3.json'))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=True)
mm.read_current_from_device = Mock(return_value=current)
mm.update_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['source'] == '1.2.3.4/32'
assert results['description'] == 'foo virtual'
assert results['snat'] == '/Common/snat-pool1'
assert results['destination'] == '1.1.1.1'
assert results['port'] == 8443
assert results['default_persistence_profile'] == '/Common/source_addr'
assert results['fallback_persistence_profile'] == '/Common/dest_addr'
# policies
assert len(results['policies']) == 2
assert '/Common/policy1' in results['policies']
assert '/Common/policy2' in results['policies']
# irules
assert len(results['irules']) == 2
assert '/Common/irule1' in results['irules']
assert '/Common/irule2' in results['irules']
# vlans
assert len(results['enabled_vlans']) == 2
assert '/Common/vlan1' in results['enabled_vlans']
assert '/Common/vlan2' in results['enabled_vlans']
# profiles
assert len(results['profiles']) == 2
assert 'name' in results['profiles'][0]
assert 'context' in results['profiles'][0]
assert results['profiles'][0]['name'] == 'http'
assert results['profiles'][0]['context'] == 'all'
assert 'name' in results['profiles'][1]
assert 'context' in results['profiles'][1]
assert results['profiles'][1]['name'] == 'clientssl'
assert results['profiles'][1]['context'] == 'clientside'
def test_create_virtual_server_with_address_translation_bool_true(self, *args):
set_module_args(dict(
destination="10.10.10.10",
address_translation=True,
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['address_translation'] is True
def test_create_virtual_server_with_address_translation_string_yes(self, *args):
set_module_args(dict(
destination="10.10.10.10",
address_translation='yes',
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['address_translation'] is True
def test_create_virtual_server_with_address_translation_bool_false(self, *args):
set_module_args(dict(
destination="10.10.10.10",
address_translation=False,
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['address_translation'] is False
def test_create_virtual_server_with_address_translation_string_no(self, *args):
set_module_args(dict(
destination="10.10.10.10",
address_translation='no',
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['address_translation'] is False
def test_create_virtual_server_with_port_translation_bool_true(self, *args):
set_module_args(dict(
destination="10.10.10.10",
port_translation=True,
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['port_translation'] is True
def test_create_virtual_server_with_port_translation_string_yes(self, *args):
set_module_args(dict(
destination="10.10.10.10",
port_translation='yes',
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['port_translation'] is True
def test_create_virtual_server_with_port_translation_bool_false(self, *args):
set_module_args(dict(
destination="10.10.10.10",
port_translation=False,
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['port_translation'] is False
def test_create_virtual_server_with_port_translation_string_no(self, *args):
set_module_args(dict(
destination="10.10.10.10",
port_translation='no',
name="my-snat-pool",
partition="Common",
port="443",
state="present",
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
assert results['port_translation'] is False
def test_create_virtual_server_with_check_profiles_bool_true(self, *args):
set_module_args(dict(
all_profiles=[
'http', 'clientssl'
],
destination="10.10.10.10",
name="my-snat-pool",
partition="Common",
port="443",
state="present",
check_profiles='yes',
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
def test_create_virtual_server_with_check_profiles_bool_false(self, *args):
set_module_args(dict(
all_profiles=[
'http', 'clientssl'
],
destination="10.10.10.10",
name="my-snat-pool",
partition="Common",
port="443",
state="present",
check_profiles='no',
provider=dict(
server='localhost',
password='password',
user='admin'
)
))
module = AnsibleModule(
argument_spec=self.spec.argument_spec,
supports_check_mode=self.spec.supports_check_mode,
mutually_exclusive=self.spec.mutually_exclusive
)
# Override methods to force specific logic in the module to happen
mm = ModuleManager(module=module)
mm.exists = Mock(return_value=False)
mm.create_on_device = Mock(return_value=True)
results = mm.exec_module()
assert results['changed'] is True
| 34.976494 | 148 | 0.565456 | 3,753 | 35,711 | 5.209699 | 0.083666 | 0.011661 | 0.031455 | 0.038666 | 0.840886 | 0.823803 | 0.814802 | 0.805851 | 0.785802 | 0.765957 | 0 | 0.028191 | 0.32553 | 35,711 | 1,020 | 149 | 35.010784 | 0.783567 | 0.050853 | 0 | 0.644186 | 0 | 0.011628 | 0.180677 | 0.04212 | 0 | 0 | 0 | 0 | 0.140698 | 1 | 0.044186 | false | 0.022093 | 0.011628 | 0 | 0.060465 | 0.001163 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e236a5a6d3e050b1bcc99e1b6ccab765175c9689 | 108 | py | Python | EduData/DataSet/__init__.py | BAOOOOOM/EduData | affa465779cb94db00ed19291f8411229d342c0f | [
"Apache-2.0"
] | 98 | 2019-07-05T03:27:36.000Z | 2022-03-30T08:38:09.000Z | EduData/DataSet/__init__.py | BAOOOOOM/EduData | affa465779cb94db00ed19291f8411229d342c0f | [
"Apache-2.0"
] | 45 | 2020-12-25T03:49:43.000Z | 2021-11-26T09:45:42.000Z | EduData/DataSet/__init__.py | BAOOOOOM/EduData | affa465779cb94db00ed19291f8411229d342c0f | [
"Apache-2.0"
] | 50 | 2019-08-17T05:11:15.000Z | 2022-03-29T07:54:13.000Z | # coding: utf-8
# 2019/8/23 @ tongshiwei
from .download_data.download_data import get_data, list_resources
| 21.6 | 65 | 0.787037 | 17 | 108 | 4.764706 | 0.764706 | 0.296296 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084211 | 0.12037 | 108 | 4 | 66 | 27 | 0.768421 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c4fc7a497c95fbd379e12fe64ec8486a162b79f | 217 | py | Python | procstream/source/__init__.py | nipunbalan/procstream-kafka-python | da7d28b66522721de7fefe85c890cc79a2785bdc | [
"MIT"
] | null | null | null | procstream/source/__init__.py | nipunbalan/procstream-kafka-python | da7d28b66522721de7fefe85c890cc79a2785bdc | [
"MIT"
] | null | null | null | procstream/source/__init__.py | nipunbalan/procstream-kafka-python | da7d28b66522721de7fefe85c890cc79a2785bdc | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from procstream.source.source import DataSourceService
from procstream.source.source_twitter import TwitterDataCollector
__all__ = ['DataSourceService', 'TwitterDataCollector'] | 36.166667 | 65 | 0.866359 | 21 | 217 | 8.47619 | 0.47619 | 0.157303 | 0.224719 | 0.292135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078341 | 217 | 6 | 66 | 36.166667 | 0.89 | 0 | 0 | 0 | 0 | 0 | 0.169725 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2cb113039c8bf818da3784c2ea7c56e3131922df | 1,779 | py | Python | tests/test_shortcut.py | jiri-otoupal/py-cross-kit | a39c22a5b84f914c4ddccbe3645635a18c50bbc8 | [
"Apache-2.0"
] | 4 | 2021-02-28T11:25:56.000Z | 2021-04-27T07:12:56.000Z | tests/test_shortcut.py | jiri-otoupal/py-cross-kit | a39c22a5b84f914c4ddccbe3645635a18c50bbc8 | [
"Apache-2.0"
] | 1 | 2021-05-02T16:40:54.000Z | 2021-05-08T14:16:26.000Z | tests/test_shortcut.py | jiri-otoupal/py-cross-kit | a39c22a5b84f914c4ddccbe3645635a18c50bbc8 | [
"Apache-2.0"
] | 1 | 2021-07-11T11:43:03.000Z | 2021-07-11T11:43:03.000Z | import os
import unittest
from pycrosskit.shortcuts import Shortcut
class Test_Shortcuts(unittest.TestCase):
def test_create_desktop(self):
try:
sh = Shortcut("Test", "__init__.py", desktop=True)
self.assertEqual(True, os.path.exists(sh.desktop_path))
except:
self.assertEqual(True, False)
def test_delete_desktop(self):
try:
desktop, startmenu = Shortcut.delete("Test", desktop=True)
self.assertEqual(True, not os.path.exists(desktop))
except:
self.assertEqual(True, False)
def test_create_startmenu(self):
try:
sh = Shortcut("Test", "__init__.py", start_menu=True)
self.assertEqual(True, os.path.exists(sh.startmenu_path))
except:
self.assertEqual(True, False)
def test_delete_startmenu(self):
try:
desktop, startmenu = Shortcut.delete("Test", start_menu=True)
self.assertEqual(True, not os.path.exists(startmenu))
except:
self.assertEqual(True, False)
def test_create_both(self):
try:
sh = Shortcut("Test", "__init__.py", desktop=True, start_menu=True)
self.assertEqual(True, os.path.exists(sh.desktop_path))
self.assertEqual(True, os.path.exists(sh.startmenu_path))
except:
self.assertEqual(True, False)
def test_delete_both(self):
try:
desktop, startmenu = Shortcut.delete("Test", desktop=True, start_menu=True)
self.assertEqual(True, not os.path.exists(desktop))
self.assertEqual(True, not os.path.exists(startmenu))
except:
self.assertEqual(True, False)
if __name__ == '__main__':
unittest.main()
| 31.767857 | 87 | 0.619449 | 203 | 1,779 | 5.226601 | 0.162562 | 0.197926 | 0.250707 | 0.130066 | 0.841659 | 0.841659 | 0.841659 | 0.777568 | 0.724788 | 0.473139 | 0 | 0 | 0.26869 | 1,779 | 55 | 88 | 32.345455 | 0.815527 | 0 | 0 | 0.590909 | 0 | 0 | 0.036537 | 0 | 0 | 0 | 0 | 0 | 0.318182 | 1 | 0.136364 | false | 0 | 0.068182 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e2e5d998522e6cf1d60c90d8c0770203e2e9507c | 2,303 | py | Python | epytope/Data/pssms/smm/mat/A_30_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smm/mat/A_30_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smm/mat/A_30_01_9.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | A_30_01_9 = {0: {'A': -0.311, 'C': 0.147, 'E': 0.673, 'D': 0.584, 'G': 0.064, 'F': 0.336, 'I': -0.063, 'H': -0.528, 'K': -0.923, 'M': -0.443, 'L': -0.099, 'N': 0.203, 'Q': 0.285, 'P': 0.539, 'S': -0.258, 'R': -0.824, 'T': 0.158, 'W': 0.256, 'V': 0.055, 'Y': 0.149}, 1: {'A': -0.279, 'C': 0.265, 'E': 1.035, 'D': 0.528, 'G': -0.102, 'F': -0.377, 'I': -0.217, 'H': 0.265, 'K': -0.1, 'M': -0.205, 'L': 0.022, 'N': 0.317, 'Q': -0.282, 'P': 0.783, 'S': -0.524, 'R': 0.525, 'T': -0.734, 'W': -0.039, 'V': -0.554, 'Y': -0.33}, 2: {'A': 0.27, 'C': 0.333, 'E': 0.52, 'D': 0.706, 'G': 0.394, 'F': -0.357, 'I': 0.09, 'H': -0.634, 'K': -0.829, 'M': -0.106, 'L': -0.085, 'N': -0.088, 'Q': 0.05, 'P': 0.297, 'S': 0.193, 'R': -1.171, 'T': 0.164, 'W': 0.221, 'V': 0.142, 'Y': -0.112}, 3: {'A': -0.086, 'C': -0.157, 'E': 0.163, 'D': 0.143, 'G': 0.162, 'F': -0.056, 'I': -0.003, 'H': 0.039, 'K': 0.0, 'M': -0.184, 'L': 0.058, 'N': -0.039, 'Q': 0.108, 'P': -0.054, 'S': -0.002, 'R': -0.099, 'T': 0.047, 'W': 0.061, 'V': -0.02, 'Y': -0.08}, 4: {'A': -0.13, 'C': 0.106, 'E': 0.119, 'D': 0.211, 'G': 0.058, 'F': -0.119, 'I': 0.015, 'H': -0.097, 'K': 0.066, 'M': -0.073, 'L': 0.087, 'N': 0.024, 'Q': -0.084, 'P': 0.082, 'S': -0.062, 'R': -0.134, 'T': -0.095, 'W': 0.17, 'V': -0.082, 'Y': -0.063}, 5: {'A': 0.061, 'C': 0.177, 'E': 0.163, 'D': 0.257, 'G': 0.013, 'F': -0.093, 'I': 0.016, 'H': -0.008, 'K': 0.118, 'M': -0.011, 'L': 0.071, 'N': -0.047, 'Q': 0.02, 'P': 0.033, 'S': -0.119, 'R': -0.169, 'T': -0.1, 'W': -0.181, 'V': 0.021, 'Y': -0.22}, 6: {'A': 0.035, 'C': -0.151, 'E': 0.406, 'D': 0.252, 'G': 0.094, 'F': 0.042, 'I': -0.213, 'H': -0.094, 'K': 0.199, 'M': -0.222, 'L': -0.101, 'N': 0.006, 'Q': -0.008, 'P': -0.336, 'S': 0.024, 'R': -0.073, 'T': 0.182, 'W': 0.071, 'V': -0.03, 'Y': -0.083}, 7: {'A': 0.074, 'C': 0.051, 'E': 0.012, 'D': 0.234, 'G': 0.084, 'F': -0.403, 'I': -0.053, 'H': -0.068, 'K': 0.199, 'M': 0.191, 'L': -0.068, 'N': 0.023, 'Q': 0.123, 'P': -0.276, 'S': -0.101, 'R': 0.088, 'T': 0.06, 'W': 0.103, 'V': -0.008, 'Y': -0.264}, 8: {'A': -0.774, 'C': 0.48, 'E': 0.401, 'D': 0.221, 'G': -0.307, 'F': 0.257, 'I': -0.309, 'H': 0.364, 'K': -1.245, 'M': -0.087, 'L': -0.209, 'N': 0.268, 'Q': 0.672, 'P': 0.083, 'S': 0.041, 'R': -0.03, 'T': 0.089, 'W': 0.613, 'V': -0.421, 'Y': -0.107}, -1: {'con': 4.25797}} | 2,303 | 2,303 | 0.393834 | 557 | 2,303 | 1.62298 | 0.319569 | 0.019912 | 0.011062 | 0.013274 | 0.030973 | 0 | 0 | 0 | 0 | 0 | 0 | 0.373057 | 0.161963 | 2,303 | 1 | 2,303 | 2,303 | 0.095337 | 0 | 0 | 0 | 0 | 0 | 0.079427 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e2f09dd3d80694fbb7d0da938c7d1ff6c6f57df7 | 153 | py | Python | 10/01/02/1.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | null | null | null | 10/01/02/1.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | 46 | 2017-06-30T22:19:07.000Z | 2017-07-31T22:51:31.000Z | 10/01/02/1.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | null | null | null | class Base:
def __del__(self): print('Base.__del__')
class Super(Base):
def __del__(self): print('Super.__del__'); super().__del__()
c = Super()
| 25.5 | 64 | 0.666667 | 21 | 153 | 3.904762 | 0.380952 | 0.170732 | 0.243902 | 0.341463 | 0.463415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.143791 | 153 | 5 | 65 | 30.6 | 0.625954 | 0 | 0 | 0 | 0 | 0 | 0.163399 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.8 | 0.4 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
390721d5a1dd00aa9e244c477c156080c9cf769e | 236 | py | Python | lib/nodes/ListNode.py | C1200/CorLang | 2cc9092a0f33173f2afeec066839b90fa3c62762 | [
"MIT"
] | 1 | 2022-02-05T01:23:43.000Z | 2022-02-05T01:23:43.000Z | lib/nodes/ListNode.py | C1200/CorLang | 2cc9092a0f33173f2afeec066839b90fa3c62762 | [
"MIT"
] | 1 | 2022-02-11T20:59:16.000Z | 2022-02-11T20:59:16.000Z | lib/nodes/ListNode.py | C1200/CorLang | 2cc9092a0f33173f2afeec066839b90fa3c62762 | [
"MIT"
] | null | null | null | class ListNode:
def __init__(self, list_elems, pos_start, pos_end):
self.list_elems = list_elems
self.pos_start = pos_start
self.pos_end = pos_end
def __repr__(self):
return f'{self.list_elems}' | 26.222222 | 55 | 0.652542 | 34 | 236 | 4 | 0.382353 | 0.264706 | 0.286765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258475 | 236 | 9 | 56 | 26.222222 | 0.777143 | 0 | 0 | 0 | 0 | 0 | 0.07173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.142857 | 0.571429 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1a4fcb64cd737f409ed175b2fc3a210d726956b4 | 3,379 | py | Python | oops_fhir/r4/code_system/structure_definition_kind.py | Mikuana/oops_fhir | 77963315d123756b7d21ae881f433778096a1d25 | [
"MIT"
] | null | null | null | oops_fhir/r4/code_system/structure_definition_kind.py | Mikuana/oops_fhir | 77963315d123756b7d21ae881f433778096a1d25 | [
"MIT"
] | null | null | null | oops_fhir/r4/code_system/structure_definition_kind.py | Mikuana/oops_fhir | 77963315d123756b7d21ae881f433778096a1d25 | [
"MIT"
] | null | null | null | from pathlib import Path
from fhir.resources.codesystem import CodeSystem
from oops_fhir.utils import CodeSystemConcept
__all__ = ["StructureDefinitionKind"]
_resource = CodeSystem.parse_file(Path(__file__).with_suffix(".json"))
class StructureDefinitionKind:
"""
StructureDefinitionKind
Defines the type of structure that a definition is describing.
Status: active - Version: 4.0.1
Copyright None
http://hl7.org/fhir/structure-definition-kind
"""
primitive_type = CodeSystemConcept(
{
"code": "primitive-type",
"definition": "A primitive type that has a value and an extension. These can be used throughout complex datatype, Resource and extension definitions. Only the base specification can define primitive types.",
"display": "Primitive Data Type",
}
)
"""
Primitive Data Type
A primitive type that has a value and an extension. These can be used throughout complex datatype, Resource and extension definitions. Only the base specification can define primitive types.
"""
complex_type = CodeSystemConcept(
{
"code": "complex-type",
"definition": "A complex structure that defines a set of data elements that is suitable for use in 'resources'. The base specification defines a number of complex types, and other specifications can define additional types. These factory do not have a maintained identity.",
"display": "Complex Data Type",
}
)
"""
Complex Data Type
A complex structure that defines a set of data elements that is suitable for use in 'resources'. The base specification defines a number of complex types, and other specifications can define additional types. These factory do not have a maintained identity.
"""
resource = CodeSystemConcept(
{
"code": "resource",
"definition": "A 'resource' - a directed acyclic graph of elements that aggregrates other types into an identifiable entity. The base FHIR resources are defined by the FHIR specification itself but other 'resources' can be defined in additional specifications (though these will not be recognised as 'resources' by the FHIR specification (i.e. they do not get end-points etc, or act as the targets of references in FHIR defined resources - though other specificatiosn can treat them this way).",
"display": "Resource",
}
)
"""
Resource
A 'resource' - a directed acyclic graph of elements that aggregrates other types into an identifiable entity. The base FHIR resources are defined by the FHIR specification itself but other 'resources' can be defined in additional specifications (though these will not be recognised as 'resources' by the FHIR specification (i.e. they do not get end-points etc, or act as the targets of references in FHIR defined resources - though other specificatiosn can treat them this way).
"""
logical = CodeSystemConcept(
{
"code": "logical",
"definition": "A pattern or a template that is not intended to be a real resource or complex type.",
"display": "Logical",
}
)
"""
Logical
A pattern or a template that is not intended to be a real resource or complex type.
"""
class Meta:
resource = _resource
| 42.2375 | 507 | 0.698431 | 429 | 3,379 | 5.4662 | 0.275058 | 0.01791 | 0.034115 | 0.037527 | 0.700213 | 0.700213 | 0.700213 | 0.700213 | 0.700213 | 0.700213 | 0 | 0.001552 | 0.237348 | 3,379 | 79 | 508 | 42.772152 | 0.908421 | 0.054158 | 0 | 0 | 0 | 0.083333 | 0.601687 | 0.011409 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1a8d3306e6736f6603ba1e7cd7e24323ae67c1e5 | 74 | py | Python | src/mta/model/svd.py | JalexChang/cross-media-attribution | 09a94774798c0d05d9142fde056de72e69872acb | [
"BSD-2-Clause"
] | null | null | null | src/mta/model/svd.py | JalexChang/cross-media-attribution | 09a94774798c0d05d9142fde056de72e69872acb | [
"BSD-2-Clause"
] | null | null | null | src/mta/model/svd.py | JalexChang/cross-media-attribution | 09a94774798c0d05d9142fde056de72e69872acb | [
"BSD-2-Clause"
] | null | null | null | import numpy
from mta.model.mf import MF
class SVD(MF):
pass | 14.8 | 28 | 0.635135 | 12 | 74 | 3.916667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.297297 | 74 | 5 | 29 | 14.8 | 0.903846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
1accd622b042048fdefd22fcaa4097cce7f44192 | 19 | py | Python | rl_trader/engine/platform_handler/test/get_candles.test.py | AlexandreMahdhaoui/rl_trader | 5bda02622c7e17c4e6f28a90c510cfe8f914f7a8 | [
"Apache-2.0"
] | null | null | null | rl_trader/engine/platform_handler/test/get_candles.test.py | AlexandreMahdhaoui/rl_trader | 5bda02622c7e17c4e6f28a90c510cfe8f914f7a8 | [
"Apache-2.0"
] | null | null | null | rl_trader/engine/platform_handler/test/get_candles.test.py | AlexandreMahdhaoui/rl_trader | 5bda02622c7e17c4e6f28a90c510cfe8f914f7a8 | [
"Apache-2.0"
] | null | null | null | # TODO: get_candles | 19 | 19 | 0.789474 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 19 | 1 | 19 | 19 | 0.823529 | 0.894737 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
201738bef1e9d1f63cecf9c45559e4d5b2c7deac | 86 | py | Python | ntc_rosetta_conf/usr_datastore.py | networktocode/ntc-rosetta-conf | 06c8028e0bbafdd97d15e14ca13faa2601345d8b | [
"Apache-2.0"
] | 5 | 2019-07-31T03:06:48.000Z | 2020-09-01T21:51:04.000Z | ntc_rosetta_conf/usr_datastore.py | networktocode/ntc-rosetta-conf | 06c8028e0bbafdd97d15e14ca13faa2601345d8b | [
"Apache-2.0"
] | 1 | 2020-12-14T15:02:05.000Z | 2020-12-14T15:02:05.000Z | ntc_rosetta_conf/usr_datastore.py | networktocode/ntc-rosetta-conf | 06c8028e0bbafdd97d15e14ca13faa2601345d8b | [
"Apache-2.0"
] | 1 | 2021-04-05T09:53:53.000Z | 2021-04-05T09:53:53.000Z | from jetconf.data import JsonDatastore
class UserDatastore(JsonDatastore):
pass
| 14.333333 | 38 | 0.802326 | 9 | 86 | 7.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151163 | 86 | 5 | 39 | 17.2 | 0.945205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
201e57cfd573e0e4976f75d0a29fef656b47e4ed | 13,781 | py | Python | overlap.py | chicm/atlas | 5135c1b30846ae0cc9bafc3c10c0938aee295679 | [
"MIT"
] | null | null | null | overlap.py | chicm/atlas | 5135c1b30846ae0cc9bafc3c10c0938aee295679 | [
"MIT"
] | null | null | null | overlap.py | chicm/atlas | 5135c1b30846ae0cc9bafc3c10c0938aee295679 | [
"MIT"
] | null | null | null |
import pandas as pd
def update_sub(submission_df):
submission_df.loc[submission_df['Id']=='a8d73536-bad8-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 23'
submission_df.loc[submission_df['Id']=='63ed01a4-bad7-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='84c046b4-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='69cbf89a-bace-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16 25'
submission_df.loc[submission_df['Id']=='b9a882d6-bacc-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='29d1d616-bacd-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '19'
submission_df.loc[submission_df['Id']=='f8f6566a-bac8-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='9edb2498-bad8-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '16'
submission_df.loc[submission_df['Id']=='8dd19ca8-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16 25'
submission_df.loc[submission_df['Id']=='adfa9e8e-bac6-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '5'
submission_df.loc[submission_df['Id']=='da5b852e-bacb-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '3'
submission_df.loc[submission_df['Id']=='df8d2780-bac8-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 25'
submission_df.loc[submission_df['Id']=='72a6fbf8-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 25'
submission_df.loc[submission_df['Id']=='f6b06252-bad6-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 16 25'
submission_df.loc[submission_df['Id']=='edb5b41e-bad0-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='0a96bf2c-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '4'
submission_df.loc[submission_df['Id']=='1f97ea4a-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 25'
submission_df.loc[submission_df['Id']=='2327a292-bac7-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '5 16'
submission_df.loc[submission_df['Id']=='10d4730a-bada-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='0b651912-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 25'
submission_df.loc[submission_df['Id']=='12dea42a-bacd-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16 17 18'
submission_df.loc[submission_df['Id']=='b43493dc-bac5-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '2 11'
submission_df.loc[submission_df['Id']=='ba6febf2-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 25'
submission_df.loc[submission_df['Id']=='58148166-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='79970da6-bac8-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '2 16'
submission_df.loc[submission_df['Id']=='c02bb81c-bac7-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '0 16 17 18'
submission_df.loc[submission_df['Id']=='fcd88d84-bad2-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 25'
submission_df.loc[submission_df['Id']=='59604078-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '16'
submission_df.loc[submission_df['Id']=='89b31fae-bad2-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='88b38a80-bac8-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '2 16'
submission_df.loc[submission_df['Id']=='adc182fa-bad2-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 21'
submission_df.loc[submission_df['Id']=='6a322caa-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '5'
submission_df.loc[submission_df['Id']=='869a7f8c-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 5'
submission_df.loc[submission_df['Id']=='b478cc78-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='e0f9483a-bacb-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 7'
submission_df.loc[submission_df['Id']=='9eafcf6a-bacd-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 21'
submission_df.loc[submission_df['Id']=='b7ae02d8-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 17 23'
submission_df.loc[submission_df['Id']=='dbdcd95c-bac6-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '5'
submission_df.loc[submission_df['Id']=='e7f56384-bad1-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '1'
submission_df.loc[submission_df['Id']=='43357408-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '17 25'
submission_df.loc[submission_df['Id']=='c5deab72-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 21'
submission_df.loc[submission_df['Id']=='b9acf08a-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='e8bae166-bad8-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '2 17'
submission_df.loc[submission_df['Id']=='5661665e-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '17 25'
submission_df.loc[submission_df['Id']=='9e6fe8be-bad2-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '4'
submission_df.loc[submission_df['Id']=='df533cce-bac7-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '17'
submission_df.loc[submission_df['Id']=='7c1f771c-bac7-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '25'
submission_df.loc[submission_df['Id']=='f8cd7738-bad0-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '25'
submission_df.loc[submission_df['Id']=='39508fe6-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='a56d3f98-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '9 10'
submission_df.loc[submission_df['Id']=='28601ba0-bac6-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '14'
submission_df.loc[submission_df['Id']=='8617f44e-baca-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0'
submission_df.loc[submission_df['Id']=='1144d38e-bacb-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '21 16'
submission_df.loc[submission_df['Id']=='201229ac-bad0-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '2'
submission_df.loc[submission_df['Id']=='0f3274c0-bada-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '13'
submission_df.loc[submission_df['Id']=='29414644-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '4 21 26'
submission_df.loc[submission_df['Id']=='92c7e608-bad5-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '25'
submission_df.loc[submission_df['Id']=='83509894-bad2-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '13'
submission_df.loc[submission_df['Id']=='af2c5f2e-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '25'
submission_df.loc[submission_df['Id']=='bf6c33d0-bad5-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '4'
submission_df.loc[submission_df['Id']=='9da67d5c-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14'
submission_df.loc[submission_df['Id']=='cbfe766e-bace-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14'
submission_df.loc[submission_df['Id']=='c1dc11c4-bacd-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0'
submission_df.loc[submission_df['Id']=='8f257b9c-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '21'
submission_df.loc[submission_df['Id']=='d69acc70-bac5-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '0'
submission_df.loc[submission_df['Id']=='94205e64-baca-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16'
submission_df.loc[submission_df['Id']=='55eb4db6-baca-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '22 16 25'
submission_df.loc[submission_df['Id']=='10e592a2-bac6-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '21 16 19 25'
submission_df.loc[submission_df['Id']=='e7a05526-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '9 10'
submission_df.loc[submission_df['Id']=='896007d6-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='111f3934-bad6-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '16 25'
submission_df.loc[submission_df['Id']=='aec4415e-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '2 16'
submission_df.loc[submission_df['Id']=='03f31e24-badb-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '16'
submission_df.loc[submission_df['Id']=='f60586ac-bac7-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '7 17'
submission_df.loc[submission_df['Id']=='fce301c8-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '5'
submission_df.loc[submission_df['Id']=='10748996-baca-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '15 25'
submission_df.loc[submission_df['Id']=='2367dd2c-bac6-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '21 11 16'
submission_df.loc[submission_df['Id']=='9d2d08b2-bada-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '12'
submission_df.loc[submission_df['Id']=='260a351a-bac7-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '17'
submission_df.loc[submission_df['Id']=='54138f64-bad7-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '17 25'
submission_df.loc[submission_df['Id']=='443b81cc-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '15'
submission_df.loc[submission_df['Id']=='72c9bb82-bac7-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '4 21 17'
submission_df.loc[submission_df['Id']=='be0cf5a8-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 23'
submission_df.loc[submission_df['Id']=='01835f6c-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '4'
submission_df.loc[submission_df['Id']=='74993d6e-bad8-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '16'
submission_df.loc[submission_df['Id']=='84ca8928-bad2-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '17'
submission_df.loc[submission_df['Id']=='8f8c19a6-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 25'
submission_df.loc[submission_df['Id']=='107d6830-bac6-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '0 19 25'
submission_df.loc[submission_df['Id']=='70f3e586-bacb-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 21'
submission_df.loc[submission_df['Id']=='a7e9e53a-bad1-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16 25'
submission_df.loc[submission_df['Id']=='aa45019c-bad2-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='4509520a-bad3-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '1 2'
submission_df.loc[submission_df['Id']=='ec087d1e-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16 25'
submission_df.loc[submission_df['Id']=='18d295a6-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '23'
submission_df.loc[submission_df['Id']=='db77c3dc-bacb-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '17 19'
submission_df.loc[submission_df['Id']=='a6c830fa-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '22'
submission_df.loc[submission_df['Id']=='0457b426-baca-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '2'
submission_df.loc[submission_df['Id']=='d79a1e12-bad1-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '16'
submission_df.loc[submission_df['Id']=='7929949a-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '17 25'
submission_df.loc[submission_df['Id']=='7d7565e2-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='d6a07ae2-bad1-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '5'
submission_df.loc[submission_df['Id']=='82c1d5f6-bacc-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 14 18'
submission_df.loc[submission_df['Id']=='bfdfc644-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '19'
submission_df.loc[submission_df['Id']=='74bdbb12-bace-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '7 25'
submission_df.loc[submission_df['Id']=='a4b77124-bad7-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '23'
submission_df.loc[submission_df['Id']=='27ca9b0e-bace-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '12'
submission_df.loc[submission_df['Id']=='9ded793a-bacb-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '2 4'
submission_df.loc[submission_df['Id']=='77161884-bad6-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '16'
submission_df.loc[submission_df['Id']=='17d8a71c-bacf-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '5'
submission_df.loc[submission_df['Id']=='c3c67272-bad7-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0'
submission_df.loc[submission_df['Id']=='b52035ba-bad1-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '3'
submission_df.loc[submission_df['Id']=='718ebf3e-bada-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '16 17 23'
submission_df.loc[submission_df['Id']=='530bfbea-baca-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 17 23'
submission_df.loc[submission_df['Id']=='c78652b6-bad6-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 25'
submission_df.loc[submission_df['Id']=='8316d286-bad6-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '6 21'
submission_df.loc[submission_df['Id']=='e1d58c82-bac6-11e8-b2b7-ac1f6b6435d0', 'Predicted'] = '0 16'
submission_df.loc[submission_df['Id']=='1a15e75a-bad5-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '0 16 17'
submission_df.loc[submission_df['Id']=='0f0ccc64-bad7-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 16 25'
submission_df.loc[submission_df['Id']=='b642cefc-bad7-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '0 16 25'
submission_df.loc[submission_df['Id']=='9f71c832-bacc-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='f665e29c-bad4-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='e75331f2-bad8-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '14 16'
submission_df.loc[submission_df['Id']=='be034880-bad0-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 16 19'
submission_df.loc[submission_df['Id']=='5d2711a6-bac9-11e8-b2b8-ac1f6b6435d0', 'Predicted'] = '14 16 25'
submission_df.loc[submission_df['Id']=='89975d50-bad7-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '15 25'
submission_df.loc[submission_df['Id']=='7fcba676-bad9-11e8-b2b9-ac1f6b6435d0', 'Predicted'] = '15 25'
def update_with_test_matches(df):
test_matches_df = pd.read_csv('test_matches.csv')
print(test_matches_df.head())
for row in test_matches_df[['Test', 'Target']].values:
#print(row)
df.loc[df['Id']==row[0], 'Predicted'] = row[1]
if __name__ == '__main__':
df = pd.read_csv('sub/res101_tune_lb_7132.csv')
update_sub(df)
#df.to_csv('sub/sub1_1206_update.csv', index=False)
update_with_test_matches(df)
df.to_csv('sub/res101_tune_lb_7132_update.csv', index=False)
| 95.041379 | 111 | 0.714897 | 1,833 | 13,781 | 5.21713 | 0.116749 | 0.317474 | 0.197637 | 0.329395 | 0.865314 | 0.753843 | 0.681167 | 0.618948 | 0.601694 | 0.542194 | 0 | 0.21084 | 0.086931 | 13,781 | 144 | 112 | 95.701389 | 0.549154 | 0.004354 | 0 | 0 | 0 | 0 | 0.475288 | 0.335107 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014493 | false | 0 | 0.007246 | 0 | 0.021739 | 0.007246 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
201e5b66bd8e9b30fda03b6a4fc52800d09a02a6 | 34 | py | Python | backend/perks/__init__.py | TiFu/runepicker-helper | dcbd7bcce06a80ed5c800b3aaca26d53e4cace8f | [
"MIT"
] | null | null | null | backend/perks/__init__.py | TiFu/runepicker-helper | dcbd7bcce06a80ed5c800b3aaca26d53e4cace8f | [
"MIT"
] | 1 | 2018-01-01T03:56:01.000Z | 2018-01-01T03:56:01.000Z | backend/perks/__init__.py | TiFu/runepicker-helper | dcbd7bcce06a80ed5c800b3aaca26d53e4cace8f | [
"MIT"
] | null | null | null | from .get_smarties import Smarties | 34 | 34 | 0.882353 | 5 | 34 | 5.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
645224f668fd220e619876740f0da08484ae5468 | 136 | py | Python | active_learning/scoring/__init__.py | theodore-ando/active-learning | c0be13cf93ee172ebb7eee2a87c390d209b12bd8 | [
"Apache-2.0"
] | 5 | 2018-08-30T18:55:00.000Z | 2019-04-11T02:20:06.000Z | active_learning/scoring/__init__.py | theodore-ando/active-learning | c0be13cf93ee172ebb7eee2a87c390d209b12bd8 | [
"Apache-2.0"
] | 1 | 2018-10-05T22:17:14.000Z | 2018-10-05T22:17:14.000Z | active_learning/scoring/__init__.py | theodore-ando/active-learning | c0be13cf93ee172ebb7eee2a87c390d209b12bd8 | [
"Apache-2.0"
] | 2 | 2018-09-19T20:45:40.000Z | 2022-03-31T06:57:30.000Z | from .information_density import information_density
from .marginal_entropy import marginal_entropy
from .probability import probability | 45.333333 | 52 | 0.897059 | 16 | 136 | 7.375 | 0.4375 | 0.305085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080882 | 136 | 3 | 53 | 45.333333 | 0.944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
646fb456c8fd33d0b670f8c05b1cf320bad90f6c | 50,784 | py | Python | ostap/fitting/toys.py | TatianaOvsiannikova/ostap | a005a78b4e2860ac8f4b618e94b4b563b2eddcf1 | [
"BSD-3-Clause"
] | 14 | 2017-03-24T12:38:08.000Z | 2022-02-21T05:00:57.000Z | ostap/fitting/toys.py | TatianaOvsiannikova/ostap | a005a78b4e2860ac8f4b618e94b4b563b2eddcf1 | [
"BSD-3-Clause"
] | 10 | 2019-03-08T18:48:42.000Z | 2022-03-22T11:59:48.000Z | ostap/fitting/toys.py | TatianaOvsiannikova/ostap | a005a78b4e2860ac8f4b618e94b4b563b2eddcf1 | [
"BSD-3-Clause"
] | 11 | 2017-03-23T15:29:58.000Z | 2022-02-21T05:03:57.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==========================================================================================
## @file ostap/fitting/toys.py
# Simple utilities to run fitting toys, Jackknife, bootstrap, etc...
# @date 2020-01-18
# @author Vanya BELYAEV Ivan.Belyaev@itep.ru
# =============================================================================
""" Simple utilities to run fitting toys, Jackknife, bootstrap, ..
"""
# =============================================================================
__author__ = 'Vanya BELYAEV Ivan.Belyaev@itep.ru'
__date__ = "2020-01-18"
__version__ = '$Revision$'
__all__ = (
"make_toys" , ## run fitting toys (the same PDF to generate and fit)
"make_toys2" , ## run fitting toys (separate models to generate and fit)
'make_jackknife' , ## run Jackknife analysis
'make_bootstrap' , ## run Bootstrapanalysis
"vars_transform" , ## helper fnuction to transform the variables
"print_stats" , ## print toys statistics
"print_jackknife" , ## print jackknife statistics
"print_bootstrap" , ## print bootstrap statistics
)
# =============================================================================
import ROOT
from builtins import range
from ostap.core.core import VE
# =============================================================================
# logging
# =============================================================================
from ostap.logger.logger import getLogger
if '__main__' == __name__ : logger = getLogger( 'ostap.fitting.toys' )
else : logger = getLogger( __name__ )
# =============================================================================
logger.debug ( 'Utilities to run fitting toys, Jackknife, bootstrap, ... ')
# ==============================================================================
## Technical transformation to the dictionary : { 'name' : float_value }
def vars_transform ( vars ) :
"""Technical transformation to the dictionary : `{ 'name' : float_value }`
"""
from ostap.core.ostap_types import dictlike_types
import ostap.fitting.roofitresult
import ostap.fitting.variables
result = {}
if isinstance ( vars , ROOT.RooFitResult ) :
rs = vars.dct_parsms ()
for p in rs : result [ p ] = float ( rs [ p ] )
elif isinstance ( vars , dictlike_types ) :
for p in vars : result [ p ] = float ( vars [ p ] )
else :
for p in vars :
if not isinstance ( p , ROOT.RooAbsCategory ) :
result [ p.GetName() ] = float ( p )
return result
# =============================================================================
## Prepare statistics from results of toys/jackknifes/boostrap studies
# @code
# results = ...
# statistics = make_stats ( results )
# @endcode
def make_stats ( results , fits = {} , covs = {} ) :
"""Prepare statistics from results of toys/jackknifes/boostrap studies
>>> results = ...
>>> statistics = make_stats ( results )
"""
from collections import defaultdict
from ostap.core.core import SE
stats = defaultdict ( SE )
for par in results :
pars = results [ par ]
for v in pars :
stats [ par ] += float ( v )
for k in fits :
stats ['- Status %s' % k ] = fits [ k ]
for k in covs :
stats ['- CovQual %s' % k ] = covs [ k ]
return stats
# =============================================================================
## print statistics of pseudoexperiments
def print_stats ( stats , ntoys = '' , logger = logger ) :
"""print statistics of pseudoexperiments
"""
table = [ ( 'Parameter' , '#', 'mean' , 'rms' , '%13s / %-13s' % ( 'min' , 'max' ) ) ]
def make_row ( c ) :
n = "{:^11}".format ( c.nEntries() )
mean = c.mean ()
mean = "%+13.6g +/- %-13.6g" % ( mean.value() , mean.error() )
rms = "%13.6g" % c.rms ()
minmax = "%+13.6g / %-+13.6g" % ( c.min() , c.max () )
return p , n , mean , rms , minmax
for p in sorted ( stats ) :
if p.startswith('pull:') : continue
c = stats [ p ]
table.append ( make_row ( c ) )
for p in sorted ( stats ) :
if not p.startswith('pull:') : continue
c = stats [ p ]
table.append ( make_row ( c ) )
if not ntoys :
ntoys = 0
for s in stats :
ntoys = max ( ntoys , stats[s].nEntries() )
import ostap.logger.table as Table
table = Table.table ( table ,
title = "Results of %s toys" % ntoys ,
alignment = 'lcccc' ,
prefix = "# " )
logger.info ( 'Results of %s toys:\n%s' % ( ntoys , table ) )
# =============================================================================
## Jackknife estimator from jackknife statistic
# @code
# statistics = ....
# jackknife = jackknife_estimator ( statistics )
# jackknife , theta_jack = jackknife_esiimator ( statistics , value )
# @endcode
def jackknife_statistics ( statistics , theta = None ) :
"""Jackknife estimator from jackknife statistic
>>> statistics = ....
>>> jacknife = jackknife_estimator ( statistics )
>>> jacknife , theta_corr , bias = jackknife_esiimator ( statistics , value )
"""
assert isinstance ( theta , VE ) or theta is None ,\
"jackknife_statistics: invalid type of ``value'' %s" % type ( value )
N = statistics . nEntries () ## number of jackknife samples
theta_dot = statistics . mean () ## mean over jackknife samples
jvar = statistics . variance () * ( N - 1 ) ## variance
jackknife = VE ( theta_dot.value () , jvar ) ## jackknife estimate
## get Jackknife estimator from statistics
if theta is None : return jackknife
bias = ( N - 1 ) * ( theta_dot.value() - theta.value() )
## theta corrected for the bias and with Jackknife variance
theta_jack = VE ( theta.value() - bias , jvar ) ## corrected value
return jackknife , theta_jack
# =============================================================================
## print Jackknife statistics
def print_jackknife ( fitresult ,
stats ,
morevars = {} ,
logger = logger ,
title = '' ) :
"""print Jackknife statistics
"""
header = ( 'Parameter' , 'theta' , 'theta_(.)' , 'theta_jack' , 'bias/sigma [%]' , 'error [%]' )
table = []
N = 0
for name in sorted ( stats ) :
if name in fitresult :
p = fitresult [ name ]
theta = p * 1.0
if not isinstance ( theta , VE ) or theta.cov2() <= 0 :
logger.warning ('print_jackknife: parameter "%s" is invalid in ``fitresult'', skip %s' % ( name , theta ) )
continue
elif name in morevars :
theta = morevars [ name ]
if not isinstance ( theta , VE ) or theta.cov2() <= 0 :
logger.warning ('print_jackknife: parameter "%s" is invalid in ``morevars'', skip %s' % ( name , theta ) )
continue
else :
continue
statistics = stats [ name ]
N = max ( N , statistics.nEntries() )
## jackknife estimates
jackknife , theta_jack = jackknife_statistics ( statistics , theta )
bias = theta_jack.value () - theta .value ()
scale = theta .error () / theta_jack.error ()
row = ( name ,
"%+13.6g +/- %-13.6g" % ( theta . value () , theta .error () ) ,
"%+13.6g +/- %-13.6g" % ( jackknife . value () , jackknife .error () ) ,
"%+13.6g +/- %-13.6g" % ( theta_jack . value () , theta_jack .error () ) ,
'%+6.2f' % ( bias / theta.error() * 100 ) ,
'%+6.2f' % ( scale * 100 - 100 ) )
table.append ( row )
for name in sorted ( stats ) :
if name in fitresult : continue
if name in morevars : continue
statistics = stats [ name ]
jackknife = jackknife_statistics ( statistics )
row = name , '' , "%+13.6g +/- %-13.6g" % ( jackknife . value () , jackknife .error () ) , '' , '' , ''
table.append ( row )
table = [ header ] + table
title = title if title else "Jackknife results (N=%d)" % N
import ostap.logger.table as Table
table = Table.table ( table ,
title = title ,
alignment = 'lcccccc' ,
prefix = "# " )
logger.info ( '%s:\n%s' % ( title , table ) )
# =============================================================================
## print Bootstrap statistics
def print_bootstrap ( fitresult ,
stats ,
morevars = {} ,
logger = logger ,
title = '' ) :
"""print Bootstrap statistics
"""
header = ( 'Parameter' , 'theta' , 'theta_boot' , 'bias/sigma [%]' , 'error [%]' )
table = []
n = 0
for name in sorted ( stats ) :
if name in fitresult :
p = fitresult [ name ]
theta = p * 1.0
if not isinstance ( theta , VE ) or theta.cov2() <= 0 :
logger.warning ('print_bootstrap: parameter "%s" is invalid in ``fitresult'', skip %s' % ( name , theta ) )
continue
elif name in morevars :
theta = morevars [ name ]
if not isinstance ( theta , VE ) or theta.cov2() <= 0 :
logger.warning ('print_bootstrap: parameter "%s" is invalid in ``morevars'', skip %s' % ( name , theta ) )
continue
else :
continue
statistics = stats [ name ]
n = max ( n , statistics.nEntries() )
theta_boot = VE ( statistics.mean().value() , statistics.mu2() )
bias = theta_boot.value () - theta .value ()
scale = theta .error () / theta_boot.error ()
row = ( name ,
"%+13.6g +/- %-13.6g" % ( theta . value () , theta .error () ) ,
"%+13.6g +/- %-13.6g" % ( theta_boot . value () , theta_boot .error () ) ,
'%+6.2f' % ( bias / theta.error() * 100 ) ,
'%+6.2f' % ( scale * 100 - 100 ) )
table.append ( row )
for name in sorted ( stats ) :
if name in fitresult : continue
if name in morevars : continue
statistics = stats [ name ]
theta_boot = VE ( statistics.mean().value() , statistics.mu2() )
row = name , '', "%+13.6g +/- %-13.6g" % ( theta_boot . value () , theta_boot .error () ) , '' , ''
table.append ( row )
table = [ header ] + table
title = title if title else "Bootstrapping with #%d samples" % n
import ostap.logger.table as Table
table = Table.table ( table ,
title = title ,
alignment = 'lcccc' ,
prefix = "# " )
logger.info ( '%s:\n%s' % ( title , table ) )
# ==============================================================================
## Default function to generate the data
# - simple call for <code>PDF.generate</code>
def generate_data ( pdf , varset , **config ) :
"""Default function to generate the data
- simple call for `PDF.generate`
"""
return pdf.generate ( varset = varset , **config )
# ==============================================================================
## Default function to perform the actual fit
# - simple call for <code>PDF.fitTo</code>
def make_fit ( pdf , dataset , **config ) :
"""Default function to perform the actual fit
- simple call for `PDF.fitTo`
"""
result , _ = pdf.fitTo ( dataset , **config )
return result
# ==============================================================================
## Accept fit?
# Accept the fit result?
# - valid fit result
# - fit status is 0 (SUCCESS)
# - covariance matrix quality is either 3(full accurate matrix) or -1 (unknown/externbally provided?)
# @param result fit result
# @param pdf pdf
# @param dataset pdf
#
def accept_fit ( result , pdf = None , dataset = None ) :
"""Accept the fit result?
- valid fit result
- fit status is 0 (SUCCESS)
- covariance matrix quality is either 0 or -1
"""
return result and ( 0 == result.status () ) and ( result.covQual () in ( -1 , 3 ) )
# ==============================================================================
## make <code>nToys</code> pseudoexperiments
#
# Schematically:
# @code
# for toy in range ( nToys ) :
# ... dataset = gen_fun ( pdf , ... , **gen_config )
# ... result = fit_fun ( pdf , dataset , **fit_config )
# ... if not accept_fun ( result , pdf , dataset ) : continue
# .... < collect statistics here >
# @endcode
#
# For each experiment
# - generate dataset using <code>pdf</code> with variables specified
# in <code>data</code> and configuration specified via<code>gen_config</code>
# for each generation the parameters of <code>pdf</code> are reset
# for their initial values and values from <code>init_pars</code>
# - fit generated dataset with <code>pdf</code> using configuration
# specified via <code>fit_config</code>
#
# @code
# pdf = ...
# results , stats = make_toys ( pdf , ## PDF to use
# nToys = 1000 , ## Number of pseudoexperiments
# data = [ 'mass' ] , ## variables in dataset
# gen_config = { 'nEvents' : 5000 } , ## configuration of <code>pdf.generate</code>
# fit_config = { 'ncpus' : 2 } , ## configuration of <code>pdf.fitTo</code>
# init_pars = { 'mean' : 0.0 , 'sigma' : 1.0 } ) ## parameters to use for generation
# @endcode
#
# Derived parameters can be also retrived via <code>more_vars</code> argument:
# @code
# ratio = lambda res,pdf : res.ratio('x','y')
# more_vars = { 'Ratio' : ratio }
# r, s = make_toys ( .... , more_vars = more_vars , ... )
# @endcode
#
# @param pdf PDF to be used for generation and fitting
# @param nToys number of pseudoexperiments to generate
# @param data variable list of variables to be used for dataset generation
# @param gen_config configuration of <code>pdf.generate</code>
# @param fit_config configuration of <code>pdf.fitTo</code>
# @param init_pars redefine these parameters for each pseudoexperiment
# @param more_vars calculate more variables form fit-result
# @param get_fun specific generate action (if needed)
# @param fit_fun specific fitting action (if needed)
# @param accept_fun specific accept action (if needed)
# @param silent silent toys?
# @param progress show the progress?
# @param logger use this logger
# @param frequency how often to dump the intermediate results ?
# @return dictionary with fit results for the toys and the dictionary of statistics
#
# - If <code>gen_fun</code> is not specified <code>generate_data</code> is used
# - If <code>fit_fun</code> is not specified <code>make_fit</code> is used
# - If <code>accept_fun</code> is not specified <code>accept_fit</code> is used
def make_toys ( pdf ,
nToys ,
data , ## template for dataset/variables
gen_config , ## parameters for <code>pdf.generate</code>
fit_config = {} , ## parameters for <code>pdf.fitTo</code>
init_pars = {} ,
more_vars = {} ,
gen_fun = None , ## generator function ( pdf , varset , **config )
fit_fun = None , ## fit function ( pdf , dataset , **config )
accept_fun = None , ## accept function ( fit-result, pdf, dataset )
silent = True ,
progress = True ,
logger = logger ,
frequency = 1000 ) : ##
"""Make `nToys` pseudoexperiments
- Schematically:
>>> for toy in range ( nToys ) :
>>> ... dataset = gen_fun ( pdf , ... , **gen_config )
>>> ... result = fit_fun ( pdf , dataset , **fit_config )
>>> ... if not accept_fun ( result , pdf , dataset ) : continue
>>> .... < collect statistics here >
For each pseudoexperiment:
1. generate dataset using `pdf` with variables specified
in `data` and configuration specified via `gen_config`
for each generation the parameters of `pdf` are reset
for their initial values and valeus from `init_pars`
2. fit generated dataset with `pdf` using configuration
specified via `fit_config`
- `pdf` : PDF to be used for generation and fitting
- `nToys` : number of pseudoexperiments to generate
- `data` : variable list of variables to be used for dataset generation
- `gen_config` : configuration of <code>pdf.generate</code>
- `fit_config` : configuration of <code>pdf.fitTo</code>
- `init_pars` : redefine these parameters for each pseudoexperiment
- `more_vars` : dictionary of functions to define the additional results
- `gen_fun` : generator function
- `fit_fun` : fitting function
- `accept_fun` : accept function
- `silent` : silent toys?
- `progress` : show progress bar?
- `logger` : use this logger
- `frequency` : how often to dump the intermediate results ?
It returns a dictionary with fit results for the toys and a dictionary of statistics
>>> pdf = ...
... results, stats = make_toys ( pdf , ## PDF to use
... 1000 , ## number of toys
... [ 'mass' ] , ## variables in dataset
... { 'nEvents' : 5000 } , ## configuration of `pdf.generate`
... { 'ncpus' : 2 } , ## configuration of `pdf.fitTo`
... { 'mean' : 0.0 , 'sigma' : 1.0 } ## parameters to use for generation
... )
Derived parameters can be also retrived via <code>more_vars</code> argument:
>>> ratio = lambda res,pdf : res.ratio('x','y')
>>> more_vars = { 'Ratio' : ratio }
>>> r, s = make_toys ( .... , more_vars = more_vars , ... )
- If `gen_fun` is not specified `generate_data` is used
- If `fit_fun` is not specified `make_fit` is used
- If `accept_fun` is not specified `accept_fit` is used
"""
from ostap.core.ostap_types import string_types, integer_types
assert isinstance ( nToys , integer_types ) and 0 < nToys,\
'Invalid "nToys" argument %s/%s' % ( nToys , type ( nToys ) )
assert gen_config and 'nEvents' in gen_config,\
'Number of events per toy must be specified via "gen_config" %s' % gen_config
## 1. generator function?
if gen_fun is None :
if not silent : logger.info ( "make_toys: use default ``generate_data'' function!")
gen_fun = generate_data
assert gen_fun and callable ( gen_fun ) , 'Invalid generator function!'
## 2. fitting function?
if fit_fun is None :
if not silent : logger.info ( "make_toys: use default ``make_fit'' function!")
fit_fun = make_fit
assert fit_fun and callable ( fit_fun ) , 'Invalid fit function!'
## 3. accept function?
if accept_fun is None :
if not silent : logger.info ( "make_toys: use default ``accept_fit'' function!")
accept_fun = accept_fit
assert accept_fun and callable ( accept_fun ) , 'Invalid accept function!'
if progress and not silent :
assert isinstance ( frequency , integer_types ) and 0 < frequency,\
"make_toys: invalid ``frequency'' parameter %s" % frequency
import ostap.fitting.roofit
import ostap.fitting.dataset
import ostap.fitting.variables
import ostap.fitting.roofitresult
import ostap.fitting.basic
params = pdf.params ()
varset = ROOT.RooArgSet()
if isinstance ( data , ROOT.RooAbsData ) : varset = data.varset()
else :
for v in data :
if isinstance ( v , ROOT.RooAbsArg ) :
varset.add ( v )
elif isinstance ( v , string_types ) and v in params :
varset.add ( params [ v ] )
else :
raise TypeError('Invalid variable %s/%s' % ( v , type ( v ) ) )
fix_pars = vars_transform ( params )
fix_init = vars_transform ( init_pars )
pdf.load_params ( params = fix_pars , silent = silent )
pdf.load_params ( params = fix_init , silent = silent )
## save all initial parameters (needed for the final statistics)
params = pdf.params ()
fix_all = vars_transform ( params )
fitcnf = {}
fitcnf.update ( fit_config )
if not 'silent' in fitcnf : fitcnf [ 'silent' ] = silent
from collections import defaultdict
results = defaultdict(list)
from ostap.core.core import SE, VE
fits = defaultdict ( SE ) ## fit statuses
covs = defaultdict ( SE ) ## covariance matrix quality
## run pseudoexperiments
from ostap.utils.progress_bar import progress_bar
for i in progress_bar ( range ( nToys ) , silent = not progress ) :
## 1. reset PDF parameters
pdf.load_params ( params = fix_pars , silent = silent )
pdf.load_params ( params = init_pars , silent = silent )
## 2. generate dataset!
## dataset = pdf.generate ( varset = varset , **gen_config )
dataset = gen_fun ( pdf , varset = varset , **gen_config )
if not silent : logger.info ( 'Generated dataset #%d\n%s' % ( i , dataset ) )
## 3. fit it!
r = fit_fun ( pdf , dataset , **fitcnf )
## fit status
fits [ r.status () ] += 1
## covariance matrix quality
covs [ r.covQual () ] += 1
## ok ?
if accept_fun ( r , pdf , dataset ) :
## 4. save results
rpf = r.params ( float_only = True )
for p in rpf :
results [ p ].append ( rpf [ p ][0] )
for v in more_vars :
func = more_vars[v]
results [ v ] .append ( func ( r , pdf ) )
results [ '#' ] .append ( len ( dataset ) )
results [ '#sumw' ] .append ( dataset.sumVar ( '1' ) )
dataset.clear()
del dataset
del r
if progress or not silent :
if 0 < frequency and 1 <= i and 0 == ( i + 1 ) % frequency :
stats = make_stats ( results , fits , covs )
print_stats ( stats , i + 1 , logger = logger )
## make a final statistics
stats = make_stats ( results , fits , covs )
if progress or not silent :
print_stats ( stats , nToys , logger = logger )
return results, stats
# =============================================================================
## make <code>nToys</code> pseudoexperiments
#
# Schematically:
# @code
# for toy in range ( nToys ) :
# ... dataset = gen_fun ( gen_pdf , ... , **gen_config )
# ... result = fit_fun ( fit_pdf , dataset , **fit_config )
# ... if not accept_fun ( result , fit_pdf , dataset ) : continue
# .... < collect statistics here >
# @endcode
#
# For each experiment
# - generate dataset using <code>pdf</code> with variables specified
# in <code>data</code> and configuration specified via<code>gen_config</code>
# for each generation the parameters of <code>pdf</code> are reset
# for their initial values and values from <code>init_pars</code>
# - fit generated dataset with <code>pdf</code> using configuration
# specified via <code>fit_config</code>
#
# @code
# gen_pdf = ... ## PDF to use to generate pseudoexperiments
# fit_pdf = ... ## PDF to use to fit pseudoexperiments
# results , stats = make_toys2(
# gen_pdf = gen_pdf , ## PDF to use to generate pseudoexperiments
# fit_pdf = fit_pdf , ## PDF to use to fit pseudoexperiments
# nToys = 1000 , ## number of pseudoexperiments
# data = [ 'mass' ] , ## variables in dataset
# gen_config = { 'nEvents' : 5000 } , ## configuration of <code>pdf.generate</code>
# fit_config = { 'ncpus' : 2 } , ## configuration of <code>pdf.fitTo</code>
# gen_pars = { 'mean' : 0.0 , 'sigma' : 1.0 } ## parameters to use for generation
# )
# @endcode
#
# Derived parameters can be also retrived via <code>more_vars</code> argument:
# @code
# ratio = lambda res,pdf : res.ratio('x','y')
# more_vars = { 'Ratio' : ratio }
# r, s = make_toys2 ( .... , more_vars = more_vars , ... )
# @code
#
# @param gen_pdf PDF to be used for generation
# @param fit_pdf PDF to be used for fitting
# @param nToys number of pseudoexperiments to generate
# @param data variable list of variables to be used for dataset generation
# @param gen_config configuration of <code>pdf.generate</code>
# @param fit_config configuration of <code>pdf.fitTo</code>
# @param gen_pars redefine these parameters for each pseudoexperiment
# @param fit_pars redefine these parameters for each pseudoexperiment
# @param more_vars calculate more variables form fit-result
# @param gen_fun specific generate action (if needed)
# @param fit_fun specific fitting action (if needed)
# @param accept_fun specific accept action (if needed)
# @param silent silent toys?
# @param progress show progress bar?
# @param logger logger
# @param frequency how often to dump the intermediate results ?
# @return dictionary with fit results for the toys and the dictionary of statistics
#
# - If <code>gen_fun</code> is not specified <code>generate_data</code> is used
# - If <code>fit_fun</code> is not specified <code>make_fit</code> is used
# - If <code>accept_fun</code> is not specified <code>accept_fit</code> is used
def make_toys2 ( gen_pdf , ## pdf to generate toys
fit_pdf , ## pdf to fit
nToys , ## number of pseudoexperiments
data , ## template for dataset/variables
gen_config , ## parameters for <code>pdf.generate</code>
fit_config = {} , ## parameters for <code>pdf.fitTo</code>
gen_pars = {} , ## gen-parameters to reset/use
fit_pars = {} , ## fit-parameters to reset/use
more_vars = {} , ## additional results to be calculated
gen_fun = None , ## generator function ( pdf , varset , **gen_config )
fit_fun = None , ## fit function ( pdf , dataset , **fit_config )
accept_fun = None , ## accept function ( fit-result, pdf, dataset )
silent = True ,
progress = True ,
logger = logger ,
frequency = 1000 ) :
"""Make `ntoys` pseudoexperiments
- Schematically:
>>> for toy in range ( nToys ) :
>>> ... dataset = gen_fun ( gen_pdf , ... , **gen_config )
>>> ... result = fit_fun ( fit_pdf , dataset , **fit_config )
>>> ... if not accept_fun ( result , fit_pdf , dataset ) : continue
>>> .... < collect statistics here >
For each experiment:
1. generate dataset using `pdf` with variables specified
in `data` and configuration specified via `gen_config`
for each generation the parameters of `pdf` are reset
for their initial values and valeus from `init_pars`
2. fit generated dataset with `pdf` using configuration
specified via `fit_config`
- `pdf` : PDF to be used for generation and fitting
- `nToys` : number of pseudoexperiments to generate
- `data` : variable list of variables to be used for dataset generation
- `gen_config` : configuration of <code>pdf.generate</code>
- `fit_config` : configuration of <code>pdf.fitTo</code>
- `gen_pars` : redefine these parameters for generation of each pseudoexperiment
- `fit_pars` : redefine these parameters for fit of each pseudoexperiment
- `silent` : silent toys?
- `progress` : show progress bar?
- `logger` : use this logger
- `frequency` : how often to dump the intermediate results ?
It returns a dictionary with fit results for the toys and a dictionary of statistics
>>> pdf = ...
... results, stats = make_toys ( pdf , ## PDF to use
... 1000 , ## number of toys
... [ 'mass' ] , ## varibales in dataset
... { 'nEvents' : 5000 } , ## configuration of `pdf.generate`
... { 'ncpus' : 2 } , ## configuration of `pdf.fitTo`
... { 'mean' : 0.0 , 'sigma' : 1.0 } ## parameters to use for generation
... )
"""
from ostap.core.ostap_types import string_types, integer_types
assert isinstance ( nToys , integer_types ) and 0 < nToys,\
'Invalid "nToys" argument %s/%s' % ( nToys , type ( nToys ) )
assert gen_config and 'nEvents' in gen_config,\
'Number of events per toy must be specified via "gen_config" %s' % gen_config
## 1. generator function?
if gen_fun is None :
if not silent : logger.info ( "make_toys2: use default ``generate_data'' function!")
gen_fun = generate_data
assert gen_fun and callable ( gen_fun ) , 'Invalid generator function!'
## 2. fitting function?
if fit_fun is None :
if not silent : logger.info ( "make_toys2: use default ``make_fit'' function!")
fit_fun = make_fit
assert fit_fun and callable ( fit_fun ) , 'Invalid fit function!'
## 3. accept function?
if accept_fun is None :
if not silent : logger.info ( "make_toys2: use default ``accept_fit'' function!")
accept_fun = accept_fit
assert accept_fun and callable ( accept_fun ) , 'Invalid accept function!'
if progress and not silent :
assert isinstance ( frequency , integer_types ) and 0 < frequency,\
"make_toys2: invalid ``frequency'' parameter %s" % frequency
import ostap.fitting.roofit
import ostap.fitting.dataset
import ostap.fitting.variables
import ostap.fitting.roofitresult
import ostap.fitting.basic
gparams = gen_pdf.params ()
varset = ROOT.RooArgSet ()
if isinstance ( data , ROOT.RooAbsData ) : varset = data.varset()
else :
for v in data :
if isinstance ( v , ROOT.RooAbsArg ) :
varset.add ( v )
elif isinstance ( v , string_types ) and v in gparams :
varset.add ( gparams [ v ] )
else :
raise TypeError('Invalid variable %s/%s' % ( v , type ( v ) ) )
## parameters for generation
fix_gen_init = vars_transform ( gparams )
fix_gen_pars = vars_transform ( gen_pars )
## parameters for fitting
fparams = fit_pdf.params ()
fix_fit_init = vars_transform ( fparams )
fix_fit_pars = vars_transform ( fit_pars )
fitcnf = {}
fitcnf.update ( fit_config )
if not 'silent' in fitcnf : fitcnf [ 'silent' ] = silent
from collections import defaultdict
results = defaultdict(list)
from ostap.core.core import SE
fits = defaultdict ( SE ) ## fit statuses
covs = defaultdict ( SE ) ## covarinace matrix quality
## run pseudoexperiments
from ostap.utils.progress_bar import progress_bar
for i in progress_bar ( range ( nToys ) , silent = not progress ) :
## 1. reset PDF parameters
gen_pdf.load_params ( params = fix_gen_init , silent = silent )
gen_pdf.load_params ( params = fix_gen_pars , silent = silent )
## 2. generate dataset!
dataset = gen_fun ( gen_pdf , varset = varset , **gen_config )
if not silent : logger.info ( 'Generated dataset #%d\n%s' % ( i , dataset ) )
## 3. reset parameters of fit_pdf
fit_pdf.load_params ( params = fix_fit_init , silent = silent )
fit_pdf.load_params ( params = fix_fit_pars , silent = silent )
## 4. fit it!
r = fit_fun ( fit_pdf , dataset , **fitcnf )
## fit status
fits [ r.status () ] += 1
## covariance matrix quality
covs [ r.covQual () ] += 1
## ok ?
if accept_fun ( r , fit_pdf , dataset ) :
## 5. save results
rpf = r.params ( float_only = True )
for j in rpf :
results [ j ].append ( rpf [ j ] [ 0 ] )
for v in more_vars :
func = more_vars[v]
results [ v ] .append ( func ( r , fit_pdf ) )
results [ '#' ] .append ( len ( dataset ) )
results [ '#sumw' ] .append ( dataset.sumVar ( '1' ) )
dataset.clear()
del dataset
if progress or not silent :
if 0 < frequency and 1 <= i and 0 == ( i + 1 ) % frequency :
stats = make_stats ( results , fits , covs )
print_stats ( stats , i + 1 , logger = logger )
## make a final statistics
stats = make_stats ( results , fits , covs )
if progress or not silent :
print_stats ( stats , nToys , logger = logger )
return results, stats
# =============================================================================
## run Jackknife analysis, useful for evaluaton of fit biases and uncertainty estimates
#
# For each <code>i</code> remove event with index <code>i</code> from the dataset,
# and refit it.
# @code
# dataset = ...
# model = ...
# r , f = model.fitTo ( dataset , .... ) ## fit the whole dataset
# results, stats = make_jackknife ( model , data ) ## run Jackknife
# print_jackknife ( r , stats ) ## print summary table
# @endcode
# @see printJackknife
#
# Derived parameters can be also retrived via <code>more_vars</code> argument:
# @code
# ratio = lambda res,pdf : res.ratio('x','y')
# more_vars = { 'Ratio' : ratio }
# r, s = make_jackknife ( .... , more_vars = more_vars , ... )
# @endcode
#
# @see https://en.wikipedia.org/wiki/Jackknife_resampling
# @param pdf fit model
# @param data original dataset
# @param fit_config configuration of <code>pdf.FitTo( data , ... )</code>
# @param fit_pars redefine these parameters before each fit
# @param more_vars calculate more variables from the fit-results
# @param fit_fun fitting function
# @param accept_fun accept function
# @param event_range event range to use for jackknife
# @param silent silent processing
# @param progress show progress bar?
# @param logger use this logger
# @param frequency how often to dump the intermediate results ?
# @return statistics of jackknife experiments
def make_jackknife ( pdf ,
data ,
fit_config = {} , ## parameters for <code>pdf.fitTo</code>
fit_pars = {} , ## fit-parameters to reset/use
more_vars = {} , ## additional results to be calculated
fit_fun = None , ## fit function ( pdf , dataset , **fit_config )
accept_fun = None , ## accept function ( fit-result, pdf, dataset )
event_range = () , ## event range for jackknife
silent = True ,
progress = True ,
logger = logger ,
frequency = 100 ) :
"""Run Jackknife analysis, useful for evaluaton of fit biased and uncertainty estimates
For each <code>i</code> remove event with index <code>i</code> from the dataset, and refit it.
>>> dataset = ...
>>> model = ...
>>> r , f = model.fitTo ( dataset , .... ) ## fit the whole dataset
>>> results, stats = make_jackknife ( model , data ) ## run Jackknife
>>> print_jackknife ( r , stats ) ## print summary table
- see https://en.wikipedia.org/wiki/Jackknife_resampling
- see print_jackknife
- see jackknife_statistics
- `pdf` : fit model
- `data` : original dataset
- `fit_config` : configuration of `pdf.FitTo( data , ... )`
- `fit_pars` : redefine these parameters before each fit
- `more_vars` : calculate more variables from the fit-results
- `fit_fun` : specific fitting acion (if needed)
- `accept_fun` : specific accept action (if needed)
- `event_range` : event range to use for jackknife
- `silent` : silent processing?
- `progress` : show progress bar?
- `logger` : use this logger
- `frequency` : how often to dump the intermediate results ?
"""
N = len ( data )
assert 1 < N , 'make_jackknife: invalid dataset size %s' % N
if not event_range : event_range = 0 , N
assert 2 == len ( event_range ) , 'make_jackknife: invalid event range %s ' % str ( event_range )
begin , end = event_range
## check begin/end range
assert 0 <= begin and begin < end and begin < N , 'make_jackknife: invalid event range (%s,%s)/%d' % ( begin , end , N )
## adjust the end
end = min ( end , N )
## 1. fitting function?
if fit_fun is None :
if not silent : logger.info ( "make_jackknife: use default ``make_fit'' function!")
fit_fun = make_fit
assert fit_fun and callable ( fit_fun ) , 'Invalid fit function!'
## 2. accept function?
if accept_fun is None :
if not silent : logger.info ( "make_jackknife: use default ``accept_fit'' function!")
accept_fun = accept_fit
assert accept_fun and callable ( accept_fun ) , 'Invalid accept function!'
if progress and not silent :
assert isinstance ( frequency , integer_types ) and 0 < frequency,\
"make_makejackknife: invalid ``frequency'' parameter %s" % frequency
import ostap.fitting.roofit
import ostap.fitting.dataset
import ostap.fitting.variables
import ostap.fitting.roofitresult
import ostap.fitting.basic
## parameters for fitting
fparams = pdf.params ()
fix_fit_init = vars_transform ( fparams )
fix_fit_pars = vars_transform ( fit_pars )
fitcnf = {}
fitcnf.update ( fit_config )
if not 'silent' in fitcnf : fitcnf [ 'silent' ] = silent
from collections import defaultdict
results = defaultdict(list)
from ostap.core.core import SE
fits = defaultdict ( SE ) ## fit statuses
covs = defaultdict ( SE ) ## covarinace matrix quality
## Fit the whole sample
pdf.load_params ( params = fix_fit_init , silent = silent )
pdf.load_params ( params = fix_fit_pars , silent = silent )
r_tot = fit_fun ( pdf , data , **fitcnf )
from ostap.utils.progress_bar import progress_bar
## run jackknife bootstrapping
for i , ds in progress_bar ( enumerate ( data.jackknife ( begin , end ) ) , max_value = end - begin , silent = not progress ) :
## 2. reset parameters of fit_pdf
pdf.load_params ( params = fix_fit_init , silent = silent )
pdf.load_params ( params = fix_fit_pars , silent = silent )
## 3. fit it!
r = fit_fun ( pdf , ds , **fitcnf )
## 4. fit status
fits [ r.status () ] += 1
## 5. covariance matrix quality
covs [ r.covQual () ] += 1
## ok ?
if accept_fun ( r , pdf , ds ) :
## 6. save results
rpf = r.params ( float_only = True )
for j in rpf :
results [ j ].append ( rpf [ j ] [ 0 ] )
## 7. more variables to be calculated?
for v in more_vars :
func = more_vars[v]
results [ v ] .append ( func ( r , pdf ) )
results [ '#' ] .append ( len ( ds ) )
results [ '#sumw' ] .append ( ds.sumVar ( '1' ) )
ds.clear()
if progress or not silent :
if 0 < frequency and 1 <= i and 0 == ( i + 1 ) % frequency :
stats = make_stats ( results , fits , covs )
print_stats ( stats , i + 1 , logger = logger )
## 8. make a final statistics
stats = make_stats ( results , fits , covs )
if progress or not silent :
## 9. fit total dataset (twice)
r_tot = fit_fun ( pdf , data , **fitcnf )
r_tot = fit_fun ( pdf , data , **fitcnf )
## 10. the final table
print_jackknife ( r_tot ,
stats ,
morevars = dict ( ( k , more_vars [ k ]( r_tot , pdf ) ) for k in more_vars ) ,
logger = logger )
return results , stats
# =============================================================================
## Run Bootstrap analysis, useful for evaluaton of fit biases and uncertainty estimates
#
# In total <code>size</code> datasets are sampled (with replacement) from the original dataste
# <code>data</code> and each sampled dataset is fit
# @code
# dataset = ...
# model = ...
# r , f = model.fitTo ( dataset , .... ) ## fit the whole dataset
# results, stats = make_bootstrap ( model , data , size = 1000 ) ## run Bootstrap
# print_bootstrap ( r , stats ) ## print summary table
# @endcode
# @see print_bootstrap
#
# Derived parameters can be also retrived via <code>more_vars</code> argument:
# @code
# ratio = lambda res,pdf : res.ratio('x','y')
# more_vars = { 'Ratio' : ratio }
# r, s = make_bootstrap ( .... , more_vars = more_vars , ... )
# @endcode
#
# @param pdf fit model
# @param data original dataset
# @param size number of datasets to sample
# @param fit_config configuration of <code>pdf.FitTo( data , ... )</code>
# @param fit_pars redefine these parameters before each fit
# @param more_vars calculate more variables from the fit-results
# @param fit_fun specific fitting action (if needed)
# @param accept_fun specific accept action (if needed)
# @param silent silent processing
# @param progress show progress bar?
# @param logger use this logger
# @param frequency how often dump the intermediate results?
# @return statistics of boostrap experiments
def make_bootstrap ( pdf ,
data ,
size = 100 , ## numbere of samples
fit_config = {} , ## parameters for <code>pdf.fitTo</code>
fit_pars = {} , ## fit-parameters to reset/use
more_vars = {} , ## additional results to be calculated
fit_fun = None , ## fit function ( pdf , dataset , **fit_config )
accept_fun = None , ## accept function ( fit-result, pdf, dataset )
silent = True , ## silent processing?
progress = True , ## shpow progress bar?
logger = logger , ## use this logger
frequency = 100 ) :
"""Run Bootstrap analysis, useful for evaluaton of fit biased and uncertainty estimates
In total `size` datasets are sampled (with replacement) from the original dataste
`data` and each sampled dataset is fit
>>> dataset = ...
>>> model = ...
>>> r , f = model.fitTo ( dataset , .... ) ## fit the whole dataset
>>> results, stats = make_bootstrap ( model , data , size = 1000 ) ## run Bootstrap
>>> print_bootstrap ( r , stats ) ## print summary table
- `pdf` : fit model
- `data` : original dataset
- `size` : number of datasets to sample
- `fit_config` : configuration of `pdf.FitTo( data , ... )`
- `fit_pars` : redefine these parameters before each fit
- `more_vars` : calculate more variables from the fit-results
- `fit_fun` : specific fitting acion (if needed)
- `accept_fun` : specific accept action (if needed)
- `silent` : silent processing?
- `progress` : show progress bar?
- `logger` : use this logger
- `frequency` : how often dump the intermediate results?
"""
N = len ( data )
assert 1 < N , 'make_bootstrap: invalid dataset size %s' % N
from ostap.core.ostap_types import integer_types
assert isinstance ( size , integer_types ) and 0 < size, \
"make_bootstrap: invalid ``size'' parameter %s" % size
## 1. fitting function?
if fit_fun is None :
if not silent : logger.info ( "make_bootstrap: use default ``make_fit'' function!")
fit_fun = make_fit
assert fit_fun and callable ( fit_fun ) , 'Invalid fit function!'
## 2. accept function?
if accept_fun is None :
if not silent : logger.info ( "make_bootstrap: use default ``accept_fit'' function!")
accept_fun = accept_fit
assert accept_fun and callable ( accept_fun ) , 'Invalid accept function!'
if progress and not silent :
assert isinstance ( frequency , integer_types ) and 0 < frequency,\
"make_bootstrap: invalid ``frequency'' parameter %s" % frequency
import ostap.fitting.roofit
import ostap.fitting.dataset
import ostap.fitting.variables
import ostap.fitting.roofitresult
import ostap.fitting.basic
## parameters for fitting
fparams = pdf.params ()
fix_fit_init = vars_transform ( fparams )
fix_fit_pars = vars_transform ( fit_pars )
fitcnf = {}
fitcnf.update ( fit_config )
if not 'silent' in fitcnf : fitcnf [ 'silent' ] = silent
from collections import defaultdict
results = defaultdict(list)
from ostap.core.core import SE
fits = defaultdict ( SE ) ## fit statuses
covs = defaultdict ( SE ) ## covarinace matrix quality
## fit original dataset
pdf.load_params ( params = fix_fit_init , silent = silent )
pdf.load_params ( params = fix_fit_pars , silent = silent )
r_tot = fit_fun ( pdf , data , **fitcnf )
from ostap.utils.progress_bar import progress_bar
## run jackknife bootstrapping
for i , ds in progress_bar ( enumerate ( data.bootstrap ( size ) ) , max_value = size , silent = not progress ) :
## 2. reset parameters of fit_pdf
pdf.load_params ( params = fix_fit_init , silent = silent )
pdf.load_params ( params = fix_fit_pars , silent = silent )
## 3. fit it!
r = fit_fun ( pdf , ds , **fitcnf )
## 4. fit status
fits [ r.status () ] += 1
## 5. covariance matrix quality
covs [ r.covQual () ] += 1
## ok ?
if accept_fun ( r , pdf , ds ) :
## 6. save results
rpf = r.params ( float_only = True )
for j in rpf :
results [ j ].append ( rpf [ j ] [ 0 ] )
## 7. more variables to be calculated?
for v in more_vars :
func = more_vars[v]
results [ v ] .append ( func ( r , pdf ) )
results [ '#' ] .append ( len ( ds ) )
results [ '#sumw' ] .append ( ds.sumVar ( '1' ) )
ds.clear()
if progress or not silent :
if 0 < frequency and 1 <= i and 0 == ( i + 1 ) % frequency :
stats = make_stats ( results , fits , covs )
## print_stats ( stats , i + 1 , logger = logger )
print_bootstrap ( r_tot ,
stats ,
morevars = dict ( ( k , more_vars [ k ] ( r_tot , pdf ) ) for k in more_vars ),
logger = logger )
## 8. make a final statistics
stats = make_stats ( results , fits , covs )
if progress or not silent :
## 9. fit total dataset (twice)
r_tot = fit_fun ( pdf , data , **fitcnf )
r_tot = fit_fun ( pdf , data , **fitcnf )
## 10. the final table
print_bootstrap ( r_tot ,
stats ,
morevars = dict ( ( k , more_vars [ k ]( r_tot , pdf ) ) for k in more_vars ),
logger = logger )
return results , stats
# =============================================================================
if '__main__' == __name__ :
from ostap.utils.docme import docme
docme ( __name__ , logger = logger )
# =============================================================================
## The END
# =============================================================================
| 41.355049 | 131 | 0.530285 | 5,466 | 50,784 | 4.819978 | 0.067508 | 0.013664 | 0.015031 | 0.011539 | 0.83136 | 0.809838 | 0.79086 | 0.762165 | 0.727549 | 0.705306 | 0 | 0.009143 | 0.332349 | 50,784 | 1,227 | 132 | 41.388753 | 0.767888 | 0.430451 | 0 | 0.687037 | 0 | 0 | 0.092601 | 0.000757 | 0 | 0 | 0 | 0 | 0.044444 | 1 | 0.025926 | false | 0 | 0.088889 | 0 | 0.135185 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6479ef1be3bcc7e640dc472bcf3f1bdc2b30ed05 | 31 | py | Python | src/test/anovos/drift/test_distances.py | ziedbouf/anovos | 4cd149fe803f8cec7d49cf1d2ebff5abf6b362ce | [
"Apache-2.0"
] | 60 | 2021-11-15T22:30:57.000Z | 2022-03-31T08:13:27.000Z | src/test/anovos/drift/test_distances.py | ziedbouf/anovos | 4cd149fe803f8cec7d49cf1d2ebff5abf6b362ce | [
"Apache-2.0"
] | 62 | 2021-11-15T17:27:56.000Z | 2022-03-28T20:12:56.000Z | src/test/anovos/drift/test_distances.py | ziedbouf/anovos | 4cd149fe803f8cec7d49cf1d2ebff5abf6b362ce | [
"Apache-2.0"
] | 15 | 2021-11-17T19:39:47.000Z | 2022-03-30T18:20:33.000Z | def test_hellinger():
pass
| 10.333333 | 21 | 0.677419 | 4 | 31 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.225806 | 31 | 2 | 22 | 15.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
648313225764536c3630d17a413a18ac011de8b5 | 170 | py | Python | tests/__init__.py | Alindil/python-plexapi | 737401be0e35d6bb5ffeeb1743255ac66b69feba | [
"BSD-3-Clause"
] | 749 | 2017-02-17T11:42:53.000Z | 2022-03-29T01:58:44.000Z | tests/__init__.py | Alindil/python-plexapi | 737401be0e35d6bb5ffeeb1743255ac66b69feba | [
"BSD-3-Clause"
] | 557 | 2017-02-15T04:15:54.000Z | 2022-03-30T11:20:54.000Z | tests/__init__.py | Alindil/python-plexapi | 737401be0e35d6bb5ffeeb1743255ac66b69feba | [
"BSD-3-Clause"
] | 165 | 2017-02-17T18:41:02.000Z | 2022-03-29T20:53:46.000Z | # -*- coding: utf-8 -*-
import sys
from os.path import dirname, abspath
# Make sure plexapi is in the systempath
sys.path.insert(0, dirname(dirname(abspath(__file__))))
| 24.285714 | 55 | 0.729412 | 26 | 170 | 4.615385 | 0.769231 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013605 | 0.135294 | 170 | 6 | 56 | 28.333333 | 0.802721 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
64b45be31c86d80d100d0dec5f73f5e0e1dc565a | 11,643 | py | Python | src/test/python/test_scc_control_codes.py | xchange11/ttconv-1 | 6e67172af126fa0e90690044848f300c0173715c | [
"BSD-2-Clause"
] | 66 | 2020-09-25T11:38:28.000Z | 2022-03-23T15:15:34.000Z | src/test/python/test_scc_control_codes.py | xchange11/ttconv-1 | 6e67172af126fa0e90690044848f300c0173715c | [
"BSD-2-Clause"
] | 217 | 2020-09-22T22:45:22.000Z | 2022-03-31T23:02:15.000Z | src/test/python/test_scc_control_codes.py | xchange11/ttconv-1 | 6e67172af126fa0e90690044848f300c0173715c | [
"BSD-2-Clause"
] | 5 | 2020-09-25T09:24:17.000Z | 2021-08-08T20:52:26.000Z | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
# Copyright (c) 2020, Sandflow Consulting LLC
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Unit tests for the SCC Control codes"""
# pylint: disable=R0201,C0115,C0116
import unittest
from ttconv.scc.codes.control_codes import SccControlCode
FIELD_1_CONTROL_CODE_VALUES = [0X1422, 0x1C22, 0X1423, 0x1C23, 0X1421, 0x1C21, 0X142D, 0x1C2D, 0X1424,
0x1C24, 0X142C, 0x1C2C, 0X142E, 0x1C2E, 0X142F, 0x1C2F, 0X1428, 0x1C28,
0X1429, 0x1C29, 0X142B, 0x1C2B, 0X1721, 0x1F21, 0X1722, 0x1F22, 0X1723,
0x1F23, 0X142A, 0x1C2A, 0X1420, 0x1C20, 0X1425, 0x1C25, 0X1426, 0x1C26,
0X1427, 0x1C27]
FIELD_2_CONTROL_CODE_VALUES = [0x1522, 0x1D22, 0x1523, 0x1D23, 0x1521, 0x1D21, 0x152D, 0x1D2D, 0x1524, 0x1D24,
0x152C, 0x1D2C, 0x152E, 0x1D2E, 0x152F, 0x1D2F, 0x1528, 0x1D28, 0x1529, 0x1D29,
0x152B, 0x1D2B, 0x1721, 0x1F21, 0x1722, 0x1F22, 0x1723, 0x1F23, 0x152A, 0x1D2A,
0x1520, 0x1D20, 0x1525, 0x1D25, 0x1526, 0x1D26, 0x1527, 0x1D27]
class SCCControlCodesTest(unittest.TestCase):
def test_scc_control_codes(self):
self.assertEqual(SccControlCode.AOF,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[0]))
self.assertEqual(SccControlCode.AOF,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[0]))
self.assertEqual(SccControlCode.AOF,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[1]))
self.assertEqual(SccControlCode.AOF,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[1]))
self.assertEqual(SccControlCode.AON,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[2]))
self.assertEqual(SccControlCode.AON,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[2]))
self.assertEqual(SccControlCode.AON,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[3]))
self.assertEqual(SccControlCode.AON,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[3]))
self.assertEqual(SccControlCode.BS,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[4]))
self.assertEqual(SccControlCode.BS,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[4]))
self.assertEqual(SccControlCode.BS,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[5]))
self.assertEqual(SccControlCode.BS,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[5]))
self.assertEqual(SccControlCode.CR,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[6]))
self.assertEqual(SccControlCode.CR,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[6]))
self.assertEqual(SccControlCode.CR,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[7]))
self.assertEqual(SccControlCode.CR,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[7]))
self.assertEqual(SccControlCode.DER,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[8]))
self.assertEqual(SccControlCode.DER,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[8]))
self.assertEqual(SccControlCode.DER,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[9]))
self.assertEqual(SccControlCode.DER,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[9]))
self.assertEqual(SccControlCode.EDM,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[10]))
self.assertEqual(SccControlCode.EDM,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[10]))
self.assertEqual(SccControlCode.EDM,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[11]))
self.assertEqual(SccControlCode.EDM,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[11]))
self.assertEqual(SccControlCode.ENM,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[12]))
self.assertEqual(SccControlCode.ENM,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[12]))
self.assertEqual(SccControlCode.ENM,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[13]))
self.assertEqual(SccControlCode.ENM,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[13]))
self.assertEqual(SccControlCode.EOC,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[14]))
self.assertEqual(SccControlCode.EOC,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[14]))
self.assertEqual(SccControlCode.EOC,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[15]))
self.assertEqual(SccControlCode.EOC,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[15]))
self.assertEqual(SccControlCode.FON,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[16]))
self.assertEqual(SccControlCode.FON,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[16]))
self.assertEqual(SccControlCode.FON,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[17]))
self.assertEqual(SccControlCode.FON,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[17]))
self.assertEqual(SccControlCode.RDC,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[18]))
self.assertEqual(SccControlCode.RDC,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[18]))
self.assertEqual(SccControlCode.RDC,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[19]))
self.assertEqual(SccControlCode.RDC,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[19]))
self.assertEqual(SccControlCode.RTD,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[20]))
self.assertEqual(SccControlCode.RTD,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[20]))
self.assertEqual(SccControlCode.RTD,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[21]))
self.assertEqual(SccControlCode.RTD,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[21]))
self.assertEqual(SccControlCode.TO1,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[22]))
self.assertEqual(SccControlCode.TO1,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[22]))
self.assertEqual(SccControlCode.TO1,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[23]))
self.assertEqual(SccControlCode.TO1,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[23]))
self.assertEqual(SccControlCode.TO2,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[24]))
self.assertEqual(SccControlCode.TO2,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[24]))
self.assertEqual(SccControlCode.TO2,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[25]))
self.assertEqual(SccControlCode.TO2,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[25]))
self.assertEqual(SccControlCode.TO3,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[26]))
self.assertEqual(SccControlCode.TO3,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[26]))
self.assertEqual(SccControlCode.TO3,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[27]))
self.assertEqual(SccControlCode.TO3,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[27]))
self.assertEqual(SccControlCode.TR,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[28]))
self.assertEqual(SccControlCode.TR,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[28]))
self.assertEqual(SccControlCode.TR,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[29]))
self.assertEqual(SccControlCode.TR,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[29]))
self.assertEqual(SccControlCode.RCL,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[30]))
self.assertEqual(SccControlCode.RCL,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[30]))
self.assertEqual(SccControlCode.RCL,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[31]))
self.assertEqual(SccControlCode.RCL,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[31]))
self.assertEqual(SccControlCode.RU2,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[32]))
self.assertEqual(SccControlCode.RU2,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[32]))
self.assertEqual(SccControlCode.RU2,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[33]))
self.assertEqual(SccControlCode.RU2,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[33]))
self.assertEqual(SccControlCode.RU3,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[34]))
self.assertEqual(SccControlCode.RU3,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[34]))
self.assertEqual(SccControlCode.RU3,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[35]))
self.assertEqual(SccControlCode.RU3,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[35]))
self.assertEqual(SccControlCode.RU4,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[36]))
self.assertEqual(SccControlCode.RU4,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[36]))
self.assertEqual(SccControlCode.RU4,
SccControlCode.find(FIELD_1_CONTROL_CODE_VALUES[37]))
self.assertEqual(SccControlCode.RU4,
SccControlCode.find(FIELD_2_CONTROL_CODE_VALUES[37]))
def test_scc_control_codes_invalid(self):
other_code_values = [code for code in range(0x0000, 0xFFFF) if
(code not in FIELD_1_CONTROL_CODE_VALUES) and code not in FIELD_2_CONTROL_CODE_VALUES]
for cc in other_code_values:
self.assertIsNone(SccControlCode.find(cc))
if __name__ == '__main__':
unittest.main()
| 54.919811 | 111 | 0.686765 | 1,280 | 11,643 | 5.98125 | 0.211719 | 0.107106 | 0.177638 | 0.088819 | 0.791014 | 0.76698 | 0.757576 | 0.749478 | 0.749478 | 0.749478 | 0 | 0.064642 | 0.230697 | 11,643 | 211 | 112 | 55.180095 | 0.790108 | 0.120072 | 0 | 0.44186 | 0 | 0 | 0.000783 | 0 | 0 | 0 | 0.04581 | 0 | 0.447674 | 1 | 0.011628 | false | 0 | 0.011628 | 0 | 0.02907 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
64bf793fad20499aa20b293b2b6919cdc4712ade | 48 | py | Python | app/db/helper/__init__.py | aallali/SFJ-MSR | f87f45df46341ae8d9f2e84484cccbdde26b9baf | [
"MIT"
] | 1 | 2022-03-04T14:32:27.000Z | 2022-03-04T14:32:27.000Z | app/db/helper/__init__.py | aallali/SFJ-MSR | f87f45df46341ae8d9f2e84484cccbdde26b9baf | [
"MIT"
] | null | null | null | app/db/helper/__init__.py | aallali/SFJ-MSR | f87f45df46341ae8d9f2e84484cccbdde26b9baf | [
"MIT"
] | null | null | null | from .user import find_user_by_prop, insert_user | 48 | 48 | 0.875 | 9 | 48 | 4.222222 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b3f5bcd4d24555a0beb074c7205d77fe6e7e930e | 58 | py | Python | src/b.py | yamap55/python_import_sample | c468ac0d076cb61f54ef6133d906f1cb112e2aae | [
"MIT"
] | null | null | null | src/b.py | yamap55/python_import_sample | c468ac0d076cb61f54ef6133d906f1cb112e2aae | [
"MIT"
] | 1 | 2021-03-26T01:09:24.000Z | 2021-03-26T01:09:24.000Z | src/b.py | yamap55/python_import_sample | c468ac0d076cb61f54ef6133d906f1cb112e2aae | [
"MIT"
] | null | null | null | from define import HOGE
def get_hoge():
return HOGE
| 9.666667 | 23 | 0.706897 | 9 | 58 | 4.444444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241379 | 58 | 5 | 24 | 11.6 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
b3fdd7a97cdb529a9d85aee971989061259ac81b | 35,400 | py | Python | models/model_settings.py | guanyuelee/DP-LaSE | 55f83dc04a84aa3d855939626ff165569a60d178 | [
"MIT"
] | 2 | 2021-05-13T17:42:07.000Z | 2021-07-19T04:52:54.000Z | models/model_settings.py | guanyuelee/PmSFC | f26cbb46ba86ba0ee7c2fced982ac07307f3108e | [
"Apache-2.0"
] | null | null | null | models/model_settings.py | guanyuelee/PmSFC | f26cbb46ba86ba0ee7c2fced982ac07307f3108e | [
"Apache-2.0"
] | null | null | null | # python 3.7
"""Contains basic configurations for models used in this project.
Please download the public released models from the following repositories
OR train your own models, and then put them into the folder
`pretrain/tensorflow`.
PGGAN: https://github.com/tkarras/progressive_growing_of_gans
StyleGAN: https://github.com/NVlabs/stylegan
StyleGAN2: https://github.com/NVlabs/stylegan2
NOTE: Any new model should be registered in `MODEL_POOL` before used.
"""
import os
import copy
import json
BASE_DIR = os.path.dirname(os.path.relpath(__file__))
MODEL_DIR = os.path.join(BASE_DIR, 'pretrain')
PTH_MODEL_DIR = 'pytorch'
TF_MODEL_DIR = 'tensorflow'
class BigGANConfig(object):
""" Configuration class to store the configuration of a `BigGAN`.
Defaults are for the 128x128 model.
layers tuple are (up-sample in the layer ?, input channels, output channels)
"""
def __init__(self,
output_dim=128,
z_dim=128,
class_embed_dim=128,
channel_width=128,
num_classes=1000,
layers=[(False, 16, 16),
(True, 16, 16),
(False, 16, 16),
(True, 16, 8),
(False, 8, 8),
(True, 8, 4),
(False, 4, 4),
(True, 4, 2),
(False, 2, 2),
(True, 2, 1)],
attention_layer_position=8,
eps=1e-4,
n_stats=51):
"""Constructs BigGANConfig. """
self.output_dim = output_dim
self.z_dim = z_dim
self.class_embed_dim = class_embed_dim
self.channel_width = channel_width
self.num_classes = num_classes
self.layers = layers
self.attention_layer_position = attention_layer_position
self.eps = eps
self.n_stats = n_stats
@classmethod
def from_dict(cls, json_object):
"""Constructs a `BigGANConfig` from a Python dictionary of parameters."""
config = BigGANConfig()
for key, value in json_object.items():
config.__dict__[key] = value
return config
@classmethod
def from_json_file(cls, json_file):
"""Constructs a `BigGANConfig` from a json file of parameters."""
with open(json_file, "r", encoding='utf-8') as reader:
text = reader.read()
return cls.from_dict(json.loads(text))
def __repr__(self):
return str(self.to_json_string())
def to_dict(self):
"""Serializes this instance to a Python dictionary."""
output = copy.deepcopy(self.__dict__)
return output
def to_json_string(self):
"""Serializes this instance to a JSON string."""
return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
if not os.path.exists(os.path.join(MODEL_DIR, PTH_MODEL_DIR)):
os.makedirs(os.path.join(MODEL_DIR, PTH_MODEL_DIR))
# pylint: disable=line-too-long
MODEL_POOL = {
# PGGAN Official.
'pggan_celebahq': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_celebahq1024_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-celebahq-1024x1024.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'celebahq',
'z_space_dim': 512,
'resolution': 1024,
'fused_scale': False,
},
'pggan_celebahq_disc': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_celebahq1024_discriminator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-celebahq-1024x1024.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan_disc',
'dataset_name': 'celebahq',
'resolution': 1024,
'fused_scale': False,
},
'pggan_bedroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_bedroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-bedroom-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-bedroom',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_bedroom_disc': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_bedroom256_discriminator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-bedroom-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan_disc',
'dataset_name': 'lsun-bedroom',
'resolution': 256,
'fused_scale': False,
},
'pggan_livingroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_livingroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-livingroom-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-livingroom',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_diningroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_diningroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-dining_room-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-diningroom',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_kitchen': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_kitchen256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-kitchen-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-kitchen',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_churchoutdoor': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_churchoutdoor256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-churchoutdoor-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-churchoutdoor',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_churchoutdoor_disc': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_churchoutdoor256_discriminator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-churchoutdoor-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan_disc',
'dataset_name': 'lsun-churchoutdoor',
'resolution': 256,
'fused_scale': False,
},
'pggan_tower': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_tower256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-tower-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-tower',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_bridge': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_bridge256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-bridge-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-bridge',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_restaurant': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_restaurant256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-restaurant-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-restaurant',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_classroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_classroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-classroom-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-classroom',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_conferenceroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_conferenceroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-conferenceroom-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-conferenceroom',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_person': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_person256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-person-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-person',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_cat': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_cat256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-cat-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-cat',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_dog': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_dog256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-dog-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-dog',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_bird': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_bird256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-bird-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-bird',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_horse': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_horse256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-horse-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-horse',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_horse_disc': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_horse256_discriminator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-horse-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan_disc',
'dataset_name': 'lsun-horse',
'resolution': 256,
'fused_scale': False,
},
'pggan_sheep': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_sheep256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-sheep-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-sheep',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_cow': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_cow256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-cow-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-cow',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_car': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_car256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-car-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-car',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_bicycle': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_bicycle256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-bicycle-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-bicycle',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_motorbike': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_motorbike256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-motorbike-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-motorbike',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_bus': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_bus256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-bus-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-bus',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_train': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_train256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-train-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-train',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_boat': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_boat256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-boat-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-boat',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_airplane': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_airplane256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-airplane-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-airplane',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_bottle': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_bottle256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-bottle-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-bottle',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_chair': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_chair256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-chair-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-chair',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_pottedplant': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_pottedplant256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-pottedplant-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-pottedplant',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_tvmonitor': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_tvmonitor256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-tvmonitor-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-tvmonitor',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_diningtable': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_diningtable256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-diningtable-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-diningtable',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
'pggan_sofa': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'pggan_sofa256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2018iclr-lsun-sofa-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'pggan_tf_official'),
'gan_type': 'pggan',
'dataset_name': 'lsun-sofa',
'z_space_dim': 512,
'resolution': 256,
'fused_scale': False,
},
# StyleGAN Official.
'stylegan_ffhq': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_ffhq1024_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2019stylegan-ffhq-1024x1024.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'ffhq',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 1024,
'fused_scale': 'auto',
},
'stylegan_celebahq': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_celebahq1024_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2019stylegan-celebahq-1024x1024.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'celebahq',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 1024,
'fused_scale': 'auto',
},
'stylegan_bedroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_bedroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2019stylegan-bedrooms-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-bedroom',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_cat': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_cat256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2019stylegan-cats-256x256.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-cat',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_car': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_car512_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'karras2019stylegan-cars-512x384.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-car',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 512,
'fused_scale': 'auto',
},
# StyleGAN Self-Training.
'stylegan_ffhq256': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_ffhq256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-ffhq-256x256-025000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'ffhq',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_ffhq512': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_ffhq512_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-ffhq-512x512-025000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'ffhq',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 512,
'fused_scale': 'auto',
},
'stylegan_livingroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_livingroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-livingroom-256x256-030000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-livingroom',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_diningroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_diningroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-diningroom-256x256-025000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-diningroom',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_kitchen': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_kitchen256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-kitchen-256x256-030000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-kitchen',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_apartment': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_apartment256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-apartment-256x256-060000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-bedroom-livingroom-diningroom-kitchen',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_churchoutdoor': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_churchoutdoor256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-churchoutdoor-256x256-030000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-churchoutdoor',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_tower': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_tower256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-tower-256x256-030000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-tower',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_bridge': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_bridge256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-bridge-256x256-025000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-bridge',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_restaurant': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_restaurant256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-restaurant-256x256-050000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-restaurant',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_classroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_classroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-classroom-256x256-050000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-classroom',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
'stylegan_conferenceroom': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan_conferenceroom256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan-conferenceroom-256x256-050000.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan_tf_official'),
'gan_type': 'stylegan',
'dataset_name': 'lsun-conferenceroom',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'fused_scale': 'auto',
},
# StyleGAN2 Official.
'stylegan2_ffhq': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_ffhq1024_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-ffhq-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2',
'dataset_name': 'ffhq',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 1024,
'g_architecture_type': 'skip',
'fused_modulate': True,
},
'stylegan2_ffhq_disc': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_ffhq1024_discriminator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-ffhq-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2_disc',
'dataset_name': 'ffhq',
'resolution': 1024,
'd_architecture_type': 'resnet',
'fused_modulate': True,
},
'stylegan2_church': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_church256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-church-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2',
'dataset_name': 'lsun-church',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'g_architecture_type': 'skip',
'fused_modulate': True,
},
'stylegan2_church_disc': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_church256_discriminator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-church-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2_disc',
'dataset_name': 'church',
'resolution': 256,
'd_architecture_type': 'resnet',
'fused_modulate': True,
},
'stylegan2_cat': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_cat256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-cat-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2',
'dataset_name': 'lsun-cat',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'g_architecture_type': 'skip',
'fused_modulate': True,
},
'stylegan2_horse': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_horse256_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-horse-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2',
'dataset_name': 'lsun-horse',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 256,
'g_architecture_type': 'skip',
'fused_modulate': True,
},
'stylegan2_horse_disc': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_horse256_discriminator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-horse-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2_disc',
'dataset_name': 'horse',
'resolution': 256,
'd_architecture_type': 'resnet',
'fused_modulate': True,
},
'stylegan2_car': {
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'stylegan2_car512_generator.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'stylegan2-car-config-f.pkl'),
'tf_code_path': os.path.join(BASE_DIR, 'stylegan2_tf_official'),
'gan_type': 'stylegan2',
'dataset_name': 'lsun-car',
'z_space_dim': 512,
'w_space_dim': 512,
'resolution': 512,
'g_architecture_type': 'skip',
'fused_modulate': True,
},
'biggandeep128_imagenet':{
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'biggan-deep-128-pytorch_model.bin'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'biggan-deep-128-tensorflow_model.bin'),
'config_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'biggan-deep-128-pytorch_config.json'),
'tf_code_path': os.path.join(BASE_DIR, 'biggan_tf_official'),
'gan_type': 'biggandeep',
'dataset_name': 'imagenet',
'z_space_dim': 128,
'resolution': 128,
},
'biggandeep256_imagenet':{
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'biggan-deep-256-pytorch_model.bin'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'biggan-deep-256-tensorflow_model.bin'),
'config_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'biggan-deep-256-pytorch_config.json'),
'tf_code_path': os.path.join(BASE_DIR, 'biggan_tf_official'),
'gan_type': 'biggandeep',
'dataset_name': 'imagenet',
'z_space_dim': 128,
'resolution': 256,
},
'biggandeep512_imagenet':{
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'biggan-deep-512-pytorch_model.bin'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'biggan-deep-512-tensorflow_model.bin'),
'config_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'biggan-deep-512-pytorch_config.json'),
'tf_code_path': os.path.join(BASE_DIR, 'biggan_tf_official'),
'gan_type': 'biggandeep',
'dataset_name': 'imagenet',
'z_space_dim': 128,
'resolution': 512,
},
'bigganshallow_imagenet':{
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'generators', 'BigGAN', 'G_ema.pth'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'generators', 'BigGAN', 'G_ema.pth'),
'config_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'generators', 'BigGAN', 'generator_config.json'),
'tf_code_path': os.path.join(BASE_DIR, 'biggan_tf_official'),
'gan_type': 'bigganshallow',
'dataset_name': 'imagenet',
'z_space_dim': 120,
'resolution': 128,
},
'sngan_anime':{
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'generators', 'SN_Anime', 'generator.pt'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'generators', 'SN_Anime', 'generator.pt'),
'config_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'generators', 'SN_Anime', 'args.json'),
'tf_code_path': os.path.join(BASE_DIR, 'sngan_tf_official'),
'gan_type': 'sngan',
'dataset_name': 'anime',
'z_space_dim': 128,
'resolution': 64,
},
'sngan_mnist':{
'weight_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'generators', 'SN_MNIST', 'generator.pt'),
'tf_weight_path': os.path.join(MODEL_DIR, TF_MODEL_DIR, 'generators', 'SN_MNIST', 'generator.pt'),
'config_path': os.path.join(MODEL_DIR, PTH_MODEL_DIR, 'generators', 'SN_MNIST', 'args.json'),
'tf_code_path': os.path.join(BASE_DIR, 'sngan_tf_official'),
'gan_type': 'sngan',
'dataset_name': 'mnist',
'z_space_dim': 128,
'resolution': 32,
}
}
# pylint: enable=line-too-long
# Settings for StyleGAN.
STYLEGAN_TRUNCATION_PSI = 0.7 # 1.0 means no truncation
STYLEGAN_TRUNCATION_LAYERS = 8 # 0 means no truncation
STYLEGAN_RANDOMIZE_NOISE = False
# Settings for StyleGAN2.
STYLEGAN2_TRUNCATION_PSI = 0.7 # 1.0 means no truncation
STYLEGAN2_TRUNCATION_LAYERS = 8 # 0 means no truncation
STYLEGAN2_RANDOMIZE_NOISE = False
# Settings for model running.
USE_CUDA = True
MAX_IMAGES_ON_DEVICE = 4
MAX_IMAGES_ON_RAM = 800
| 43.811881 | 114 | 0.635537 | 4,535 | 35,400 | 4.600221 | 0.066814 | 0.108523 | 0.099223 | 0.1369 | 0.869092 | 0.856198 | 0.845221 | 0.836257 | 0.820056 | 0.814879 | 0 | 0.048466 | 0.21548 | 35,400 | 807 | 115 | 43.866171 | 0.702722 | 0.033475 | 0 | 0.536 | 0 | 0 | 0.423157 | 0.141501 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008 | false | 0 | 0.004 | 0.001333 | 0.02 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
37a9e8f0766245c1164bddcd6576f7f7159546fa | 35 | py | Python | src/sage/finance/__init__.py | switzel/sage | 7eb8510dacf61b691664cd8f1d2e75e5d473e5a0 | [
"BSL-1.0"
] | 5 | 2015-01-04T07:15:06.000Z | 2022-03-04T15:15:18.000Z | src/sage/finance/__init__.py | switzel/sage | 7eb8510dacf61b691664cd8f1d2e75e5d473e5a0 | [
"BSL-1.0"
] | null | null | null | src/sage/finance/__init__.py | switzel/sage | 7eb8510dacf61b691664cd8f1d2e75e5d473e5a0 | [
"BSL-1.0"
] | 10 | 2016-09-28T13:12:40.000Z | 2022-02-12T09:28:34.000Z | # Quantitative Finance
import all
| 8.75 | 22 | 0.8 | 4 | 35 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 35 | 3 | 23 | 11.666667 | 0.965517 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
807e509b414505029119463f87456bd539ecbe84 | 13,370 | py | Python | tests/test_nonmobile.py | SandySalvatore/uamobile | 51c637effc65b863c8f1897d971a13bb099bdb84 | [
"MIT"
] | null | null | null | tests/test_nonmobile.py | SandySalvatore/uamobile | 51c637effc65b863c8f1897d971a13bb099bdb84 | [
"MIT"
] | null | null | null | tests/test_nonmobile.py | SandySalvatore/uamobile | 51c637effc65b863c8f1897d971a13bb099bdb84 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from tests import msg
from uamobile import *
from uamobile.nonmobile import NonMobileUserAgent as NonMobile
def test_detect_fast():
assert detect_fast('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1') == 'nonmobile'
def test_empty_useragent():
try:
ua = detect({})
except:
assert False, 'KeyError HTTP_USER_AGENT should be ignored silently'
else:
pass
def test_useragent_nonmobile():
def inner(useragent):
ua = detect({'HTTP_USER_AGENT':useragent})
assert isinstance(ua, NonMobile)
assert ua.carrier == 'NonMobile'
assert ua.short_carrier == 'N'
assert ua.is_docomo() == False
assert ua.is_ezweb() == False
assert ua.is_softbank() == False
assert ua.is_vodafone() == False
assert ua.is_jphone() == False
assert ua.is_willcom() == False
assert ua.is_nonmobile()
assert ua.display is not None
assert ua.supports_cookie() == True
assert ua.serialnumber is None
for ua in DATA:
yield inner, ua
def test_display_default():
ua = detect({'HTTP_USER_AGENT':'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4'})
assert ua.display.width != 0
assert ua.display.height != 0
assert ua.display.color
assert ua.display.depth
assert ua.display.is_vga() is False
assert ua.display.is_qvga() is True
def test_strip_serialnumber():
value = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4'
ua = detect({'HTTP_USER_AGENT': value})
assert ua.strip_serialnumber() == value
#########################
# Test data
#########################
DATA = ('Mozilla/2.0 (compatible; Ask Jeeves)',
'Mozilla/2.0 (compatible; MSIE 3.01; Windows 95)',
'Mozilla/2.0 (compatible; MSIE 3.02; Windows CE)',
'Mozilla/2.0 (compatible; MSIE 3.02; Windows CE; 240x320)',
'Mozilla/2.0 (compatible; MSIE 3.02; Windows CE; 240x320; PPC)',
'Mozilla/2.0 (compatible; MSIE 3.02; Windows CE; PPC; 240x320)',
'Mozilla/2.0 (compatible; T-H-U-N-D-E-R-S-T-O-N-E)',
'Mozilla/3.0 (DreamPassport/3.0)',
'Mozilla/3.0 (DreamPassport/3.15; SONICTEAM/PSOV2)',
'Mozilla/3.0 (DreamPassport/3.2)',
'Mozilla/3.0 (Slurp.so/Goo; slurp@inktomi.com; http://www.inktomi.com/slurp.html)',
'Mozilla/3.0 (Slurp/si; slurp@inktomi.com; http://www.inktomi.com/slurp.html)',
'Mozilla/3.0 (Win95; I)',
'Mozilla/3.0 (Windows 2000; U) Opera 6.05 [ja]',
'Mozilla/3.0 (aruyo/0.01;http://www.aaacafe.ne.jp/ ;support@aaacafe.ne.jp)',
'Mozilla/3.0 (compatible)',
'Mozilla/3.0 (compatible; Indy Library)',
'Mozilla/3.0 (compatible; NetMind-Minder/4.3.1J)',
'Mozilla/3.0 (compatible; NetPositive/2.2.1; BeOS)',
'Mozilla/3.0 (compatible; PerMan Surfer 3.0; Win95)',
'Mozilla/3.0 (compatible;)',
'Mozilla/3.01 (compatible;)',
'Mozilla/3.01 [ja] (Macintosh; I; 68K)',
'Mozilla/3.01Gold (Macintosh; I; 68K)',
'Mozilla/3.01Gold (Macintosh; I; 68K; SiteCoach 1.0)',
'Mozilla/4.0',
'Mozilla/4.0 (LINKS ARoMATIZED)',
'Mozilla/4.0 (PDA; SL-A300/1.0,Embedix/Qtopia/1.1.0) NetFront/3.0',
'Mozilla/4.0 (PDA; Windows CE/0.9.3) NetFront/3.0',
'Mozilla/4.0 (Windows NT 4.0)',
'Mozilla/4.0 (compatible',
'Mozilla/4.0 (compatible; MSIE 4.01; MSN 2.5; Windows 95)',
'Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)',
'Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 4.01; Windows NT Windows CE)',
'Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)',
'Mozilla/4.0 (compatible; MSIE 4.0; Windows 95)',
'Mozilla/4.0 (compatible; MSIE 4.5; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.00; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 5.01; MSN 2.5; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 95)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98; HKBN)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98; MSIECrawler)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98; MSOCD; AtHomeJP0109)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98; Q312461)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98; YComp 5.0.2.4)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98; istb 641)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0) LinkChecker 0.1',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0) WebWasher 3.2',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; DigExt)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; NetCaptor 7.0.1)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0; istb 641)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT; Lunascape 0.99c)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT; Norfolk Southern Corp.)',
'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT; nk-07102k)',
'Mozilla/4.0 (compatible; MSIE 5.0; AOL 7.0; Windows 98; DigExt)',
'Mozilla/4.0 (compatible; MSIE 5.0; Linux 2.2.18-0vl4.2 i686) Opera 6.0 [en]',
'Mozilla/4.0 (compatible; MSIE 5.0; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.0; Mac_PowerPC; AtHomeJP191)',
'Mozilla/4.0 (compatible; MSIE 5.0; Mac_PowerPC;)',
'Mozilla/4.0 (compatible; MSIE 5.0; Win32)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.0 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.03 [en]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.03 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.05 [en]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 2000) Opera 6.05 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 95)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 95; DigExt)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98) Opera 5.12 [es]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98) Opera 6.03 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98) Opera 6.05 [en]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98) Opera 6.05 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)::ELNSB50::0000211003200258031a018f000000000505000b00000000',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt; YComp 5.0.0.0)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; Hotbar 3.0)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows ME) Opera 6.03 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows ME) Opera 6.05 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT 4.0) Opera 6.0 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT 4.0) Opera 6.01 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT 4.0) Opera 6.03 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT 4.0) Opera 6.05 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT 5.0)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt; DTS Agent',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows XP) Opera 6.01 [de]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows XP) Opera 6.03 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows XP) Opera 6.04 [en]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows XP) Opera 6.04 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.0; Windows XP) Opera 6.05 [ja]',
'Mozilla/4.0 (compatible; MSIE 5.12; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.14; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.16; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.21; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.22; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.2; Mac_PowerPC)',
'Mozilla/4.0 (compatible; MSIE 5.2; Mac_PowerPC) OmniWeb/4.1.1-v424.6',
'Mozilla/4.0 (compatible; MSIE 5.5; AOL 6.0; Windows 98; Win 9x 4.90)',
'Mozilla/4.0 (compatible; MSIE 5.5; MSN 2.5; AOL 7.0; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 5.5; MSN 2.5; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 95)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 95; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 95; YComp 5.0.0.0)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 95; ie5.5cd_t-zone_0005)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; DigExt)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; H010818)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; MSIECrawler)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; MSN 6.1; MSNbMSFT; MSNmja-jp; MSNc00)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; MSOCD; AtHomeJP191)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Q312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Q312461; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; T312461; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; T312461; istb 641)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; T312461; istb 641; COM+ 1.0.2204)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; Lunascape 0.98d)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; MSOCD; AtHomeJP0109)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; Q312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; Q312461; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; T312461; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; T312461; Lunascape 0.99c)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; Unithink)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90; telus.net_v5.0.1; Hotbar 4.0)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; telus.net_v5.0.1)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; H010818)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; H010818; CPT-IE401SP1; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; SenseWave 1.0)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; Suncorp Metway Ltd)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; T312461; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; T312461; Lunascape 0.95a)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0; Yahoo! JAPAN Version Windows 95/NT CD-ROM Edition 1.0.)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; (R1 1.1))',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; (R1 1.1); (R1 1.3))',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; (R1 1.3))',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; AIRF)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; DigExt)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; FORJEIS55SP1)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; H010818)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; Hotbar 3.0; istb 641)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; Hotbar 4.1.7.0)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; Lunascape 0.98c)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; N_o_k_i_a)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; Q312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; Q312461; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; T312461)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; T312461; .NET CLR 1.0.3705)',
'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0; T312461; Hewle'
)
| 59.6875 | 131 | 0.608153 | 2,197 | 13,370 | 3.678198 | 0.106054 | 0.038362 | 0.154808 | 0.31506 | 0.787031 | 0.758569 | 0.744091 | 0.731964 | 0.694963 | 0.577528 | 0 | 0.148856 | 0.22169 | 13,370 | 223 | 132 | 59.955157 | 0.627715 | 0.002319 | 0 | 0 | 0 | 0.449275 | 0.751995 | 0.012344 | 0 | 0 | 0 | 0 | 0.10628 | 1 | 0.028986 | false | 0.019324 | 0.014493 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
80879172c827a2e342271dcf61bb378ea7e8edbf | 106 | py | Python | Ar_Script/unittest_demo/Cal_demo/doc_test/test_demo.py | archerckk/PyTest | 610dd89df8d70c096f4670ca11ed2f0ca3196ca5 | [
"MIT"
] | null | null | null | Ar_Script/unittest_demo/Cal_demo/doc_test/test_demo.py | archerckk/PyTest | 610dd89df8d70c096f4670ca11ed2f0ca3196ca5 | [
"MIT"
] | 1 | 2020-01-19T01:19:57.000Z | 2020-01-19T01:19:57.000Z | Ar_Script/unittest_demo/Cal_demo/doc_test/test_demo.py | archerckk/PyTest | 610dd89df8d70c096f4670ca11ed2f0ca3196ca5 | [
"MIT"
] | null | null | null | import unittest
class Test_print(unittest.TestCase):
def test_print(self):
print('test aaa') | 17.666667 | 36 | 0.698113 | 14 | 106 | 5.142857 | 0.642857 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198113 | 106 | 6 | 37 | 17.666667 | 0.847059 | 0 | 0 | 0 | 0 | 0 | 0.074766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0.75 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 6 |
8090e8523b1939839f8f84195204224c06d74654 | 16,336 | py | Python | wiwo/frames.py | CoreSecurity/wiwo | 44bd44b8ebea7e33105a7f4dac6480493cbb9623 | [
"Apache-1.1"
] | 76 | 2015-08-01T23:24:43.000Z | 2018-07-02T11:13:16.000Z | wiwo/frames.py | 6e726d/wiwo | 44bd44b8ebea7e33105a7f4dac6480493cbb9623 | [
"Apache-1.1"
] | 1 | 2016-01-28T22:11:17.000Z | 2016-02-03T22:14:46.000Z | wiwo/frames.py | 6e726d/wiwo | 44bd44b8ebea7e33105a7f4dac6480493cbb9623 | [
"Apache-1.1"
] | 27 | 2015-08-11T07:24:42.000Z | 2018-10-05T11:09:54.000Z | #!/usr/bin/env python
# -*- coding: iso-8859-15 -*-
#
# Copyright 2003-2015 CORE Security Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authors:
# Andres Blanco (6e726d)
# Andres Gazzoli
#
import array
import struct
from impacket.ImpactPacket import ProtocolPacket
class WiwoFrame(ProtocolPacket):
"""
Represents the header that appears on every wiwo frame.
+-----------------+--------+
| Ethernet Header | Type |
+-----------------+--------+
14 bytes 1 byte
"""
ethertype = 0xFAFA
def __init__(self, buff=None):
header_size = 1
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_type(self):
return self.header.get_byte(0)
def set_type(self, value):
self.header.set_byte(0, value)
class WiwoEmptyFrame(ProtocolPacket):
"""
Represents the header that appears on every Wiwo empty frame.
"""
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
class WiwoAckFrame(WiwoEmptyFrame):
"""
Represents the header that appears on every wiwo ACK frame. This is an empty header.
"""
frametype = 0x00
class WiwoAnnounceFrame(WiwoEmptyFrame):
"""
Represents the header that appears on every wiwo announce frame. This is an empty header.
"""
frametype = 0x01
class WiwoInfoRequestFrame(WiwoEmptyFrame):
"""
Represents the header that appears on every wiwo info request frame. This is an empty header.
"""
frametype = 0x02
class WiwoInfoResponseFrame(ProtocolPacket):
"""
Represents the header that appears on every wiwo info response frame.
+-----------------+--------+---------------------+
| Ethernet Header | Type | Info Response Items |
+-----------------+--------+---------------------+
14 bytes 1 byte N bytes
|
+-----------------------------+
|
+-------------+------------+-------------+---------------+-------------+--------------------+---------+
| Item Length | Iface Name | Item Length | Protocol Name | Item Length | Supported Channels | Channel |
+-------------+------------+-------------+---------------+-------------+--------------------+---------+
1 byte N bytes 1 byte N bytes 1 byte N bytes 1 byte
"""
frametype = 0x03
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_interfaces(self):
interfaces = list()
buff = self.body.get_bytes()
while len(buff):
wirfi = WiwoInfoResponseFrameInterface(buff)
if wirfi.get_iface_len() == 0:
break
interfaces.append(wirfi)
buff = buff[wirfi.get_len():]
return interfaces
class WiwoInfoResponseFrameInterface(ProtocolPacket):
SIZE_ITEM_LENGTH = 1
SIZE_ITEM_CHANNEL = 1
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_len(self):
result = self.SIZE_ITEM_LENGTH + self.get_iface_len()
result += self.SIZE_ITEM_LENGTH + self.get_protocol_len()
result += self.SIZE_ITEM_LENGTH + self.get_channels_count()
result += self.SIZE_ITEM_CHANNEL
return result
def get_iface_len(self):
return self.body.get_byte(0)
def set_iface_len(self, value):
self.body.set_byte(0, value)
def get_iface(self):
return self.body.get_bytes()[self.SIZE_ITEM_LENGTH:self.get_iface_len()+self.SIZE_ITEM_LENGTH]
def get_iface_as_string(self):
return self.get_iface().tostring()
def set_iface(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = value
def set_iface_from_string(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = array.array('B', value)
def get_protocol_len(self):
return self.body.get_byte(self.SIZE_ITEM_LENGTH + self.get_iface_len())
def set_protocol_len(self, value):
self.body.set_byte(self.SIZE_ITEM_LENGTH + self.get_iface_len(), value)
def get_protocol(self):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH
return self.body.get_bytes()[offset:offset+self.get_protocol_len()]
def get_protocol_as_string(self):
return self.get_protocol().tostring()
def set_protocol(self, value):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH
self.body.get_bytes()[offset:offset+self.get_protocol_len()] = value
def set_protocol_from_string(self, value):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH
self.body.get_bytes()[offset:offset+self.get_protocol_len()] = array.array('B', value)
def get_channels_count(self):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH + self.get_protocol_len()
return self.body.get_byte(offset)
def set_channels_count(self, value):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH + self.get_protocol_len()
self.body.set_byte(offset, value)
def get_channels(self):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH + self.get_protocol_len()
offset += self.SIZE_ITEM_LENGTH
return self.body.get_bytes()[offset:offset+self.get_channels_count()]
def set_channels(self, value):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH + self.get_protocol_len()
offset += self.SIZE_ITEM_LENGTH
self.body.get_bytes()[offset:offset+self.get_channels_count()] = value
def get_channel(self):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH + self.get_protocol_len()
offset += self.SIZE_ITEM_LENGTH + self.get_channels_count()
return self.body.get_byte(offset)
def set_channel(self, value):
offset = int()
offset += self.SIZE_ITEM_LENGTH + self.get_iface_len()
offset += self.SIZE_ITEM_LENGTH + self.get_protocol_len()
offset += self.SIZE_ITEM_LENGTH + self.get_channels_count()
self.body.set_byte(offset, value)
class WiwoSetChannelFrame(ProtocolPacket):
"""
Represents the header that appears on every wiwo set channel frame.
+-----------------+--------+--------------+------------+---------+
| Ethernet Header | Type | Iface Length | Iface Name | Channel |
+-----------------+--------+--------------+------------+---------+
14 bytes 1 byte 1 byte N bytes 1 byte
"""
SIZE_ITEM_LENGTH = 1
SIZE_ITEM_CHANNEL = 1
frametype = 0x04
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_iface_len(self):
return self.body.get_byte(0)
def set_iface_len(self, value):
self.body.set_byte(0, value)
def get_iface(self):
return self.body.get_bytes()[self.SIZE_ITEM_LENGTH:self.SIZE_ITEM_LENGTH+self.get_iface_len()]
def get_iface_as_string(self):
return self.get_iface().tostring()
def set_iface(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = value
def set_iface_from_string(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = array.array('B', value)
def get_channel(self):
return self.body.get_byte(self.SIZE_ITEM_LENGTH+self.get_iface_len())
def set_channel(self, value):
self.body.set_byte(self.SIZE_ITEM_LENGTH+self.get_iface_len(), value)
class WiwoStartFrame(ProtocolPacket):
"""
Represents the header that appears on every wiwo start frame.
+-----------------+--------+--------------+------------+-------------------+------------+
| Ethernet Header | Type | Iface Length | Iface Name | BPF Filter Length | BPF Filter |
+-----------------+--------+--------------+------------+-------------------+------------+
14 bytes 1 byte 1 byte N bytes 2 bytes N bytes
"""
SIZE_ITEM_LENGTH = 1
SIZE_BPF_FILTER_LENGTH = 2
frametype = 0x05
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_iface_len(self):
return self.body.get_byte(0)
def set_iface_len(self, value):
self.body.set_byte(0, value)
def get_iface(self):
return self.body.get_bytes()[self.SIZE_ITEM_LENGTH:self.SIZE_ITEM_LENGTH+self.get_iface_len()]
def get_iface_as_string(self):
return self.get_iface().tostring()
def set_iface(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = value
def set_iface_from_string(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = array.array('B', value)
def get_filter_len(self):
return self.body.get_word(self.SIZE_ITEM_LENGTH+self.get_iface_len())
def set_filter_len(self, value):
self.body.set_word(self.SIZE_ITEM_LENGTH+self.get_iface_len(), value)
def get_filter(self):
offset = int()
offset += self.SIZE_ITEM_LENGTH+self.get_iface_len()+self.SIZE_BPF_FILTER_LENGTH
return self.body.get_bytes()[offset:]
def get_filter_as_string(self):
return self.get_filter().tostring()
def set_filter(self, value):
offset = int()
offset += self.SIZE_ITEM_LENGTH+self.get_iface_len()+self.SIZE_BPF_FILTER_LENGTH
self.body.get_bytes()[offset:] = value
def set_filter_from_string(self, value):
offset = int()
offset += self.SIZE_ITEM_LENGTH+self.get_iface_len()+self.SIZE_BPF_FILTER_LENGTH
self.body.get_bytes()[offset:] = array.array('B', value)
class WiwoStopFrame(WiwoEmptyFrame):
"""
Represents the header that appears on every wiwo stop frame. This is an empty header.
"""
frametype = 0x06
class WiwoDataFrame(ProtocolPacket):
"""
This represents the header that appears on every wiwo data frame.
+-----------------+--------+----------+
| Ethernet Header | Type | Data |
+-----------------+--------+----------+
14 bytes 1 byte N bytes
"""
frametype = 0x07
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_data(self):
return self.body.get_bytes()
def get_data_as_string(self):
return self.get_data().tostring()
def set_data(self, value):
self.body.get_bytes()[:] = value
def set_data_from_string(self, value):
self.body.get_bytes()[:] = array.array('B', value)
class WiwoDataFragmentFrame(ProtocolPacket):
"""
Represents the header that appears on every wiwo data frame.
+-----------------+--------+------------+----------+
| Ethernet Header | Type | Seq Ctrl | Data |
+-----------------+--------+------------+----------+
14 bytes 1 byte 1 byte N bytes
"""
SEQUENCE_NUMBER_MASK = 0x7F
LAST_FRAGMENT_MASK = 0x80
SIZE_SEQ_CTRL_LENGTH = 1
frametype = 0x08
def __init__(self, buff=None):
header_size = 1
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_sequence_control(self):
return self.header.get_byte(0)
def set_sequence_control(self, value):
self.header.set_byte(0, value)
def get_sequence_number(self):
return struct.unpack("B", chr(self.get_sequence_control()))[0] & self.SEQUENCE_NUMBER_MASK
def is_last_fragment(self):
return bool((self.get_sequence_control() & self.LAST_FRAGMENT_MASK) >> 7)
def get_data(self):
return self.body.get_bytes()
def get_data_as_string(self):
return self.get_data().tostring()
def set_data(self, value):
self.body.get_bytes()[:] = value
def set_data_from_string(self, value):
self.body.get_bytes()[:] = array.array('B', value)
class WiwoDataInjectFrame(ProtocolPacket):
"""
Represents the header that appears on every wiwo data inject frame.
+-----------------+--------+--------------+------------+----------+
| Ethernet Header | Type | Iface Length | Iface Name | Data |
+-----------------+--------+--------------+------------+----------+
14 bytes 1 byte 1 byte N bytes N bytes
"""
SIZE_ITEM_LENGTH = 1
frametype = 0x09
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_iface_len(self):
return self.body.get_byte(0)
def set_iface_len(self, value):
self.body.set_byte(0, value)
def get_iface(self):
return self.body.get_bytes()[self.SIZE_ITEM_LENGTH:self.SIZE_ITEM_LENGTH+self.get_iface_len()]
def get_iface_as_string(self):
return self.get_iface().tostring()
def set_iface(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = value
def set_iface_from_string(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH:] = array.array('B', value)
def get_data(self):
return self.body.get_bytes()[self.SIZE_ITEM_LENGTH+self.get_iface_len():]
def get_data_as_string(self):
return self.get_data().tostring()
def set_data(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH+self.get_iface_len():] = value
def set_data_from_string(self, value):
self.body.get_bytes()[self.SIZE_ITEM_LENGTH+self.get_iface_len():] = array.array('B', value)
class WiwoErrorFrame(ProtocolPacket):
"""
Represents the header that appears on every wiwo error frame.
+-----------------+--------+---------------+
| Ethernet Header | Type | Error Message |
+-----------------+--------+---------------+
14 bytes 1 byte N bytes
"""
frametype = 0x0A
def __init__(self, buff=None):
header_size = 0
tail_size = 0
ProtocolPacket.__init__(self, header_size, tail_size)
if buff:
self.load_packet(buff)
def get_msg(self):
return self.body.get_bytes()
def get_msg_as_string(self):
return self.get_msg().tostring()
def set_msg(self, value):
self.body.get_bytes()[:] = value
def set_msg_from_string(self, value):
self.body.get_bytes()[:] = array.array('B', value)
| 31.295019 | 107 | 0.595311 | 2,004 | 16,336 | 4.576846 | 0.097804 | 0.065416 | 0.087004 | 0.104012 | 0.776167 | 0.766463 | 0.741932 | 0.712167 | 0.655691 | 0.576319 | 0 | 0.011114 | 0.245409 | 16,336 | 521 | 108 | 31.355086 | 0.732944 | 0.236655 | 0 | 0.669014 | 0 | 0 | 0.000909 | 0 | 0 | 0 | 0.004794 | 0 | 0 | 1 | 0.274648 | false | 0 | 0.010563 | 0.102113 | 0.538732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
80d79653ca6db38a6bcd8417b091242eb6858bf7 | 1,071 | py | Python | src/tests/test_node_scale.py | PearCoding/SpriteRecourceCompiler | 34dcd9175f92e580705a2f07998046a05a19329b | [
"MIT"
] | 1 | 2016-04-16T21:33:58.000Z | 2016-04-16T21:33:58.000Z | src/tests/test_node_scale.py | PearCoding/SpriteRecourceCompiler | 34dcd9175f92e580705a2f07998046a05a19329b | [
"MIT"
] | null | null | null | src/tests/test_node_scale.py | PearCoding/SpriteRecourceCompiler | 34dcd9175f92e580705a2f07998046a05a19329b | [
"MIT"
] | null | null | null | import src
FILES = ["./src/tests/files/test_1.png", "./src/tests/files/test_2.png", "./src/tests/files/test_3.png"]
def test_node_scale_sqr():
p = src.Parser()
p.from_string(
"""
<package>
<input filter="*">
<output>
<scale factor="2" />
</output>
</input>
</package>
""")
proc = src.Processor()
p.parse(proc)
outputs = proc.execute(FILES)
assert len(outputs) == 3
for output in outputs:
assert output.width == 64
assert output.height == 64
def test_node_scale():
p = src.Parser()
p.from_string(
"""
<package>
<input filter="*">
<output>
<scale factor="2,4" />
</output>
</input>
</package>
""")
proc = src.Processor()
p.parse(proc)
outputs = proc.execute(FILES)
assert len(outputs) == 3
for output in outputs:
assert output.width == 64
assert output.height == 128
| 21 | 104 | 0.485528 | 115 | 1,071 | 4.434783 | 0.313043 | 0.094118 | 0.076471 | 0.1 | 0.847059 | 0.768627 | 0.768627 | 0.768627 | 0.768627 | 0.768627 | 0 | 0.025148 | 0.368814 | 1,071 | 50 | 105 | 21.42 | 0.72929 | 0 | 0 | 0.666667 | 0 | 0 | 0.122628 | 0.122628 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.083333 | false | 0 | 0.041667 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0389f442dabd87ae721a0c46d5a51f3b54a1bf04 | 2,088 | py | Python | tests/test_data.py | lradomski10m/nasa-fevo | 92cc11097766e94346bc2b0b0819e9191f8b04bf | [
"MIT"
] | null | null | null | tests/test_data.py | lradomski10m/nasa-fevo | 92cc11097766e94346bc2b0b0819e9191f8b04bf | [
"MIT"
] | null | null | null | tests/test_data.py | lradomski10m/nasa-fevo | 92cc11097766e94346bc2b0b0819e9191f8b04bf | [
"MIT"
] | null | null | null | FIRST_PHOTO_URL = "https://mars.nasa.gov/msl-raw-images/proj/msl/redops/ods/surface/sol/03418/opgs/edr/ncam/NLB_700915024EDR_F0933240CCAM03417M_.JPG"
TEST_RESP = {"photos":
[
{
"id": 948763,
"sol": 3418,
"camera": {
"id": 26,
"name": "NAVCAM",
"rover_id": 5,
"full_name": "Navigation Camera"
},
"img_src": FIRST_PHOTO_URL,
"earth_date": "2022-03-18",
"rover": {
"id": 5,
"name": "Curiosity",
"landing_date": "2012-08-06",
"launch_date": "2011-11-26",
"status": "active"
}
},
{
"id": 948764,
"sol": 3418,
"camera": {
"id": 26,
"name": "NAVCAM",
"rover_id": 5,
"full_name": "Navigation Camera"
},
"img_src": "https://mars.nasa.gov/msl-raw-images/proj/msl/redops/ods/surface/sol/03418/opgs/edr/ncam/NRB_700922659EDR_F0933240NCAM00560M_.JPG",
"earth_date": "2022-03-18",
"rover": {
"id": 5,
"name": "Curiosity",
"landing_date": "2012-08-06",
"launch_date": "2011-11-26",
"status": "active"
}
},
{
"id": 948765,
"sol": 3418,
"camera": {
"id": 26,
"name": "NAVCAM",
"rover_id": 5,
"full_name": "Navigation Camera"
},
"img_src": "https://mars.nasa.gov/msl-raw-images/proj/msl/redops/ods/surface/sol/03418/opgs/edr/ncam/NRB_700922621EDR_F0933240NCAM00560M_.JPG",
"earth_date": "2022-03-18",
"rover": {
"id": 5,
"name": "Curiosity",
"landing_date": "2012-08-06",
"launch_date": "2011-11-26",
"status": "active"
}
}
]
}
| 32.123077 | 155 | 0.416667 | 192 | 2,088 | 4.364583 | 0.3125 | 0.050119 | 0.057279 | 0.057279 | 0.856802 | 0.856802 | 0.856802 | 0.856802 | 0.856802 | 0.856802 | 0 | 0.160535 | 0.427203 | 2,088 | 64 | 156 | 32.625 | 0.540134 | 0 | 0 | 0.629032 | 0 | 0.048387 | 0.416866 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ff0154d982ba9b89d883e90dbe2179e1b67dee3a | 53 | py | Python | tests/fakeapi/foo_bar.py | athenianco/especifico | af8b97868390ba23a2c5e3e8506bd5215ee0084a | [
"Apache-2.0"
] | null | null | null | tests/fakeapi/foo_bar.py | athenianco/especifico | af8b97868390ba23a2c5e3e8506bd5215ee0084a | [
"Apache-2.0"
] | null | null | null | tests/fakeapi/foo_bar.py | athenianco/especifico | af8b97868390ba23a2c5e3e8506bd5215ee0084a | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
def search():
return ""
| 8.833333 | 22 | 0.584906 | 7 | 53 | 4.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.226415 | 53 | 5 | 23 | 10.6 | 0.731707 | 0.396226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
ff021c90419f2bc174ef1eaa8357be66753ea260 | 6,543 | py | Python | huaweicloud-sdk-rms/huaweicloudsdkrms/v1/model/__init__.py | wuchen-huawei/huaweicloud-sdk-python-v3 | 3683d703f4320edb2b8516f36f16d485cff08fc2 | [
"Apache-2.0"
] | 64 | 2020-06-12T07:05:07.000Z | 2022-03-30T03:32:50.000Z | huaweicloud-sdk-rms/huaweicloudsdkrms/v1/model/__init__.py | wuchen-huawei/huaweicloud-sdk-python-v3 | 3683d703f4320edb2b8516f36f16d485cff08fc2 | [
"Apache-2.0"
] | 11 | 2020-07-06T07:56:54.000Z | 2022-01-11T11:14:40.000Z | huaweicloud-sdk-rms/huaweicloudsdkrms/v1/model/__init__.py | wuchen-huawei/huaweicloud-sdk-python-v3 | 3683d703f4320edb2b8516f36f16d485cff08fc2 | [
"Apache-2.0"
] | 24 | 2020-06-08T11:42:13.000Z | 2022-03-04T06:44:08.000Z | # coding: utf-8
from __future__ import absolute_import
# import models into model package
from huaweicloudsdkrms.v1.model.channel_config_body import ChannelConfigBody
from huaweicloudsdkrms.v1.model.create_policy_assignments_request import CreatePolicyAssignmentsRequest
from huaweicloudsdkrms.v1.model.create_policy_assignments_response import CreatePolicyAssignmentsResponse
from huaweicloudsdkrms.v1.model.create_tracker_config_request import CreateTrackerConfigRequest
from huaweicloudsdkrms.v1.model.create_tracker_config_response import CreateTrackerConfigResponse
from huaweicloudsdkrms.v1.model.delete_policy_assignment_request import DeletePolicyAssignmentRequest
from huaweicloudsdkrms.v1.model.delete_policy_assignment_response import DeletePolicyAssignmentResponse
from huaweicloudsdkrms.v1.model.delete_tracker_config_request import DeleteTrackerConfigRequest
from huaweicloudsdkrms.v1.model.delete_tracker_config_response import DeleteTrackerConfigResponse
from huaweicloudsdkrms.v1.model.disable_policy_assignment_request import DisablePolicyAssignmentRequest
from huaweicloudsdkrms.v1.model.disable_policy_assignment_response import DisablePolicyAssignmentResponse
from huaweicloudsdkrms.v1.model.enable_policy_assignment_request import EnablePolicyAssignmentRequest
from huaweicloudsdkrms.v1.model.enable_policy_assignment_response import EnablePolicyAssignmentResponse
from huaweicloudsdkrms.v1.model.history_item import HistoryItem
from huaweicloudsdkrms.v1.model.list_all_resources_request import ListAllResourcesRequest
from huaweicloudsdkrms.v1.model.list_all_resources_response import ListAllResourcesResponse
from huaweicloudsdkrms.v1.model.list_built_in_policy_definitions_request import ListBuiltInPolicyDefinitionsRequest
from huaweicloudsdkrms.v1.model.list_built_in_policy_definitions_response import ListBuiltInPolicyDefinitionsResponse
from huaweicloudsdkrms.v1.model.list_policy_assignments_request import ListPolicyAssignmentsRequest
from huaweicloudsdkrms.v1.model.list_policy_assignments_response import ListPolicyAssignmentsResponse
from huaweicloudsdkrms.v1.model.list_policy_states_by_assignment_id_request import ListPolicyStatesByAssignmentIdRequest
from huaweicloudsdkrms.v1.model.list_policy_states_by_assignment_id_response import ListPolicyStatesByAssignmentIdResponse
from huaweicloudsdkrms.v1.model.list_policy_states_by_domain_id_request import ListPolicyStatesByDomainIdRequest
from huaweicloudsdkrms.v1.model.list_policy_states_by_domain_id_response import ListPolicyStatesByDomainIdResponse
from huaweicloudsdkrms.v1.model.list_policy_states_by_resource_id_request import ListPolicyStatesByResourceIdRequest
from huaweicloudsdkrms.v1.model.list_policy_states_by_resource_id_response import ListPolicyStatesByResourceIdResponse
from huaweicloudsdkrms.v1.model.list_providers_request import ListProvidersRequest
from huaweicloudsdkrms.v1.model.list_providers_response import ListProvidersResponse
from huaweicloudsdkrms.v1.model.list_regions_request import ListRegionsRequest
from huaweicloudsdkrms.v1.model.list_regions_response import ListRegionsResponse
from huaweicloudsdkrms.v1.model.list_resources_request import ListResourcesRequest
from huaweicloudsdkrms.v1.model.list_resources_response import ListResourcesResponse
from huaweicloudsdkrms.v1.model.page_info import PageInfo
from huaweicloudsdkrms.v1.model.policy_assignment import PolicyAssignment
from huaweicloudsdkrms.v1.model.policy_assignment_request_body import PolicyAssignmentRequestBody
from huaweicloudsdkrms.v1.model.policy_definition import PolicyDefinition
from huaweicloudsdkrms.v1.model.policy_filter_definition import PolicyFilterDefinition
from huaweicloudsdkrms.v1.model.policy_parameter_definition import PolicyParameterDefinition
from huaweicloudsdkrms.v1.model.policy_parameter_value import PolicyParameterValue
from huaweicloudsdkrms.v1.model.policy_state import PolicyState
from huaweicloudsdkrms.v1.model.region import Region
from huaweicloudsdkrms.v1.model.resource_entity import ResourceEntity
from huaweicloudsdkrms.v1.model.resource_provider_response import ResourceProviderResponse
from huaweicloudsdkrms.v1.model.resource_relation import ResourceRelation
from huaweicloudsdkrms.v1.model.resource_type_response import ResourceTypeResponse
from huaweicloudsdkrms.v1.model.run_evaluation_by_policy_assignment_id_request import RunEvaluationByPolicyAssignmentIdRequest
from huaweicloudsdkrms.v1.model.run_evaluation_by_policy_assignment_id_response import RunEvaluationByPolicyAssignmentIdResponse
from huaweicloudsdkrms.v1.model.selector_config_body import SelectorConfigBody
from huaweicloudsdkrms.v1.model.show_built_in_policy_definition_request import ShowBuiltInPolicyDefinitionRequest
from huaweicloudsdkrms.v1.model.show_built_in_policy_definition_response import ShowBuiltInPolicyDefinitionResponse
from huaweicloudsdkrms.v1.model.show_evaluation_state_by_assignment_id_request import ShowEvaluationStateByAssignmentIdRequest
from huaweicloudsdkrms.v1.model.show_evaluation_state_by_assignment_id_response import ShowEvaluationStateByAssignmentIdResponse
from huaweicloudsdkrms.v1.model.show_policy_assignment_request import ShowPolicyAssignmentRequest
from huaweicloudsdkrms.v1.model.show_policy_assignment_response import ShowPolicyAssignmentResponse
from huaweicloudsdkrms.v1.model.show_resource_by_id_request import ShowResourceByIdRequest
from huaweicloudsdkrms.v1.model.show_resource_by_id_response import ShowResourceByIdResponse
from huaweicloudsdkrms.v1.model.show_resource_history_request import ShowResourceHistoryRequest
from huaweicloudsdkrms.v1.model.show_resource_history_response import ShowResourceHistoryResponse
from huaweicloudsdkrms.v1.model.show_resource_relations_request import ShowResourceRelationsRequest
from huaweicloudsdkrms.v1.model.show_resource_relations_response import ShowResourceRelationsResponse
from huaweicloudsdkrms.v1.model.show_tracker_config_request import ShowTrackerConfigRequest
from huaweicloudsdkrms.v1.model.show_tracker_config_response import ShowTrackerConfigResponse
from huaweicloudsdkrms.v1.model.tracker_config_body import TrackerConfigBody
from huaweicloudsdkrms.v1.model.tracker_obs_channel_config_body import TrackerOBSChannelConfigBody
from huaweicloudsdkrms.v1.model.tracker_smn_channel_config_body import TrackerSMNChannelConfigBody
from huaweicloudsdkrms.v1.model.update_policy_assignment_request import UpdatePolicyAssignmentRequest
from huaweicloudsdkrms.v1.model.update_policy_assignment_response import UpdatePolicyAssignmentResponse
| 89.630137 | 128 | 0.925416 | 694 | 6,543 | 8.412104 | 0.182997 | 0.241007 | 0.26396 | 0.321343 | 0.508222 | 0.438849 | 0.367592 | 0.152449 | 0.137376 | 0.099349 | 0 | 0.010863 | 0.043252 | 6,543 | 72 | 129 | 90.875 | 0.921725 | 0.00703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ff02fa85f279c717f1cbb3e5517fc2c042727d2c | 1,239 | py | Python | vocabs2/run.py | networkedsystems/PercEvite | 709386b2c6fc0619b8b7df55ae2feb9806cfed9e | [
"BSD-4-Clause"
] | 3 | 2019-10-23T12:53:02.000Z | 2021-05-16T00:55:22.000Z | vocabs2/run.py | networkedsystems/PercEvite | 709386b2c6fc0619b8b7df55ae2feb9806cfed9e | [
"BSD-4-Clause"
] | 1 | 2020-02-10T18:38:10.000Z | 2020-02-10T18:38:10.000Z | vocabs2/run.py | networkedsystems/PercEvite | 709386b2c6fc0619b8b7df55ae2feb9806cfed9e | [
"BSD-4-Clause"
] | 3 | 2020-01-24T19:26:25.000Z | 2020-11-09T20:18:13.000Z | from subprocess import run
run(["./build/vocabs2.exe", "A"])
run(["./build/vocabs2.exe", "C"])
run(["./build/vocabs2.exe", "E"])
run(["./build/vocabs2.exe", "A", "0.5"])
run(["./build/vocabs2.exe", "C", "0.5"])
run(["./build/vocabs2.exe", "E", "0.5"])
run(["./build/vocabs2.exe", "A", "1"])
run(["./build/vocabs2.exe", "C", "1"])
run(["./build/vocabs2.exe", "E", "1"])
run(["./build/vocabs2.exe", "A", "1.5"])
run(["./build/vocabs2.exe", "C", "1.5"])
run(["./build/vocabs2.exe", "E", "1.5"])
run(["./build/vocabs2.exe", "A", "2"])
run(["./build/vocabs2.exe", "C", "2"])
run(["./build/vocabs2.exe", "E", "2"])
run(["./build/vocabs2.exe", "A", "2.5"])
run(["./build/vocabs2.exe", "C", "2.5"])
run(["./build/vocabs2.exe", "E", "2.5"])
run(["./build/vocabs2.exe", "A", "3"])
run(["./build/vocabs2.exe", "C", "3"])
run(["./build/vocabs2.exe", "E", "3"])
run(["./build/vocabs2.exe", "A", "3.5"])
run(["./build/vocabs2.exe", "C", "3.5"])
run(["./build/vocabs2.exe", "E", "3.5"])
run(["./build/vocabs2.exe", "A", "4"])
run(["./build/vocabs2.exe", "C", "4"])
run(["./build/vocabs2.exe", "E", "4"])
run(["./build/vocabs2.exe", "A", "5"])
run(["./build/vocabs2.exe", "C", "5"])
run(["./build/vocabs2.exe", "E", "5"])
| 37.545455 | 41 | 0.510089 | 193 | 1,239 | 3.274611 | 0.082902 | 0.379747 | 0.712025 | 0.85443 | 0.962025 | 0.746835 | 0 | 0 | 0 | 0 | 0 | 0.061883 | 0.100081 | 1,239 | 32 | 42 | 38.71875 | 0.504933 | 0 | 0 | 0 | 0 | 0 | 0.539354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.032258 | 0 | 0.032258 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
20885efb2380f8bb8d19733ec5883600da721668 | 82 | py | Python | src/aspire/noise/__init__.py | PrincetonUniversity/ASPIRE-Python | 1bff8d3884183203bd77695a76bccb1efc909fd3 | [
"MIT"
] | 7 | 2018-11-07T16:45:35.000Z | 2020-01-10T16:54:26.000Z | src/aspire/noise/__init__.py | PrincetonUniversity/ASPIRE-Python | 1bff8d3884183203bd77695a76bccb1efc909fd3 | [
"MIT"
] | 1 | 2019-04-05T18:41:39.000Z | 2019-04-05T18:41:39.000Z | src/aspire/noise/__init__.py | PrincetonUniversity/ASPIRE-Python | 1bff8d3884183203bd77695a76bccb1efc909fd3 | [
"MIT"
] | 2 | 2019-06-04T17:01:53.000Z | 2019-07-08T19:01:40.000Z | from .noise import AnisotropicNoiseEstimator, NoiseEstimator, WhiteNoiseEstimator
| 41 | 81 | 0.890244 | 6 | 82 | 12.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 82 | 1 | 82 | 82 | 0.960526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20983a319c037aa2db66acd2fd67cb81556be36d | 26 | py | Python | SapLogonPad/__init__.py | jduncan8142/sap_gui_robot_framework | 01fd8f59548afd643f37009967a8a5183654fe12 | [
"MIT"
] | null | null | null | SapLogonPad/__init__.py | jduncan8142/sap_gui_robot_framework | 01fd8f59548afd643f37009967a8a5183654fe12 | [
"MIT"
] | null | null | null | SapLogonPad/__init__.py | jduncan8142/sap_gui_robot_framework | 01fd8f59548afd643f37009967a8a5183654fe12 | [
"MIT"
] | null | null | null | from .SapLogonPad import * | 26 | 26 | 0.807692 | 3 | 26 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20a11f7f875bc7586eaef2cca0f536f1e869ee64 | 40 | py | Python | app/database/__init__.py | WebPractices/Eager | ce509dbbd65199e70b76aad82ba740e80864ac65 | [
"MIT"
] | null | null | null | app/database/__init__.py | WebPractices/Eager | ce509dbbd65199e70b76aad82ba740e80864ac65 | [
"MIT"
] | null | null | null | app/database/__init__.py | WebPractices/Eager | ce509dbbd65199e70b76aad82ba740e80864ac65 | [
"MIT"
] | null | null | null | from .MongodbClient import MongodbClient | 40 | 40 | 0.9 | 4 | 40 | 9 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20a22f23d45bbc1223e6bd47a32b6e6be6c1c1d6 | 255 | py | Python | cs1006-IntroPython/HW4-ObjectInheritance/assignment/assignment.py | ecahern16/AcademicCode | cf95a65545e7054604c23d4830f709323eeb81f5 | [
"Apache-2.0"
] | null | null | null | cs1006-IntroPython/HW4-ObjectInheritance/assignment/assignment.py | ecahern16/AcademicCode | cf95a65545e7054604c23d4830f709323eeb81f5 | [
"Apache-2.0"
] | null | null | null | cs1006-IntroPython/HW4-ObjectInheritance/assignment/assignment.py | ecahern16/AcademicCode | cf95a65545e7054604c23d4830f709323eeb81f5 | [
"Apache-2.0"
] | null | null | null | class Assignment(object):
def __init__(self, difficulty):
self.difficulty = difficulty
self.grades = []
def __repr__(self):
return "\tAssignment Difficulty: {}\n\tStudent Count:{}".format(self.difficulty, len(self.grades)) | 36.428571 | 106 | 0.662745 | 27 | 255 | 5.962963 | 0.592593 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 255 | 7 | 106 | 36.428571 | 0.789216 | 0 | 0 | 0 | 0 | 0 | 0.183594 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
20bfa36eb0123a968011acc9bd3df35d7f437a00 | 3,999 | py | Python | py_stringsimjoin/tests/test_profiler.py | guptaarth87/py_stringsimjoin | f8c2d1e380d39a491b887684ae734a5b7cbda24c | [
"BSD-3-Clause"
] | 34 | 2016-06-20T18:11:40.000Z | 2022-01-02T16:44:47.000Z | py_stringsimjoin/tests/test_profiler.py | guptaarth87/py_stringsimjoin | f8c2d1e380d39a491b887684ae734a5b7cbda24c | [
"BSD-3-Clause"
] | 22 | 2017-08-04T16:48:32.000Z | 2021-05-26T16:54:14.000Z | py_stringsimjoin/tests/test_profiler.py | guptaarth87/py_stringsimjoin | f8c2d1e380d39a491b887684ae734a5b7cbda24c | [
"BSD-3-Clause"
] | 19 | 2016-07-28T04:55:39.000Z | 2021-06-21T13:26:08.000Z |
import unittest
from nose.tools import assert_equal, assert_list_equal, raises
import pandas as pd
from py_stringsimjoin.profiler.profiler import profile_table_for_join
class ProfileTableForJoinTestCases(unittest.TestCase):
def setUp(self):
self.table = pd.DataFrame([('1', 'data science'),
('2', None),
('3', 'data integration'),
('4', ''),
('5', 'data science')],
columns = ['id', 'attr'])
def test_profile_table_for_join(self):
profile_output = profile_table_for_join(self.table)
expected_output_attrs = ['Unique values', 'Missing values', 'Comments']
# verify whether the output dataframe has the necessary attributes.
assert_list_equal(list(profile_output.columns.values),
expected_output_attrs)
expected_unique_column = ['5 (100.0%)', '4 (80.0%)']
# verify whether correct values are present in 'Unique values' column.
assert_list_equal(list(profile_output['Unique values']),
expected_unique_column)
expected_missing_column = ['0 (0.0%)', '1 (20.0%)']
# verify whether correct values are present in 'Missing values' column.
assert_list_equal(list(profile_output['Missing values']),
expected_missing_column)
expected_comments = ['This attribute can be used as a key attribute.',
'Joining on this attribute will ignore 1 (20.0%) rows.']
# verify whether correct values are present in 'Comments' column.
assert_list_equal(list(profile_output['Comments']),
expected_comments)
# verify whether index name is set correctly in the output dataframe.
assert_equal(profile_output.index.name, 'Attribute')
expected_index_column = ['id', 'attr']
# verify whether correct values are present in the dataframe index.
assert_list_equal(list(profile_output.index.values),
expected_index_column)
def test_profile_table_for_join_with_profile_attrs(self):
profile_output = profile_table_for_join(self.table, ['attr'])
expected_output_attrs = ['Unique values', 'Missing values', 'Comments']
# verify whether the output dataframe has the necessary attributes.
assert_list_equal(list(profile_output.columns.values),
expected_output_attrs)
expected_unique_column = ['4 (80.0%)']
# verify whether correct values are present in 'Unique values' column.
assert_list_equal(list(profile_output['Unique values']),
expected_unique_column)
expected_missing_column = ['1 (20.0%)']
# verify whether correct values are present in 'Missing values' column.
assert_list_equal(list(profile_output['Missing values']),
expected_missing_column)
expected_comments = ['Joining on this attribute will ignore 1 (20.0%) rows.']
# verify whether correct values are present in 'Comments' column.
assert_list_equal(list(profile_output['Comments']),
expected_comments)
# verify whether index name is set correctly in the output dataframe.
assert_equal(profile_output.index.name, 'Attribute')
expected_index_column = ['attr']
# verify whether correct values are present in the dataframe index.
assert_list_equal(list(profile_output.index.values),
expected_index_column)
@raises(TypeError)
def test_profile_table_for_join_invalid_table(self):
profile_table_for_join([])
@raises(AssertionError)
def test_profile_table_for_join_invalid_profile_attr(self):
profile_table_for_join(self.table, ['id', 'invalid_attr'])
| 44.433333 | 85 | 0.630908 | 448 | 3,999 | 5.372768 | 0.183036 | 0.075613 | 0.06855 | 0.078936 | 0.828833 | 0.817615 | 0.784379 | 0.756959 | 0.756959 | 0.719568 | 0 | 0.01287 | 0.28107 | 3,999 | 89 | 86 | 44.932584 | 0.824348 | 0.2013 | 0 | 0.436364 | 0 | 0 | 0.139352 | 0 | 0 | 0 | 0 | 0 | 0.254545 | 1 | 0.090909 | false | 0 | 0.072727 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
20cc2b894aee91f0638c9b714749d69fa1d79da8 | 5,202 | py | Python | tests/handyspark/sql/test_string.py | FoundryAI/handyspark | bf23522eb0794cce8af2f036347b34df1a2c7b09 | [
"MIT"
] | null | null | null | tests/handyspark/sql/test_string.py | FoundryAI/handyspark | bf23522eb0794cce8af2f036347b34df1a2c7b09 | [
"MIT"
] | null | null | null | tests/handyspark/sql/test_string.py | FoundryAI/handyspark | bf23522eb0794cce8af2f036347b34df1a2c7b09 | [
"MIT"
] | null | null | null | import numpy.testing as npt
from handyspark import *
# integer returns
def test_count(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.count(pat='Mr.'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.count(pat='Mr.')[:20]
npt.assert_array_equal(hres, res)
def test_find(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.find(sub='Mr.'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.find(sub='Mr.')[:20]
npt.assert_array_equal(hres, res)
def test_len(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.len())
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.len()[:20]
npt.assert_array_equal(hres, res)
def test_rfind(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.rfind(sub='Mr.'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.rfind(sub='Mr.')[:20]
npt.assert_array_equal(hres, res)
# boolean returns
def test_contains(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.contains(pat='Mr.'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.contains(pat='Mr.')[:20]
npt.assert_array_equal(hres, res)
def test_startswith(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.startswith(pat='Mr.'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.startswith(pat='Mr.')[:20]
npt.assert_array_equal(hres, res)
def test_match(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.match(pat='Mr.'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.match(pat='Mr.')[:20]
npt.assert_array_equal(hres, res)
def test_isalpha(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.isalpha())
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.isalpha()[:20]
npt.assert_array_equal(hres, res)
# string returns
def test_replace(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.replace(pat='Mr.', repl='Mister'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.replace(pat='Mr.', repl='Mister')[:20]
npt.assert_array_equal(hres, res)
def test_repeat(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.repeat(repeats=2))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.repeat(repeats=2)[:20]
npt.assert_array_equal(hres, res)
def test_join(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.join(sep=','))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.join(sep=',')[:20]
npt.assert_array_equal(hres, res)
def test_pad(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.pad(width=20))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.pad(width=20)[:20]
npt.assert_array_equal(hres, res)
def test_slice(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.slice(start=5, stop=10))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.slice(start=5, stop=10)[:20]
npt.assert_array_equal(hres, res)
def test_slice_replace(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.slice_replace(start=5, stop=10, repl='X'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.slice_replace(start=5, stop=10, repl='X')[:20]
npt.assert_array_equal(hres, res)
def test_strip(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.strip())
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.strip()[:20]
npt.assert_array_equal(hres, res)
def test_wrap(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.wrap(width=5))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.wrap(width=5)[:20]
npt.assert_array_equal(hres, res)
def test_get(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.get(i=5))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.get(i=5)[:20]
npt.assert_array_equal(hres, res)
def test_center(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.center(width=10))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.center(width=10)[:20]
npt.assert_array_equal(hres, res)
def test_zfill(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.zfill(width=20))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.zfill(width=20)[:20]
npt.assert_array_equal(hres, res)
def test_normalize(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.normalize(form='NFKD'))
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.normalize(form='NFKD')[:20]
npt.assert_array_equal(hres, res)
def test_upper(sdf, pdf):
hdf = sdf.toHandy()
hdf = hdf.assign(newcol=hdf.pandas['Name'].str.upper())
hres = hdf.cols['newcol'][:20]
res = pdf['Name'].str.upper()[:20]
npt.assert_array_equal(hres, res)
| 34 | 93 | 0.62995 | 813 | 5,202 | 3.94957 | 0.081181 | 0.09156 | 0.05886 | 0.07848 | 0.920274 | 0.857988 | 0.857988 | 0.820617 | 0.820617 | 0.653379 | 0 | 0.026033 | 0.158208 | 5,202 | 152 | 94 | 34.223684 | 0.707239 | 0.008843 | 0 | 0.492188 | 0 | 0 | 0.069876 | 0 | 0 | 0 | 0 | 0 | 0.164063 | 1 | 0.164063 | false | 0 | 0.015625 | 0 | 0.179688 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
458dc3922452db4900f7ebfe5592a380eda01c2d | 43 | py | Python | auth_ms/authApp/serializers/__init__.py | jonathangil97/4a-docs | 66d6fed0416d9b02bbaf22334a67b323fe8cd692 | [
"Unlicense"
] | null | null | null | auth_ms/authApp/serializers/__init__.py | jonathangil97/4a-docs | 66d6fed0416d9b02bbaf22334a67b323fe8cd692 | [
"Unlicense"
] | null | null | null | auth_ms/authApp/serializers/__init__.py | jonathangil97/4a-docs | 66d6fed0416d9b02bbaf22334a67b323fe8cd692 | [
"Unlicense"
] | null | null | null |
from .userSerializer import UserSerializer | 21.5 | 42 | 0.883721 | 4 | 43 | 9.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 2 | 42 | 21.5 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
45ce92066d00c050ab47e1771778a38de467c89d | 1,370 | py | Python | main.py | miniprime1/not-python | 2e7877d8af1bae8a16ce5e6c50a220ce3f1396bf | [
"0BSD"
] | 1 | 2021-04-24T02:30:18.000Z | 2021-04-24T02:30:18.000Z | main.py | miniprime1/not-python | 2e7877d8af1bae8a16ce5e6c50a220ce3f1396bf | [
"0BSD"
] | null | null | null | main.py | miniprime1/not-python | 2e7877d8af1bae8a16ce5e6c50a220ce3f1396bf | [
"0BSD"
] | null | null | null | import platform
import sys
if platform.system() == "Windows":
print('Not Python 3.7.8 (tags/v3.7.8:4b47a5b6ba, Jun 28 2020, 07:55:33) [MSC v.1916 32 bit (Intel)] on win32')
print('Type "help", "copyright", "credits" or "license" for more information.')
while True:
i = input(">>> ")
if i == "exit()": break
print("Nope!")
elif platform.system() == "Drarwin":
print('Not Python 3.7.8 (v3.7.8:4b47a5b6ba, Jun 27 2020, 04:47:50)')
print('[Clang 6.0 (clang-600.0.57)] on darwin')
print('Type "help", "copyright", "credits" or "license" for more information.')
while True:
i = input(">>> ")
if i == "exit()": break
print("Nope!")
elif platform.system() == "Linux":
print('Not Python 3.7.8 (v3.7.8:4b47a5b6ba, Jun 27 2020, 09:53:43)')
print('[GCC 8.4.0] on linux')
print('Type "help", "copyright", "credits" or "license" for more information.')
while True:
i = input(">>> ")
if i == "exit()": break
print("Nope!")
else:
print('Not Python 3.7.8 (v3.7.8:4b47a5b6ba, Jun 27 2020, 09:53:43)')
print('[GCC 8.4.0]', 'on', sys.platform)
print('Type "help", "copyright", "credits" or "license" for more information.')
while True:
i = input(">>> ")
if i == "exit()": break
print("Nope!")
| 36.052632 | 115 | 0.548905 | 195 | 1,370 | 3.85641 | 0.328205 | 0.021277 | 0.074468 | 0.079787 | 0.792553 | 0.769947 | 0.74734 | 0.74734 | 0.74734 | 0.74734 | 0 | 0.111765 | 0.255474 | 1,370 | 37 | 116 | 37.027027 | 0.62549 | 0 | 0 | 0.666667 | 0 | 0.121212 | 0.531133 | 0.018005 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.060606 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b32ceaebb472da9bb7816135a7261447af945ec0 | 2,364 | py | Python | tests/test_csv_writer.py | taylordeatri/phc-sdk-py | 8f3ec6ac44e50c7194f174fd0098de390886693d | [
"MIT"
] | 1 | 2020-07-22T12:46:58.000Z | 2020-07-22T12:46:58.000Z | tests/test_csv_writer.py | taylordeatri/phc-sdk-py | 8f3ec6ac44e50c7194f174fd0098de390886693d | [
"MIT"
] | 54 | 2019-10-09T16:19:04.000Z | 2022-01-19T20:28:59.000Z | tests/test_csv_writer.py | taylordeatri/phc-sdk-py | 8f3ec6ac44e50c7194f174fd0098de390886693d | [
"MIT"
] | 2 | 2019-10-30T19:54:43.000Z | 2020-12-03T18:57:15.000Z | import math
import os
import numpy as np
import pandas as pd
from phc.util.csv_writer import CSVWriter
def setup():
if os.path.exists("/tmp/sample.csv"):
os.remove("/tmp/sample.csv")
def test_writing_batches():
setup()
writer = CSVWriter("/tmp/sample.csv")
first_batch = pd.DataFrame(
[
{"first_name": "Laura", "last_name": "Lane"},
{"first_name": "Susie", "last_name": "Smith"},
]
)
second_batch = pd.DataFrame(
[
{"first_name": "Jenny", "last_name": "Jones"},
{"last_name": "Motte", "date_of_birth": "03/07/1986"},
]
)
writer.write(first_batch)
writer.write(second_batch)
frame = pd.read_csv("/tmp/sample.csv")
assert frame.columns.tolist() == [
"first_name",
"last_name",
"date_of_birth",
]
# Cannot compare NaN - must use math.isnan()
is_nan = np.vectorize(lambda x: isinstance(x, float) and math.isnan(x))
assert np.logical_or(
frame.values
== [
["Laura", "Lane", math.nan],
["Susie", "Smith", math.nan],
["Jenny", "Jones", math.nan],
[math.nan, "Motte", "03/07/1986"],
],
is_nan(frame.values),
).all()
def test_bad_column_names():
setup()
writer = CSVWriter("/tmp/sample.csv")
first_batch = pd.DataFrame(
[
{"first\n_name": "Laura", "last_name": "Lane"},
{"first\n_name": "Susie", "last_name": "Smith"},
]
)
second_batch = pd.DataFrame(
[
{"first_name": "Jenny", "last\t_name": "Jones"},
{"last\t_name": "Motte", "date_of_birth": "03/07/1986"},
]
)
writer.write(first_batch)
writer.write(second_batch)
frame = pd.read_csv("/tmp/sample.csv")
assert frame.columns.tolist() == [
"first_name",
"last_name",
"date_of_birth",
]
# Cannot compare NaN - must use math.isnan()
is_nan = np.vectorize(lambda x: isinstance(x, float) and math.isnan(x))
assert np.logical_or(
frame.values
== [
["Laura", "Lane", math.nan],
["Susie", "Smith", math.nan],
["Jenny", "Jones", math.nan],
[math.nan, "Motte", "03/07/1986"],
],
is_nan(frame.values),
).all()
| 23.405941 | 75 | 0.528765 | 278 | 2,364 | 4.323741 | 0.258993 | 0.053245 | 0.0599 | 0.069884 | 0.825291 | 0.825291 | 0.78203 | 0.78203 | 0.78203 | 0.78203 | 0 | 0.019335 | 0.299915 | 2,364 | 100 | 76 | 23.64 | 0.706949 | 0.035956 | 0 | 0.578947 | 0 | 0 | 0.217926 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.039474 | false | 0 | 0.065789 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b332c313667ad32a1bcf74fe561ef5efeee351ed | 38 | py | Python | cloudr/utils/filetype/__init__.py | lee88688/cloud-file-manager | d3be3021a28213d08da95733d4b2b3f271de7bbe | [
"MIT"
] | 5 | 2018-07-22T14:04:55.000Z | 2019-12-27T12:51:46.000Z | cloudr/utils/filetype/__init__.py | lee88688/cloud-file-manager | d3be3021a28213d08da95733d4b2b3f271de7bbe | [
"MIT"
] | 9 | 2018-07-19T08:30:41.000Z | 2020-07-07T19:25:44.000Z | cloudr/utils/filetype/__init__.py | lee88688/cloud-file-manager | d3be3021a28213d08da95733d4b2b3f271de7bbe | [
"MIT"
] | 1 | 2020-01-23T08:44:30.000Z | 2020-01-23T08:44:30.000Z | from .filetype import check_file_type
| 19 | 37 | 0.868421 | 6 | 38 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
640355f269e7339c0ceb7bd08850b1be905c7d77 | 10,267 | py | Python | tests/lib2to3/test_convert_response.py | AlainMaesNokia/openapi | ea3534efdaeaf82989521d0725773cdf5ed5ea1a | [
"BSD-2-Clause"
] | 64 | 2018-12-13T19:46:47.000Z | 2022-03-25T05:25:56.000Z | tests/lib2to3/test_convert_response.py | AlainMaesNokia/openapi | ea3534efdaeaf82989521d0725773cdf5ed5ea1a | [
"BSD-2-Clause"
] | 71 | 2018-12-07T10:18:56.000Z | 2022-01-18T14:31:08.000Z | tests/lib2to3/test_convert_response.py | AlainMaesNokia/openapi | ea3534efdaeaf82989521d0725773cdf5ed5ea1a | [
"BSD-2-Clause"
] | 40 | 2018-12-19T22:57:13.000Z | 2022-03-27T14:09:14.000Z | """.convert_response() test suite."""
import pytest
import sphinxcontrib.openapi._lib2to3 as lib2to3
@pytest.fixture(scope="function")
def convert_response(oas_fragment):
def _wrapper(response, produces):
oas2 = oas_fragment(
"""
swagger: "2.0"
info:
title: An example spec
version: "1.0"
paths:
/test:
get:
responses:
'200':
description: a response description
"""
)
oas2["paths"]["/test"]["get"]["responses"]["200"] = response
oas2["paths"]["/test"]["get"]["produces"] = produces
oas3 = lib2to3.convert(oas2)
return oas3["paths"]["/test"]["get"]["responses"]["200"]
return _wrapper
def test_minimal(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
description: a response description
"""
)
def test_schema(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
schema:
items:
format: int32
type: integer
type: array
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
content:
application/json:
schema:
items:
format: int32
type: integer
type: array
description: a response description
"""
)
def test_schema_mimetypes(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
schema:
items:
format: int32
type: integer
type: array
"""
),
produces=["application/json", "text/plain"],
)
assert converted == oas_fragment(
"""
content:
application/json:
schema:
items:
format: int32
type: integer
type: array
text/plain:
schema:
items:
format: int32
type: integer
type: array
description: a response description
"""
)
def test_schema_no_mimetypes(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
schema:
items:
format: int32
type: integer
type: array
"""
),
produces=None,
)
assert converted == oas_fragment(
"""
content:
'*/*':
schema:
items:
format: int32
type: integer
type: array
description: a response description
"""
)
def test_examples(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
examples:
application/json:
something: important
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
content:
application/json:
example:
something: important
description: a response description
"""
)
def test_examples_any_type(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
examples:
application/json: '{"something": "important"}'
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
content:
application/json:
example: '{"something": "important"}'
description: a response description
"""
)
def test_examples_mimetypes(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
examples:
application/json:
something: important
text/plain: something=imporant
"""
),
produces=["application/json", "text/plain"],
)
assert converted == oas_fragment(
"""
content:
application/json:
example:
something: important
text/plain:
example: something=imporant
description: a response description
"""
)
def test_headers_schema_only(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
headers:
X-Test:
type: string
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
description: a response description
headers:
X-Test:
schema:
type: string
"""
)
def test_headers_schema_extra(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
headers:
X-Test:
description: Is it a test?
type: string
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
description: a response description
headers:
X-Test:
description: Is it a test?
schema:
type: string
"""
)
def test_headers_multiple(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
headers:
X-Bar:
format: int32
type: integer
X-Foo:
type: string
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
description: a response description
headers:
X-Bar:
schema:
format: int32
type: integer
X-Foo:
schema:
type: string
"""
)
def test_schema_examples_headers(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
examples:
application/json:
something: important
headers:
X-Test:
description: Is it a test?
type: string
schema:
items:
format: int32
type: integer
type: array
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
description: a response description
content:
application/json:
example:
something: important
schema:
items:
format: int32
type: integer
type: array
headers:
X-Test:
description: Is it a test?
schema:
type: string
"""
)
def test_complete(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
examples:
application/json:
something: important
headers:
X-Test:
description: Is it a test?
type: string
schema:
items:
format: int32
type: integer
type: array
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
description: a response description
content:
application/json:
example:
something: important
schema:
items:
format: int32
type: integer
type: array
headers:
X-Test:
description: Is it a test?
schema:
type: string
"""
)
def test_vendor_extensions(convert_response, oas_fragment):
converted = convert_response(
oas_fragment(
"""
description: a response description
examples:
application/json:
something: important
headers:
X-Test:
description: Is it a test?
type: string
x-header-ext: header-ext
schema:
items:
format: int32
type: integer
type: array
x-schema-ext: schema-ext
x-response-ext: response-ext
"""
),
produces=["application/json"],
)
assert converted == oas_fragment(
"""
description: a response description
content:
application/json:
example:
something: important
schema:
items:
format: int32
type: integer
type: array
x-schema-ext: schema-ext
headers:
X-Test:
description: Is it a test?
schema:
type: string
x-header-ext: header-ext
x-response-ext: response-ext
"""
)
| 24.562201 | 68 | 0.472387 | 779 | 10,267 | 6.098845 | 0.096277 | 0.094927 | 0.102294 | 0.147758 | 0.891391 | 0.86445 | 0.835824 | 0.815618 | 0.815618 | 0.814355 | 0 | 0.009671 | 0.446089 | 10,267 | 417 | 69 | 24.621103 | 0.825743 | 0.003019 | 0 | 0.551724 | 0 | 0 | 0.081126 | 0 | 0 | 0 | 0 | 0 | 0.112069 | 1 | 0.12931 | false | 0 | 0.017241 | 0 | 0.163793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
641d1f908ba1f681e8cd3c410f11e9ab2ef8ac66 | 28 | py | Python | categorical/__init__.py | enewe101/categorical | 09fcc24d7a6ca3433b5d29d54c51770cc77a5627 | [
"MIT"
] | 10 | 2016-07-15T06:57:21.000Z | 2021-12-12T16:56:19.000Z | categorical/__init__.py | enewe101/categorical | 09fcc24d7a6ca3433b5d29d54c51770cc77a5627 | [
"MIT"
] | 1 | 2016-07-25T03:13:54.000Z | 2016-07-25T03:18:18.000Z | categorical/__init__.py | enewe101/categorical | 09fcc24d7a6ca3433b5d29d54c51770cc77a5627 | [
"MIT"
] | 4 | 2017-05-01T20:06:04.000Z | 2022-02-06T10:38:18.000Z | from cat import Categorical
| 14 | 27 | 0.857143 | 4 | 28 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
64206ca3bd9f06d4431187392b8ee31d8b31f748 | 2,109 | py | Python | models/GCN.py | KrishnaswamyLab/LearnableScattering | b8aec1a98de61ddf9a6aa118951b8507ce0bd987 | [
"MIT"
] | 2 | 2021-11-02T10:43:01.000Z | 2021-11-03T17:10:56.000Z | models/GCN.py | KrishnaswamyLab/LearnableScattering | b8aec1a98de61ddf9a6aa118951b8507ce0bd987 | [
"MIT"
] | null | null | null | models/GCN.py | KrishnaswamyLab/LearnableScattering | b8aec1a98de61ddf9a6aa118951b8507ce0bd987 | [
"MIT"
] | null | null | null | import torch
from torch.nn import Linear
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.nn import global_mean_pool
class GCN(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels, return_embedding=False):
super(GCN, self).__init__()
self.conv1 = GCNConv(in_channels, hidden_channels)
self.conv2 = GCNConv(hidden_channels, hidden_channels)
self.conv3 = GCNConv(hidden_channels, hidden_channels)
self.lin = Linear(hidden_channels, out_channels)
self.return_embedding = return_embedding
def forward(self, data):
x, edge_index, batch = data.x, data.edge_index, data.batch
# 1. Obtain node embeddings
x = self.conv1(x, edge_index)
x = x.relu()
x = self.conv2(x, edge_index)
x = x.relu()
x = self.conv3(x, edge_index)
# 2. Readout layer
x = global_mean_pool(x, batch) # [batch_size, hidden_channels]
# 3. Apply a final classifier
# x = F.dropout(x, p=0.5, training=self.training)
y = self.lin(x)
if self.return_embedding:
return y, x
return y
class GCN_Headless(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels, return_embedding=False):
super(GCN_Headless, self).__init__()
self.conv1 = GCNConv(in_channels, hidden_channels)
self.conv2 = GCNConv(hidden_channels, hidden_channels)
self.conv3 = GCNConv(hidden_channels, hidden_channels)
self.return_embedding = return_embedding
def forward(self, data):
x, edge_index, batch = data.x, data.edge_index, data.batch
# 1. Obtain node embeddings
x = self.conv1(x, edge_index)
x = x.relu()
x = self.conv2(x, edge_index)
x = x.relu()
x = self.conv3(x, edge_index)
# 2. Readout layer
x = global_mean_pool(x, batch) # [batch_size, hidden_channels]
# 3. Apply a final classifier
# x = F.dropout(x, p=0.5, training=self.training)
return x
| 34.016129 | 91 | 0.6477 | 287 | 2,109 | 4.526132 | 0.205575 | 0.161663 | 0.135489 | 0.120092 | 0.852964 | 0.812933 | 0.812933 | 0.812933 | 0.812933 | 0.812933 | 0 | 0.013968 | 0.253201 | 2,109 | 61 | 92 | 34.57377 | 0.810794 | 0.140825 | 0 | 0.634146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097561 | false | 0 | 0.121951 | 0 | 0.341463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
643e2422da3dd97b46ad6acac2b64604233f553c | 45 | py | Python | aiotba/__init__.py | tweirtx/aiotba | 31b003780ca74cf2ab75f724997751aef4df78af | [
"MIT"
] | null | null | null | aiotba/__init__.py | tweirtx/aiotba | 31b003780ca74cf2ab75f724997751aef4df78af | [
"MIT"
] | null | null | null | aiotba/__init__.py | tweirtx/aiotba | 31b003780ca74cf2ab75f724997751aef4df78af | [
"MIT"
] | 2 | 2019-12-16T05:30:49.000Z | 2022-01-17T04:02:25.000Z | from . import *
from .http import TBASession
| 15 | 28 | 0.755556 | 6 | 45 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 29 | 22.5 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
64410ff4baeacd1dbcba9593feb838b78f953300 | 35 | py | Python | CTFd/utils/helpers/__init__.py | AIica/Crypto-2020 | 8980fdd3c20651eb6fd4a66fa72c1f9099dc23d7 | [
"Apache-2.0"
] | null | null | null | CTFd/utils/helpers/__init__.py | AIica/Crypto-2020 | 8980fdd3c20651eb6fd4a66fa72c1f9099dc23d7 | [
"Apache-2.0"
] | null | null | null | CTFd/utils/helpers/__init__.py | AIica/Crypto-2020 | 8980fdd3c20651eb6fd4a66fa72c1f9099dc23d7 | [
"Apache-2.0"
] | null | null | null | from flask import url_for, request
| 17.5 | 34 | 0.828571 | 6 | 35 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 1 | 35 | 35 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
644259f4696e76032e1aa23fb89e8be7bfbb06c6 | 169 | py | Python | modules/shared/infrastructure/log/__init__.py | eduardolujan/hexagonal_architecture_django | 8055927cb460bc40f3a2651c01a9d1da696177e8 | [
"BSD-3-Clause"
] | 6 | 2020-08-09T23:41:08.000Z | 2021-03-16T22:05:40.000Z | modules/shared/infrastructure/log/__init__.py | eduardolujan/hexagonal_architecture_django | 8055927cb460bc40f3a2651c01a9d1da696177e8 | [
"BSD-3-Clause"
] | 1 | 2020-10-02T02:59:38.000Z | 2020-10-02T02:59:38.000Z | modules/shared/infrastructure/log/__init__.py | eduardolujan/hexagonal_architecture_django | 8055927cb460bc40f3a2651c01a9d1da696177e8 | [
"BSD-3-Clause"
] | 2 | 2021-03-16T22:05:43.000Z | 2021-04-30T06:35:25.000Z | from .logger_decorator import LoggerDecorator
from .pylogger_service import PyLoggerService, get_logger
__all__ = ('LoggerDecorator', 'PyLoggerService', 'get_logger')
| 28.166667 | 62 | 0.822485 | 17 | 169 | 7.705882 | 0.588235 | 0.274809 | 0.366412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094675 | 169 | 5 | 63 | 33.8 | 0.856209 | 0 | 0 | 0 | 0 | 0 | 0.236686 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ff532768ba704be77b8e60bb9f96f87423297a2c | 180 | py | Python | anomaly/isolation/column_selection.py | cubonacci/mixed-anomaly | b4cfa5bbae8429585214b1d2dfeac864d49a3158 | [
"Apache-2.0"
] | 21 | 2019-05-11T10:15:23.000Z | 2021-07-12T06:23:06.000Z | anomaly/isolation/column_selection.py | seah-kjs/mixed-anomaly | b4cfa5bbae8429585214b1d2dfeac864d49a3158 | [
"Apache-2.0"
] | 1 | 2019-11-25T13:18:49.000Z | 2019-11-25T13:18:49.000Z | anomaly/isolation/column_selection.py | seah-kjs/mixed-anomaly | b4cfa5bbae8429585214b1d2dfeac864d49a3158 | [
"Apache-2.0"
] | 13 | 2019-05-11T10:15:37.000Z | 2021-09-09T01:21:05.000Z | from typing import List
import pandas as pd
import numpy as np
def random_selector(diverse_columns: List[str], data: pd.DataFrame):
return np.random.choice(diverse_columns)
| 20 | 68 | 0.783333 | 28 | 180 | 4.928571 | 0.678571 | 0.202899 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144444 | 180 | 8 | 69 | 22.5 | 0.896104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.6 | 0.2 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
ff5f5bc2c8384249d158c54c598c979941c2076b | 13,651 | py | Python | tests/test_model.py | broadinstitute/deepometry | 8187133bb09e8a18d08151c779f48cc2e4976d06 | [
"BSD-3-Clause"
] | 21 | 2017-09-14T10:33:23.000Z | 2022-01-23T01:06:18.000Z | tests/test_model.py | broadinstitute/deepometry | 8187133bb09e8a18d08151c779f48cc2e4976d06 | [
"BSD-3-Clause"
] | 15 | 2017-05-23T15:37:10.000Z | 2018-03-23T03:20:57.000Z | tests/test_model.py | broadinstitute/deepometry | 8187133bb09e8a18d08151c779f48cc2e4976d06 | [
"BSD-3-Clause"
] | 9 | 2017-08-09T18:12:59.000Z | 2019-01-23T11:38:41.000Z | import csv
import os.path
import keras
import keras_resnet.models
import numpy
import pkg_resources
import pytest
import deepometry.image.iterator
import deepometry.model
@pytest.fixture()
def data_dir(tmpdir):
return tmpdir.mkdir("data")
def test_init():
model = deepometry.model.Model(shape=(48, 48, 3), units=4, directory="/home/mugatu/models", name="zoolander")
assert model.model.input.shape.as_list() == [None, 48, 48, 3]
assert model.model.output.shape.as_list() == [None, 4]
assert model.directory == "/home/mugatu/models"
assert model.name == "zoolander"
def test_compile():
model = deepometry.model.Model(shape=(48, 48, 3), units=4)
model.compile()
assert model.model.loss == "categorical_crossentropy"
assert model.model.metrics == ["accuracy"]
assert isinstance(model.model.optimizer, keras.optimizers.Adam)
def test_fit_defaults(data_dir, mocker):
numpy.random.seed(53)
x = numpy.random.randint(256, size=(100, 48, 48, 3))
y = numpy.random.randint(4, size=(100,))
with mocker.patch("keras_resnet.models.ResNet50") as model_mock:
keras_resnet.models.ResNet50.return_value = model_mock
resources = mocker.patch("pkg_resources.resource_filename")
resources.side_effect = lambda _, filename: str(data_dir.join(os.path.basename(filename)))
model = deepometry.model.Model(shape=(48, 48, 3), units=4)
model.compile()
model.fit(
x,
y,
batch_size=10,
epochs=1,
validation_split=0.1,
verbose=0
)
model_mock.fit_generator.assert_called_once_with(
callbacks=mocker.ANY,
class_weight=mocker.ANY,
epochs=1,
generator=mocker.ANY,
steps_per_epoch=9,
validation_data=mocker.ANY,
validation_steps=1,
verbose=0
)
_, kwargs = model_mock.fit_generator.call_args
# callbacks
callbacks = kwargs["callbacks"]
assert len(callbacks) == 4
assert isinstance(callbacks[0], keras.callbacks.CSVLogger)
assert callbacks[0].filename == pkg_resources.resource_filename(
"deepometry",
os.path.join("data", "training.csv")
)
assert isinstance(callbacks[1], keras.callbacks.EarlyStopping)
assert isinstance(callbacks[2], keras.callbacks.ModelCheckpoint)
assert callbacks[2].filepath == pkg_resources.resource_filename(
"deepometry",
os.path.join("data", "checkpoint.hdf5")
)
assert isinstance(callbacks[3], keras.callbacks.ReduceLROnPlateau)
# generator
generator = kwargs["generator"]
assert isinstance(generator, deepometry.image.iterator.NumpyArrayIterator)
assert generator.batch_size == 10
assert generator.x.shape == (90, 48, 48, 3)
assert generator.image_data_generator.height_shift_range == 0.5
assert generator.image_data_generator.horizontal_flip == True
assert generator.image_data_generator.rotation_range == 180
assert generator.image_data_generator.vertical_flip == True
assert generator.image_data_generator.width_shift_range == 0.5
x_train = generator.x
sample = x_train[0]
expected = numpy.empty((48, 48, 3))
expected[:, :, 0] = (sample[:, :, 0] - numpy.mean(x_train[:, :, :, 0]) + 255.0) / (2.0 * 255.0)
expected[:, :, 1] = (sample[:, :, 1] - numpy.mean(x_train[:, :, :, 1]) + 255.0) / (2.0 * 255.0)
expected[:, :, 2] = (sample[:, :, 2] - numpy.mean(x_train[:, :, :, 2]) + 255.0) / (2.0 * 255.0)
actual = generator.image_data_generator.preprocessing_function(sample)
numpy.testing.assert_array_almost_equal(actual, expected, decimal=5)
# validation_data
validation_data = kwargs["validation_data"]
assert isinstance(validation_data, deepometry.image.iterator.NumpyArrayIterator)
assert validation_data.batch_size == 10
assert validation_data.x.shape == (10, 48, 48, 3)
assert validation_data.image_data_generator.height_shift_range == 0.5
assert validation_data.image_data_generator.horizontal_flip == True
assert validation_data.image_data_generator.rotation_range == 180
assert validation_data.image_data_generator.vertical_flip == True
assert validation_data.image_data_generator.width_shift_range == 0.5
x_valid = validation_data.x
sample = x_valid[0]
expected = numpy.empty((48, 48, 3))
expected[:, :, 0] = (sample[:, :, 0] - numpy.mean(x_train[:, :, :, 0]) + 255.0) / (2.0 * 255.0)
expected[:, :, 1] = (sample[:, :, 1] - numpy.mean(x_train[:, :, :, 1]) + 255.0) / (2.0 * 255.0)
expected[:, :, 2] = (sample[:, :, 2] - numpy.mean(x_train[:, :, :, 2]) + 255.0) / (2.0 * 255.0)
actual = generator.image_data_generator.preprocessing_function(sample)
numpy.testing.assert_array_almost_equal(actual, expected, decimal=5)
def test_fit_named_model(data_dir, mocker):
numpy.random.seed(53)
x = numpy.random.randint(256, size=(100, 48, 48, 3))
y = numpy.random.randint(4, size=(100,))
with mocker.patch("keras_resnet.models.ResNet50") as model_mock:
keras_resnet.models.ResNet50.return_value = model_mock
resources = mocker.patch("pkg_resources.resource_filename")
resources.side_effect = lambda _, filename: str(data_dir.join(os.path.basename(filename)))
model = deepometry.model.Model(shape=(48, 48, 3), units=4, name="zoolander")
model.compile()
model.fit(
x,
y,
batch_size=10,
epochs=1,
validation_split=0.1,
verbose=0
)
_, kwargs = model_mock.fit_generator.call_args
# callbacks
callbacks = kwargs["callbacks"]
assert callbacks[0].filename == pkg_resources.resource_filename(
"deepometry",
os.path.join("data", "zoolander_training.csv")
)
assert callbacks[2].filepath == pkg_resources.resource_filename(
"deepometry",
os.path.join("data", "zoolander_checkpoint.hdf5")
)
assert os.path.exists(
pkg_resources.resource_filename(
"deepometry",
os.path.join("data", "zoolander_means.csv")
)
)
def test_fit_named_directory(data_dir, mocker):
numpy.random.seed(53)
x = numpy.random.randint(256, size=(100, 48, 48, 3))
y = numpy.random.randint(4, size=(100,))
with mocker.patch("keras_resnet.models.ResNet50") as model_mock:
keras_resnet.models.ResNet50.return_value = model_mock
model_directory = str(data_dir.mkdir("models"))
model = deepometry.model.Model(shape=(48, 48, 3), units=4, directory=model_directory)
model.compile()
model.fit(
x,
y,
batch_size=10,
epochs=1,
validation_split=0.1,
verbose=0
)
_, kwargs = model_mock.fit_generator.call_args
# callbacks
callbacks = kwargs["callbacks"]
assert callbacks[0].filename == os.path.join(model_directory, "training.csv")
assert callbacks[2].filepath == os.path.join(model_directory, "checkpoint.hdf5")
assert os.path.exists(os.path.join(model_directory, "means.csv"))
def test_evaluate_defaults(data_dir, mocker):
x = numpy.random.randint(256, size=(100, 48, 48, 3)).astype(numpy.float64)
y = numpy.random.randint(4, size=(100,))
meanscsv = str(data_dir.join("means.csv"))
with open(meanscsv, "w") as csvfile:
writer = csv.writer(csvfile)
writer.writerow([125.3, 127.12, 121.9])
expected_samples = x.copy()
expected_samples[:, :, :, 0] = (expected_samples[:, :, :, 0] - 125.3 + 255.0) / (2.0 * 255.0)
expected_samples[:, :, :, 1] = (expected_samples[:, :, :, 1] - 127.12 + 255.0) / (2.0 * 255.0)
expected_samples[:, :, :, 2] = (expected_samples[:, :, :, 2] - 121.9 + 255.0) / (2.0 * 255.0)
expected_targets = keras.utils.to_categorical(y, 4)
with mocker.patch("keras_resnet.models.ResNet50") as model_mock:
keras_resnet.models.ResNet50.return_value = model_mock
resources = mocker.patch("pkg_resources.resource_filename")
resources.side_effect = lambda _, filename: str(data_dir.join(os.path.basename(filename)))
model = deepometry.model.Model(shape=(48, 48, 3), units=4)
model.compile()
model.evaluate(
x,
y,
batch_size=10,
verbose=0
)
model_mock.load_weights.assert_called_once_with(
pkg_resources.resource_filename("deepometry", os.path.join("data", "checkpoint.hdf5"))
)
model_mock.evaluate.assert_called_once_with(
x=mocker.ANY,
y=mocker.ANY,
batch_size=10,
verbose=0
)
_, kwargs = model_mock.evaluate.call_args
samples = kwargs["x"]
assert samples.shape == expected_samples.shape
numpy.testing.assert_array_equal(samples, expected_samples)
targets = kwargs["y"]
assert targets.shape == expected_targets.shape
numpy.testing.assert_array_equal(targets, expected_targets)
def test_evaluate_named_model(data_dir, mocker):
x = numpy.random.randint(256, size=(100, 48, 48, 3)).astype(numpy.float64)
y = numpy.random.randint(4, size=(100,))
meanscsv = str(data_dir.join("zoolander_means.csv"))
with open(meanscsv, "w") as csvfile:
writer = csv.writer(csvfile)
writer.writerow([125.3, 127.12, 121.9])
expected_samples = x.copy()
expected_samples[:, :, :, 0] = (expected_samples[:, :, :, 0] - 125.3 + 255.0) / (2.0 * 255.0)
expected_samples[:, :, :, 1] = (expected_samples[:, :, :, 1] - 127.12 + 255.0) / (2.0 * 255.0)
expected_samples[:, :, :, 2] = (expected_samples[:, :, :, 2] - 121.9 + 255.0) / (2.0 * 255.0)
expected_targets = keras.utils.to_categorical(y, 4)
with mocker.patch("keras_resnet.models.ResNet50") as model_mock:
keras_resnet.models.ResNet50.return_value = model_mock
resources = mocker.patch("pkg_resources.resource_filename")
resources.side_effect = lambda _, filename: str(data_dir.join(os.path.basename(filename)))
model = deepometry.model.Model(shape=(48, 48, 3), units=4, name="zoolander")
model.compile()
model.evaluate(
x,
y,
batch_size=10,
verbose=0
)
model_mock.load_weights.assert_called_once_with(
pkg_resources.resource_filename(
"deepometry",
os.path.join("data", "zoolander_checkpoint.hdf5")
)
)
model_mock.evaluate.assert_called_once_with(
x=mocker.ANY,
y=mocker.ANY,
batch_size=10,
verbose=0
)
_, kwargs = model_mock.evaluate.call_args
samples = kwargs["x"]
assert samples.shape == expected_samples.shape
numpy.testing.assert_array_equal(samples, expected_samples)
targets = kwargs["y"]
assert targets.shape == expected_targets.shape
numpy.testing.assert_array_equal(targets, expected_targets)
def test_evaluate_named_directory(data_dir, mocker):
x = numpy.random.randint(256, size=(100, 48, 48, 3)).astype(numpy.float64)
y = numpy.random.randint(4, size=(100,))
model_directory = data_dir.mkdir("models")
meanscsv = str(model_directory.join("means.csv"))
with open(meanscsv, "w") as csvfile:
writer = csv.writer(csvfile)
writer.writerow([125.3, 127.12, 121.9])
expected_samples = x.copy()
expected_samples[:, :, :, 0] = (expected_samples[:, :, :, 0] - 125.3 + 255.0) / (2.0 * 255.0)
expected_samples[:, :, :, 1] = (expected_samples[:, :, :, 1] - 127.12 + 255.0) / (2.0 * 255.0)
expected_samples[:, :, :, 2] = (expected_samples[:, :, :, 2] - 121.9 + 255.0) / (2.0 * 255.0)
expected_targets = keras.utils.to_categorical(y, 4)
with mocker.patch("keras_resnet.models.ResNet50") as model_mock:
keras_resnet.models.ResNet50.return_value = model_mock
model = deepometry.model.Model(shape=(48, 48, 3), units=4, directory=str(model_directory))
model.compile()
model.evaluate(
x,
y,
batch_size=10,
verbose=0
)
model_mock.load_weights.assert_called_once_with(
os.path.join(str(model_directory), "checkpoint.hdf5")
)
model_mock.evaluate.assert_called_once_with(
x=mocker.ANY,
y=mocker.ANY,
batch_size=10,
verbose=0
)
_, kwargs = model_mock.evaluate.call_args
samples = kwargs["x"]
assert samples.shape == expected_samples.shape
numpy.testing.assert_array_equal(samples, expected_samples)
targets = kwargs["y"]
assert targets.shape == expected_targets.shape
numpy.testing.assert_array_equal(targets, expected_targets)
| 35.549479 | 114 | 0.60252 | 1,621 | 13,651 | 4.887107 | 0.097471 | 0.015148 | 0.011992 | 0.011361 | 0.821383 | 0.787932 | 0.769755 | 0.742489 | 0.733148 | 0.713456 | 0 | 0.054984 | 0.267233 | 13,651 | 383 | 115 | 35.642298 | 0.736979 | 0.004029 | 0 | 0.645161 | 0 | 0 | 0.060044 | 0.029378 | 0 | 0 | 0 | 0 | 0.204301 | 1 | 0.032258 | false | 0 | 0.032258 | 0.003584 | 0.0681 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ff70181693e0d36d5b33661300501479bef86753 | 250 | py | Python | myproject/tests/test_basic_utils.py | ianfaulkner/myproject | d4cb6bf48b9d46b979ddea01c31e794ddea29af0 | [
"BSD-2-Clause"
] | null | null | null | myproject/tests/test_basic_utils.py | ianfaulkner/myproject | d4cb6bf48b9d46b979ddea01c31e794ddea29af0 | [
"BSD-2-Clause"
] | null | null | null | myproject/tests/test_basic_utils.py | ianfaulkner/myproject | d4cb6bf48b9d46b979ddea01c31e794ddea29af0 | [
"BSD-2-Clause"
] | null | null | null | from myproject.basic_utils import multiply_by_10
from numpy import inf
def test_multiply_by_10():
assert multiply_by_10(3) == 30
assert multiply_by_10(-1) == -10
assert multiply_by_10(inf) == inf
assert multiply_by_10(-inf) == -inf
| 25 | 48 | 0.728 | 41 | 250 | 4.097561 | 0.390244 | 0.357143 | 0.428571 | 0.428571 | 0.416667 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0.087379 | 0.176 | 250 | 9 | 49 | 27.777778 | 0.728155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.571429 | 1 | 0.142857 | true | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ffb5236fac2e3693dd3e2e06feada49226d6fc14 | 26 | py | Python | models/super_inverted_perovskites/maple/__init__.py | hyllios/utils | 76f60bada2cda85486077ee9f598147546d5c4be | [
"MIT"
] | null | null | null | models/super_inverted_perovskites/maple/__init__.py | hyllios/utils | 76f60bada2cda85486077ee9f598147546d5c4be | [
"MIT"
] | null | null | null | models/super_inverted_perovskites/maple/__init__.py | hyllios/utils | 76f60bada2cda85486077ee9f598147546d5c4be | [
"MIT"
] | null | null | null | from .MAPLE import MAPLE
| 13 | 25 | 0.769231 | 4 | 26 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 26 | 1 | 26 | 26 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ffba392fae4895b98e688e202d1dee7d4159f65e | 45 | py | Python | day2/stock.py | anishLearnsToCode/python-workshop-4 | d34c312d17c0d4194b2a1d269b8c0e2a56a04699 | [
"MIT"
] | 4 | 2020-07-15T13:47:10.000Z | 2021-09-15T20:28:50.000Z | day2/stock.py | anishLearnsToCode/python-workshop-4 | d34c312d17c0d4194b2a1d269b8c0e2a56a04699 | [
"MIT"
] | null | null | null | day2/stock.py | anishLearnsToCode/python-workshop-4 | d34c312d17c0d4194b2a1d269b8c0e2a56a04699 | [
"MIT"
] | 2 | 2020-07-18T03:28:03.000Z | 2020-11-01T00:56:02.000Z | def getValue(stock_symbol):
return 123
| 9 | 27 | 0.711111 | 6 | 45 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.222222 | 45 | 4 | 28 | 11.25 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
922d96cf70e8ecd9ee508b520c11df00087fb2b4 | 48 | py | Python | taln2016/icsisumm-primary-sys34_v1/nltk/nltk-0.9.2/nltk_contrib/drt/__init__.py | hectormartinez/rougexstem | 32da9eab253cb88fc1882e59026e8b5b40900a25 | [
"Apache-2.0"
] | null | null | null | taln2016/icsisumm-primary-sys34_v1/nltk/nltk-0.9.2/nltk_contrib/drt/__init__.py | hectormartinez/rougexstem | 32da9eab253cb88fc1882e59026e8b5b40900a25 | [
"Apache-2.0"
] | null | null | null | taln2016/icsisumm-primary-sys34_v1/nltk/nltk-0.9.2/nltk_contrib/drt/__init__.py | hectormartinez/rougexstem | 32da9eab253cb88fc1882e59026e8b5b40900a25 | [
"Apache-2.0"
] | null | null | null | from DRT import *
from resolve_anaphora import * | 24 | 30 | 0.8125 | 7 | 48 | 5.428571 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 2 | 30 | 24 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
923b33e841bb6de0030a58a79859c4128897d72b | 7,941 | py | Python | yuansfer/api/recurring_api.py | yuansfer/yuansfer-python-sdk | 4ec2b396ef093d79975dc8f86faeb48b274212da | [
"MIT"
] | null | null | null | yuansfer/api/recurring_api.py | yuansfer/yuansfer-python-sdk | 4ec2b396ef093d79975dc8f86faeb48b274212da | [
"MIT"
] | null | null | null | yuansfer/api/recurring_api.py | yuansfer/yuansfer-python-sdk | 4ec2b396ef093d79975dc8f86faeb48b274212da | [
"MIT"
] | 1 | 2021-05-17T13:40:39.000Z | 2021-05-17T13:40:39.000Z | # -*- coding: utf-8 -*-
from yuansfer.api_helper import APIHelper
from yuansfer.http.api_response import ApiResponse
from yuansfer.api.base_api import BaseApi
from yuansfer.exception import InvalidParamsError
from yuansfer import constant
class RecurringApi(BaseApi):
"""A Controller to access Endpoints in the Yuansfer API."""
def __init__(self, config):
super(RecurringApi, self).__init__(config)
def apply_token(self,
body):
"""POST Request
Args:
body: An object containing the fields to
POST for the request. See the corresponding object definition
for field details.
Returns:
ApiResponse: An object with the response value as well as other
useful information such as status codes and headers. Success
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
# Prepare query URL
_url_path = constant.RECURRING_APPLY_TOKEN
_query_builder = self.config.get_base_uri()
_query_url = _query_builder+_url_path
# Parameters validation
requiredFileds = ['autoDebitNo','grantType']
self.validate_parameter(requiredFileds,body)
# Prepare and execute request
_request = self.config.http_client.post(_query_url, headers=None, parameters=body)
_response = self.execute_request(_request)
if type(_response.response) is not dict:
_errors = _response.reason
else:
_errors = None
_result = ApiResponse(_response, body=_response.response, errors=_errors)
return _result
def auto_pay(self,
body):
"""POST Request
Args:
body: An object containing the fields to
POST for the request. See the corresponding object definition
for field details.
Returns:
ApiResponse: An object with the response value as well as other
useful information such as status codes and headers. Success
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
# Prepare query URL
_url_path = constant.RECURRING_PAY
_query_builder = self.config.get_base_uri()
_query_url = _query_builder+_url_path
# Parameters validation
self.amount_validate('amount',body['amount'])
requiredFileds = ['reference','currency','settleCurrency','autoDebitNo']
self.validate_parameter(requiredFileds,body)
# Prepare and execute request
_request = self.config.http_client.post(_query_url, headers=None, parameters=body)
_response = self.execute_request(_request)
if type(_response.response) is not dict:
_errors = _response.reason
else:
_errors = None
_result = ApiResponse(_response, body=_response.response, errors=_errors)
return _result
def consult(self,
body):
"""POST Request
Args:
body: An object containing the fields to
POST for the request. See the corresponding object definition
for field details.
Returns:
ApiResponse: An object with the response value as well as other
useful information such as status codes and headers. Success
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
# Prepare query URL
_url_path = constant.RECURRING_CONSULT
_query_builder = self.config.get_base_uri()
_query_url = _query_builder+_url_path
# Parameters validation
requiredFileds = ['osType','osVersion','autoRedirectUrl','autoReference','vendor','terminal']
self.validate_parameter(requiredFileds,body)
# Prepare and execute request
_request = self.config.http_client.post(_query_url, headers=None, parameters=body)
_response = self.execute_request(_request)
if type(_response.response) is not dict:
_errors = _response.reason
else:
_errors = None
_result = ApiResponse(_response, body=_response.response, errors=_errors)
return _result
def revoke(self,
body):
"""POST Request
Args:
body: An object containing the fields to
POST for the request. See the corresponding object definition
for field details.
Returns:
ApiResponse: An object with the response value as well as other
useful information such as status codes and headers. Success
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
# Prepare query URL
_url_path = constant.RECURRING_REVOKE
_query_builder = self.config.get_base_uri()
_query_url = _query_builder+_url_path
# Parameters validation
requiredFileds = ['autoDebitNo']
self.validate_parameter(requiredFileds,body)
# Prepare and execute request
_request = self.config.http_client.post(_query_url, headers=None, parameters=body)
_response = self.execute_request(_request)
if type(_response.response) is not dict:
_errors = _response.reason
else:
_errors = None
_result = ApiResponse(_response, body=_response.response, errors=_errors)
return _result
def update_recurring(self,
body):
"""POST Request to UpdateRecurring payment
Process a UpdateRecurring payment .
Args:
body: An object containing the fields to
POST for the request. See the corresponding object definition
for field details.
Returns:
ApiResponse: An object with the response value as well as other
useful information such as status codes and headers. Success
Raises:
APIException: When an error occurs while fetching the data from
the remote API. This exception includes the HTTP Response
code, an error message, and the HTTP body that was received in
the request.
"""
# Prepare query URL
_url_path = constant.UPDATE_RECURRING
_query_builder = self.config.get_base_uri()
_query_url = _query_builder+_url_path
# Parameters validation
requiredFileds = ['paymentCount','status']
self.validate_parameter(requiredFileds,body)
if requiredFileds['paymentCount'] <= 0:
raise InvalidParamsError('paymentCount should be greater than 0')
# Prepare and execute request
_request = self.config.http_client.post(_query_url, headers=None, parameters=body)
_response = self.execute_request(_request)
if type(_response.response) is not dict:
_errors = _response.reason
else:
_errors = None
_result = ApiResponse(_response, body=_response.response, errors=_errors)
return _result
| 37.635071 | 101 | 0.631533 | 867 | 7,941 | 5.592849 | 0.149942 | 0.024747 | 0.043308 | 0.019592 | 0.85131 | 0.843267 | 0.843267 | 0.843267 | 0.843267 | 0.843267 | 0 | 0.000548 | 0.310918 | 7,941 | 210 | 102 | 37.814286 | 0.885599 | 0.40209 | 0 | 0.697674 | 0 | 0 | 0.050132 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.05814 | 0 | 0.197674 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2bacafdc7fc6ca24e62d693bf8eb415b8eb37342 | 288 | py | Python | alternat/__init__.py | keplerlab/alternat | b57029dc7192cc09ed5e39e3f6f7b59ba72b8e86 | [
"MIT"
] | 15 | 2020-12-04T11:25:17.000Z | 2022-03-23T21:24:16.000Z | alternat/__init__.py | keplerlab/alternat | b57029dc7192cc09ed5e39e3f6f7b59ba72b8e86 | [
"MIT"
] | 1 | 2020-12-10T16:50:16.000Z | 2020-12-10T16:58:20.000Z | alternat/__init__.py | keplerlab/alternat | b57029dc7192cc09ed5e39e3f6f7b59ba72b8e86 | [
"MIT"
] | null | null | null | from alternat.generation.config import Config
from alternat.generation.rules.caption_handler import CaptionDataHandler
from alternat.generation.rules.label_handler import LabelDataHandler
from alternat.generation.rules.ocr_handler import OCRDataHandler
from .version import __version__
| 36 | 72 | 0.881944 | 34 | 288 | 7.264706 | 0.411765 | 0.194332 | 0.356275 | 0.327935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076389 | 288 | 7 | 73 | 41.142857 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2bb385299a96dac52e36baa2e4167e6f39910e8e | 199 | py | Python | api/tests/unit_tests/test_helpers.py | onecrayon/api.ashes.live | 72709fb4e53220aa9b48749a51f5b834ebb2ca42 | [
"0BSD"
] | 11 | 2020-09-13T16:49:21.000Z | 2021-07-29T06:17:58.000Z | api/tests/unit_tests/test_helpers.py | onecrayon/api.ashes.live | 72709fb4e53220aa9b48749a51f5b834ebb2ca42 | [
"0BSD"
] | 49 | 2020-09-11T05:23:02.000Z | 2022-03-02T18:31:00.000Z | api/tests/unit_tests/test_helpers.py | onecrayon/api.ashes.live | 72709fb4e53220aa9b48749a51f5b834ebb2ca42 | [
"0BSD"
] | 1 | 2022-03-27T22:11:29.000Z | 2022-03-27T22:11:29.000Z | from api.utils.helpers import stubify
def test_stubify_empty():
"""Stubify logic must return None if it doesn't have a name"""
assert stubify("") is None
assert stubify(" ") is None
| 24.875 | 66 | 0.683417 | 30 | 199 | 4.466667 | 0.733333 | 0.19403 | 0.223881 | 0.283582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21608 | 199 | 7 | 67 | 28.428571 | 0.858974 | 0.281407 | 0 | 0.5 | 0 | 0 | 0.029197 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
920d93eccda56ed9760e07567ba14ccfb6e6ee59 | 186 | py | Python | tests/integration_tests/boolean_tests/test_edge_type.py | skrat/martinez | 86db48324cb50ecb52be8ab2e4278a6d5cdd562b | [
"MIT"
] | 7 | 2020-05-07T08:13:44.000Z | 2021-12-17T07:33:51.000Z | tests/integration_tests/boolean_tests/test_edge_type.py | skrat/martinez | 86db48324cb50ecb52be8ab2e4278a6d5cdd562b | [
"MIT"
] | 17 | 2019-11-29T23:17:26.000Z | 2020-12-20T15:47:17.000Z | tests/integration_tests/boolean_tests/test_edge_type.py | skrat/martinez | 86db48324cb50ecb52be8ab2e4278a6d5cdd562b | [
"MIT"
] | 1 | 2020-12-17T22:44:21.000Z | 2020-12-17T22:44:21.000Z | from tests.bind_tests.hints import BoundEdgeType
from tests.port_tests.hints import PortedEdgeType
def test_basic():
assert BoundEdgeType.__members__ == PortedEdgeType.__members__
| 26.571429 | 66 | 0.833333 | 22 | 186 | 6.545455 | 0.590909 | 0.125 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107527 | 186 | 6 | 67 | 31 | 0.86747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a603ef3a7ce5828a79ba8153f09a4f1cd318de5b | 32 | py | Python | project/confidence/__init__.py | StealthTech/django-conf | bc73ef5e868a4088d962714cb63acd43393f6413 | [
"Apache-2.0"
] | 1 | 2018-04-10T19:42:52.000Z | 2018-04-10T19:42:52.000Z | project/confidence/__init__.py | StealthTech/django-conf | bc73ef5e868a4088d962714cb63acd43393f6413 | [
"Apache-2.0"
] | 7 | 2019-12-04T21:46:34.000Z | 2022-02-10T07:59:32.000Z | project/confidence/__init__.py | StealthTech/django-confidence | bc73ef5e868a4088d962714cb63acd43393f6413 | [
"Apache-2.0"
] | null | null | null | from .core import Configuration
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a676e8f057c02eb4fb92dea8917f8d58a94c1d68 | 91,885 | py | Python | tests/assessment_authoring/test_sessions.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 2 | 2018-02-23T12:16:11.000Z | 2020-10-08T17:54:24.000Z | tests/assessment_authoring/test_sessions.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 87 | 2017-04-21T18:57:15.000Z | 2021-12-13T19:43:57.000Z | tests/assessment_authoring/test_sessions.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 1 | 2018-03-01T16:44:25.000Z | 2018-03-01T16:44:25.000Z | """Unit tests of assessment.authoring sessions."""
import pytest
from random import shuffle
from ..utilities.general import is_never_authz, is_no_authz, uses_cataloging, uses_filesystem_only
from dlkit.abstract_osid.assessment.objects import Bank as ABCBank
from dlkit.abstract_osid.assessment_authoring import objects as ABCObjects
from dlkit.abstract_osid.assessment_authoring import queries as ABCQueries
from dlkit.abstract_osid.osid import errors
from dlkit.abstract_osid.osid.objects import OsidForm
from dlkit.json_.id.objects import IdList
from dlkit.primordium.id.primitives import Id
from dlkit.primordium.type.primitives import Type
from dlkit.runtime import PROXY_SESSION, proxy_example
from dlkit.runtime.managers import Runtime
REQUEST = proxy_example.SimpleRequest()
CONDITION = PROXY_SESSION.get_proxy_condition()
CONDITION.set_http_request(REQUEST)
PROXY = PROXY_SESSION.get_proxy(CONDITION)
DEFAULT_TYPE = Type(**{'identifier': 'DEFAULT', 'namespace': 'DEFAULT', 'authority': 'DEFAULT'})
DEFAULT_GENUS_TYPE = Type(**{'identifier': 'DEFAULT', 'namespace': 'GenusType', 'authority': 'DLKIT.MIT.EDU'})
ALIAS_ID = Id(**{'identifier': 'ALIAS', 'namespace': 'ALIAS', 'authority': 'ALIAS'})
SIMPLE_SEQUENCE_RECORD_TYPE = Type(**{"authority": "ODL.MIT.EDU", "namespace": "osid-object", "identifier": "simple-child-sequencing"})
NEW_TYPE = Type(**{'identifier': 'NEW', 'namespace': 'MINE', 'authority': 'YOURS'})
NEW_TYPE_2 = Type(**{'identifier': 'NEW 2', 'namespace': 'MINE', 'authority': 'YOURS'})
AGENT_ID = Id(**{'identifier': 'jane_doe', 'namespace': 'osid.agent.Agent', 'authority': 'MIT-ODL'})
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_lookup_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3A000000000000000000000000%40DLKIT.MIT.EDU')
@pytest.fixture(scope="function")
def assessment_part_lookup_session_test_fixture(request):
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartLookupSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPartLookupSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
for num in [0, 1, 2, 3]:
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident,
[])
create_form.display_name = 'Test AssessmentPart ' + str(num)
create_form.description = 'Test AssessmentPart for AssessmentPartLookupSession tests'
if num > 1:
create_form.sequestered = True
obj = request.cls.catalog.create_assessment_part_for_assessment(create_form)
request.cls.assessment_part_list.append(obj)
request.cls.assessment_part_ids.append(obj.ident)
request.cls.assessment = request.cls.catalog.get_assessment(request.cls.assessment.ident)
else:
request.cls.catalog = request.cls.svc_mgr.get_assessment_part_lookup_session(proxy=PROXY)
request.cls.session = request.cls.catalog
def test_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.catalog.use_unsequestered_assessment_part_view()
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
request.cls.catalog.delete_assessment(request.cls.assessment.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(test_tear_down)
@pytest.mark.usefixtures("assessment_part_lookup_session_class_fixture", "assessment_part_lookup_session_test_fixture")
class TestAssessmentPartLookupSession(object):
"""Tests for AssessmentPartLookupSession"""
def test_get_bank_id(self):
"""Tests get_bank_id"""
# this should not be here...
pass
def test_get_bank(self):
"""Tests get_bank"""
# is this test really needed?
# From test_templates/resource.py::ResourceLookupSession::get_bin_template
if not is_never_authz(self.service_config):
assert isinstance(self.catalog.get_bank(), ABCBank)
def test_can_lookup_assessment_parts(self):
"""Tests can_lookup_assessment_parts"""
# From test_templates/resource.py ResourceLookupSession.can_lookup_resources_template
assert isinstance(self.catalog.can_lookup_assessment_parts(), bool)
def test_use_comparative_assessment_part_view(self):
"""Tests use_comparative_assessment_part_view"""
# From test_templates/resource.py ResourceLookupSession.use_comparative_resource_view_template
self.catalog.use_comparative_assessment_part_view()
def test_use_plenary_assessment_part_view(self):
"""Tests use_plenary_assessment_part_view"""
# From test_templates/resource.py ResourceLookupSession.use_plenary_resource_view_template
self.catalog.use_plenary_assessment_part_view()
def test_use_federated_bank_view(self):
"""Tests use_federated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_federated_bin_view_template
self.catalog.use_federated_bank_view()
def test_use_isolated_bank_view(self):
"""Tests use_isolated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_isolated_bin_view_template
self.catalog.use_isolated_bank_view()
def test_use_active_assessment_part_view(self):
"""Tests use_active_assessment_part_view"""
# From test_templates/repository.py::CompositionLookupSession::use_active_composition_view_template
# Ideally also verify the value is set...
self.catalog.use_active_assessment_part_view()
def test_use_any_status_assessment_part_view(self):
"""Tests use_any_status_assessment_part_view"""
# From test_templates/repository.py::CompositionLookupSession::use_any_status_composition_view_template
# Ideally also verify the value is set...
self.catalog.use_any_status_assessment_part_view()
def test_use_sequestered_assessment_part_view(self):
"""Tests use_sequestered_assessment_part_view"""
# From test_templates/repository.py::CompositionLookupSession::use_sequestered_composition_view
# Ideally also verify the value is set...
self.catalog.use_sequestered_assessment_part_view()
def test_use_unsequestered_assessment_part_view(self):
"""Tests use_unsequestered_assessment_part_view"""
# From test_templates/repository.py::CompositionLookupSession::use_unsequestered_composition_view
# Ideally also verify the value is set...
self.catalog.use_unsequestered_assessment_part_view()
def test_get_assessment_part(self):
"""Tests get_assessment_part"""
if not is_never_authz(self.service_config):
self.catalog.use_isolated_bank_view()
obj = self.catalog.get_assessment_part(self.assessment_part_list[0].ident)
assert obj.ident == self.assessment_part_list[0].ident
self.catalog.use_federated_bank_view()
obj = self.catalog.get_assessment_part(self.assessment_part_list[0].ident)
assert obj.ident == self.assessment_part_list[0].ident
else:
with pytest.raises(errors.NotFound):
self.catalog.get_assessment_part(self.fake_id)
def test_get_assessment_parts_by_ids(self):
"""Tests get_assessment_parts_by_ids"""
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPartList
objects = self.catalog.get_assessment_parts_by_ids(self.assessment_part_ids)
assert isinstance(objects, AssessmentPartList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_assessment_parts_by_ids(self.assessment_part_ids)
assert isinstance(objects, AssessmentPartList)
if not is_never_authz(self.service_config):
assert objects.available() > 0
else:
assert objects.available() == 0
def test_get_assessment_parts_by_genus_type(self):
"""Tests get_assessment_parts_by_genus_type"""
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPartList
objects = self.catalog.get_assessment_parts_by_genus_type(DEFAULT_GENUS_TYPE)
assert isinstance(objects, AssessmentPartList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_assessment_parts_by_genus_type(DEFAULT_GENUS_TYPE)
assert isinstance(objects, AssessmentPartList)
if not is_never_authz(self.service_config):
assert objects.available() > 0
else:
assert objects.available() == 0
def test_get_assessment_parts_by_parent_genus_type(self):
"""Tests get_assessment_parts_by_parent_genus_type"""
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPartList
if not is_never_authz(self.service_config):
objects = self.catalog.get_assessment_parts_by_parent_genus_type(DEFAULT_GENUS_TYPE)
assert isinstance(objects, AssessmentPartList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_assessment_parts_by_parent_genus_type(DEFAULT_GENUS_TYPE)
assert objects.available() == 0
assert isinstance(objects, AssessmentPartList)
else:
with pytest.raises(errors.Unimplemented):
# because the never_authz "tries harder" and runs the actual query...
# whereas above the method itself in JSON returns an empty list
self.catalog.get_assessment_parts_by_parent_genus_type(DEFAULT_GENUS_TYPE)
def test_get_assessment_parts_by_record_type(self):
"""Tests get_assessment_parts_by_record_type"""
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPartList
objects = self.catalog.get_assessment_parts_by_record_type(DEFAULT_TYPE)
assert isinstance(objects, AssessmentPartList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_assessment_parts_by_record_type(DEFAULT_TYPE)
assert objects.available() == 0
assert isinstance(objects, AssessmentPartList)
def test_get_assessment_parts_for_assessment(self):
"""Tests get_assessment_parts_for_assessment"""
# Override this because we do have AssessmentPartQuerySession implemented,
# so with NEVER_AUTHZ it returns an empty result set
results = self.session.get_assessment_parts_for_assessment(self.assessment.ident)
assert isinstance(results, ABCObjects.AssessmentPartList)
if not is_never_authz(self.service_config):
assert results.available() == 2
else:
assert results.available() == 0
def test_get_assessment_parts(self):
"""Tests get_assessment_parts"""
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPartList
objects = self.catalog.get_assessment_parts()
assert isinstance(objects, AssessmentPartList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_assessment_parts()
assert isinstance(objects, AssessmentPartList)
if not is_never_authz(self.service_config):
assert objects.available() > 0
else:
assert objects.available() == 0
def test_get_assessment_part_with_alias(self):
if not is_never_authz(self.service_config):
self.catalog.alias_assessment_part(self.assessment_part_ids[0], ALIAS_ID)
obj = self.catalog.get_assessment_part(ALIAS_ID)
assert obj.get_id() == self.assessment_part_ids[0]
def test_get_assessment_id(self):
"""tests get_assessment_id"""
if not is_never_authz(self.service_config):
assert str(self.assessment_part_list[0].get_assessment_id()) == str(self.assessment.ident)
def test_get_assessment(self):
"""tests get_assessment"""
def check_equal(val1, val2):
assert val1 == val2
def check_dict_equal(dict1, dict2):
for item in dict1.items():
key = item[0]
value = item[1]
if isinstance(value, dict):
check_dict_equal(value, dict2[key])
else:
check_equal(value, dict2[key])
if not is_never_authz(self.service_config):
check_dict_equal(self.assessment_part_list[0].get_assessment().object_map,
self.assessment.object_map)
class FakeQuery:
_cat_id_args_list = []
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_query_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
@pytest.fixture(scope="function")
def assessment_part_query_session_test_fixture(request):
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartQuerySession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPartQuerySession tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
colors = ['Orange', 'Blue', 'Green', 'orange']
for num in [0, 1, 2, 3]:
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident,
[])
create_form.display_name = 'Test AssessmentPart ' + str(num) + colors[num]
create_form.description = 'Test AssessmentPart for AssessmentPartQuerySession tests'
obj = request.cls.catalog.create_assessment_part_for_assessment(create_form)
request.cls.assessment_part_list.append(obj)
request.cls.assessment_part_ids.append(obj.ident)
request.cls.assessment = request.cls.catalog.get_assessment(request.cls.assessment.ident)
else:
request.cls.catalog = request.cls.svc_mgr.get_assessment_part_query_session(proxy=PROXY)
request.cls.session = request.cls.catalog
def test_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.catalog.use_unsequestered_assessment_part_view()
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
request.cls.catalog.delete_assessment(request.cls.assessment.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(test_tear_down)
@pytest.mark.usefixtures("assessment_part_query_session_class_fixture", "assessment_part_query_session_test_fixture")
class TestAssessmentPartQuerySession(object):
"""Tests for AssessmentPartQuerySession"""
def test_get_bank_id(self):
"""Tests get_bank_id"""
# From test_templates/resource.py ResourceLookupSession.get_bin_id_template
if not is_never_authz(self.service_config):
assert self.catalog.get_bank_id() == self.catalog.ident
def test_get_bank(self):
"""Tests get_bank"""
# is this test really needed?
# From test_templates/resource.py::ResourceLookupSession::get_bin_template
if not is_never_authz(self.service_config):
assert isinstance(self.catalog.get_bank(), ABCBank)
def test_can_search_assessment_parts(self):
"""Tests can_search_assessment_parts"""
# From test_templates/resource.py ResourceQuerySession::can_search_resources_template
assert isinstance(self.session.can_search_assessment_parts(), bool)
def test_use_federated_bank_view(self):
"""Tests use_federated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_federated_bin_view_template
self.catalog.use_federated_bank_view()
def test_use_isolated_bank_view(self):
"""Tests use_isolated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_isolated_bin_view_template
self.catalog.use_isolated_bank_view()
def test_use_sequestered_assessment_part_view(self):
"""Tests use_sequestered_assessment_part_view"""
# From test_templates/repository.py::CompositionLookupSession::use_sequestered_composition_view
# Ideally also verify the value is set...
self.catalog.use_sequestered_assessment_part_view()
def test_use_unsequestered_assessment_part_view(self):
"""Tests use_unsequestered_assessment_part_view"""
# From test_templates/repository.py::CompositionLookupSession::use_unsequestered_composition_view
# Ideally also verify the value is set...
self.catalog.use_unsequestered_assessment_part_view()
def test_get_assessment_part_query(self):
"""Tests get_assessment_part_query"""
# From test_templates/resource.py ResourceQuerySession::get_resource_query_template
query = self.session.get_assessment_part_query()
assert isinstance(query, ABCQueries.AssessmentPartQuery)
def test_get_assessment_parts_by_query(self):
"""Tests get_assessment_parts_by_query"""
# From test_templates/resource.py ResourceQuerySession::get_resources_by_query_template
# Need to add some tests with string types
if not is_never_authz(self.service_config):
query = self.session.get_assessment_part_query()
query.match_display_name('orange')
assert self.catalog.get_assessment_parts_by_query(query).available() == 2
query.clear_display_name_terms()
query.match_display_name('blue', match=False)
assert self.session.get_assessment_parts_by_query(query).available() == 3
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assessment_parts_by_query(FakeQuery())
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_admin_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3Afake%40DLKIT.MIT.EDU')
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartAdminSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPartAdminSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
else:
request.cls.catalog = request.cls.svc_mgr.get_assessment_part_admin_session(proxy=PROXY)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
for obj in request.cls.catalog.get_assessments():
request.cls.catalog.delete_assessment(obj.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_part_admin_session_test_fixture(request):
if not is_never_authz(request.cls.service_config):
request.cls.form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident,
[SIMPLE_SEQUENCE_RECORD_TYPE])
request.cls.form.display_name = 'new AssessmentPart'
request.cls.form.description = 'description of AssessmentPart'
request.cls.form.set_genus_type(NEW_TYPE)
request.cls.osid_object = request.cls.catalog.create_assessment_part_for_assessment(request.cls.form)
request.cls.session = request.cls.catalog
def test_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.osid_object = request.cls.catalog.get_assessment_part(request.cls.osid_object.ident)
if request.cls.osid_object.has_children():
for child_id in request.cls.osid_object.get_child_assessment_part_ids():
request.cls.catalog.delete_assessment_part(child_id)
request.cls.catalog.delete_assessment_part(request.cls.osid_object.ident)
request.addfinalizer(test_tear_down)
@pytest.mark.usefixtures("assessment_part_admin_session_class_fixture", "assessment_part_admin_session_test_fixture")
class TestAssessmentPartAdminSession(object):
"""Tests for AssessmentPartAdminSession"""
def test_get_bank_id(self):
"""Tests get_bank_id"""
# From test_templates/resource.py ResourceLookupSession.get_bin_id_template
if not is_never_authz(self.service_config):
assert self.catalog.get_bank_id() == self.catalog.ident
def test_get_bank(self):
"""Tests get_bank"""
# is this test really needed?
# From test_templates/resource.py::ResourceLookupSession::get_bin_template
if not is_never_authz(self.service_config):
assert isinstance(self.catalog.get_bank(), ABCBank)
def test_can_create_assessment_parts(self):
"""Tests can_create_assessment_parts"""
# From test_templates/resource.py::ResourceAdminSession::can_create_resources_template
assert isinstance(self.catalog.can_create_assessment_parts(), bool)
def test_can_create_assessment_part_with_record_types(self):
"""Tests can_create_assessment_part_with_record_types"""
# From test_templates/resource.py::ResourceAdminSession::can_create_resource_with_record_types_template
assert isinstance(self.catalog.can_create_assessment_part_with_record_types(DEFAULT_TYPE), bool)
def test_get_assessment_part_form_for_create_for_assessment(self):
"""Tests get_assessment_part_form_for_create_for_assessment"""
if not is_never_authz(self.service_config):
form = self.session.get_assessment_part_form_for_create_for_assessment(self.assessment.ident, [])
assert isinstance(form, ABCObjects.AssessmentPartForm)
assert not form.is_for_update()
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assessment_part_form_for_create_for_assessment(self.fake_id, [])
def test_create_assessment_part_for_assessment(self):
"""Tests create_assessment_part_for_assessment"""
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPart
if not is_never_authz(self.service_config):
assert isinstance(self.osid_object, AssessmentPart)
assert self.osid_object.display_name.text == 'new AssessmentPart'
assert self.osid_object.description.text == 'description of AssessmentPart'
assert self.osid_object.genus_type == NEW_TYPE
form = self.catalog.get_assessment_part_form_for_create_for_assessment_part(self.osid_object.ident, [])
form.display_name = 'new AssessmentPart child'
form.description = 'description of AssessmentPart child'
child_part = self.catalog.create_assessment_part_for_assessment_part(form)
parent_part = self.catalog.get_assessment_part(self.osid_object.ident)
assert parent_part.has_children()
assert parent_part.get_child_assessment_part_ids().available() == 1
assert str(parent_part.get_child_assessment_part_ids().next()) == str(child_part.ident)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.create_assessment_part_for_assessment_part('foo')
def test_get_assessment_part_form_for_create_for_assessment_part(self):
"""Tests get_assessment_part_form_for_create_for_assessment_part"""
if not is_never_authz(self.service_config):
form = self.session.get_assessment_part_form_for_create_for_assessment_part(self.osid_object.ident, [])
assert isinstance(form, ABCObjects.AssessmentPartForm)
assert not form.is_for_update()
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assessment_part_form_for_create_for_assessment_part(self.fake_id, [])
def test_create_assessment_part_for_assessment_part(self):
"""Tests create_assessment_part_for_assessment_part"""
# From test_templates/resource.py::ResourceAdminSession::create_resource_template
from dlkit.abstract_osid.assessment_authoring.objects import AssessmentPart
if not is_never_authz(self.service_config):
assert isinstance(self.osid_object, AssessmentPart)
assert self.osid_object.display_name.text == 'new AssessmentPart'
assert self.osid_object.description.text == 'description of AssessmentPart'
assert self.osid_object.genus_type == NEW_TYPE
with pytest.raises(errors.IllegalState):
self.catalog.create_assessment_part_for_assessment_part(self.form)
with pytest.raises(errors.InvalidArgument):
self.catalog.create_assessment_part_for_assessment_part('I Will Break You!')
update_form = self.catalog.get_assessment_part_form_for_update(self.osid_object.ident)
with pytest.raises(errors.InvalidArgument):
self.catalog.create_assessment_part_for_assessment_part(update_form)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.create_assessment_part_for_assessment_part('foo')
def test_can_update_assessment_parts(self):
"""Tests can_update_assessment_parts"""
# From test_templates/resource.py::ResourceAdminSession::can_update_resources_template
assert isinstance(self.catalog.can_update_assessment_parts(), bool)
def test_get_assessment_part_form_for_update(self):
"""Tests get_assessment_part_form_for_update"""
# From test_templates/resource.py::ResourceAdminSession::get_resource_form_for_update_template
if not is_never_authz(self.service_config):
form = self.catalog.get_assessment_part_form_for_update(self.osid_object.ident)
assert isinstance(form, OsidForm)
assert form.is_for_update()
with pytest.raises(errors.InvalidArgument):
self.catalog.get_assessment_part_form_for_update(['This is Doomed!'])
with pytest.raises(errors.InvalidArgument):
self.catalog.get_assessment_part_form_for_update(
Id(authority='Respect my Authoritay!',
namespace='assessment.authoring.{object_name}',
identifier='1'))
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_assessment_part_form_for_update(self.fake_id)
def test_update_assessment_part(self):
"""Tests update_assessment_part"""
if not is_never_authz(self.service_config):
form = self.catalog.get_assessment_part_form_for_update(self.osid_object.ident)
form.display_name = 'new name'
form.description = 'new description'
form.set_genus_type(NEW_TYPE_2)
updated_object = self.catalog.update_assessment_part(self.osid_object.ident, form)
assert isinstance(updated_object, ABCObjects.AssessmentPart)
assert updated_object.ident == self.osid_object.ident
assert updated_object.display_name.text == 'new name'
assert updated_object.description.text == 'new description'
assert updated_object.genus_type == NEW_TYPE_2
else:
with pytest.raises(errors.PermissionDenied):
self.session.update_assessment_part(self.fake_id, 'foo')
def test_can_delete_assessment_parts(self):
"""Tests can_delete_assessment_parts"""
# From test_templates/resource.py::ResourceAdminSession::can_delete_resources_template
assert isinstance(self.catalog.can_delete_assessment_parts(), bool)
def test_delete_assessment_part(self):
"""Tests delete_assessment_part"""
if not is_never_authz(self.service_config):
results = self.catalog.get_assessment_parts()
assert results.available() == 1
form = self.catalog.get_assessment_part_form_for_create_for_assessment(self.assessment.ident,
[])
form.display_name = 'new AssessmentPart'
form.description = 'description of AssessmentPart'
new_assessment_part = self.catalog.create_assessment_part_for_assessment(form)
results = self.catalog.get_assessment_parts()
assert results.available() == 2
self.session.delete_assessment_part(new_assessment_part.ident)
results = self.catalog.get_assessment_parts()
assert results.available() == 1
assert str(results.next().ident) != str(new_assessment_part.ident)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.delete_assessment_part(self.fake_id)
def test_can_manage_assessment_part_aliases(self):
"""Tests can_manage_assessment_part_aliases"""
# From test_templates/resource.py::ResourceAdminSession::can_manage_resource_aliases_template
assert isinstance(self.catalog.can_manage_assessment_part_aliases(), bool)
def test_alias_assessment_part(self):
"""Tests alias_assessment_part"""
# From test_templates/resource.py::ResourceAdminSession::alias_resource_template
if not is_never_authz(self.service_config):
alias_id = Id(self.catalog.ident.namespace + '%3Amy-alias%40ODL.MIT.EDU')
self.catalog.alias_assessment_part(self.osid_object.ident, alias_id)
aliased_object = self.catalog.get_assessment_part(alias_id)
assert aliased_object.ident == self.osid_object.ident
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.alias_assessment_part(self.fake_id, self.fake_id)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_bank_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3Afake%40DLKIT.MIT.EDU')
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartBankSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank for Assignment'
create_form.description = 'Test Bank for AssessmentPartBankSession tests assignment'
request.cls.assigned_catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPartBankSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
for num in [0, 1, 2]:
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test AssessmentPart ' + str(num)
create_form.description = 'Test AssessmentPart for AssessmentPartBankSession tests'
obj = request.cls.catalog.create_assessment_part_for_assessment(create_form)
request.cls.assessment_part_list.append(obj)
request.cls.assessment_part_ids.append(obj.ident)
request.cls.svc_mgr.assign_assessment_part_to_bank(
request.cls.assessment_part_ids[1], request.cls.assigned_catalog.ident)
request.cls.svc_mgr.assign_assessment_part_to_bank(
request.cls.assessment_part_ids[2], request.cls.assigned_catalog.ident)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
request.cls.svc_mgr.unassign_assessment_part_from_bank(
request.cls.assessment_part_ids[1], request.cls.assigned_catalog.ident)
request.cls.svc_mgr.unassign_assessment_part_from_bank(
request.cls.assessment_part_ids[2], request.cls.assigned_catalog.ident)
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
request.cls.catalog.delete_assessment(request.cls.assessment.ident)
request.cls.svc_mgr.delete_bank(request.cls.assigned_catalog.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_part_bank_session_test_fixture(request):
request.cls.session = request.cls.svc_mgr
@pytest.mark.usefixtures("assessment_part_bank_session_class_fixture", "assessment_part_bank_session_test_fixture")
class TestAssessmentPartBankSession(object):
"""Tests for AssessmentPartBankSession"""
def test_can_lookup_assessment_part_bank_mappings(self):
"""Tests can_lookup_assessment_part_bank_mappings"""
# From test_templates/resource.py::ResourceBinSession::can_lookup_resource_bin_mappings
result = self.session.can_lookup_assessment_part_bank_mappings()
assert isinstance(result, bool)
def test_use_comparative_assessment_part_bank_view(self):
"""Tests use_comparative_assessment_part_bank_view"""
# From test_templates/resource.py::BinLookupSession::use_comparative_bin_view_template
self.svc_mgr.use_comparative_assessment_part_bank_view()
def test_use_plenary_assessment_part_bank_view(self):
"""Tests use_plenary_assessment_part_bank_view"""
# From test_templates/resource.py::BinLookupSession::use_plenary_bin_view_template
self.svc_mgr.use_plenary_assessment_part_bank_view()
def test_get_assessment_part_ids_by_bank(self):
"""Tests get_assessment_part_ids_by_bank"""
# From test_templates/resource.py::ResourceBinSession::get_resource_ids_by_bin_template
if not is_never_authz(self.service_config):
objects = self.svc_mgr.get_assessment_part_ids_by_bank(self.assigned_catalog.ident)
assert objects.available() == 2
else:
with pytest.raises(errors.PermissionDenied):
self.svc_mgr.get_assessment_part_ids_by_bank(self.fake_id)
def test_get_assessment_parts_by_bank(self):
"""Tests get_assessment_parts_by_bank"""
# From test_templates/resource.py::ResourceBinSession::get_resources_by_bin_template
if not is_never_authz(self.service_config):
results = self.session.get_assessment_parts_by_bank(self.assigned_catalog.ident)
assert isinstance(results, ABCObjects.AssessmentPartList)
assert results.available() == 2
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assessment_parts_by_bank(self.fake_id)
def test_get_assessment_part_ids_by_banks(self):
"""Tests get_assessment_part_ids_by_banks"""
# From test_templates/resource.py::ResourceBinSession::get_resource_ids_by_bins_template
if not is_never_authz(self.service_config):
catalog_ids = [self.catalog.ident, self.assigned_catalog.ident]
object_ids = self.session.get_assessment_part_ids_by_banks(catalog_ids)
assert isinstance(object_ids, IdList)
# Currently our impl does not remove duplicate objectIds
assert object_ids.available() == 5
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assessment_part_ids_by_banks([self.fake_id])
def test_get_assessment_parts_by_banks(self):
"""Tests get_assessment_parts_by_banks"""
# From test_templates/resource.py::ResourceBinSession::get_resources_by_bins_template
if not is_never_authz(self.service_config):
catalog_ids = [self.catalog.ident, self.assigned_catalog.ident]
results = self.session.get_assessment_parts_by_banks(catalog_ids)
assert isinstance(results, ABCObjects.AssessmentPartList)
# Currently our impl does not remove duplicate objects
assert results.available() == 5
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assessment_parts_by_banks([self.fake_id])
def test_get_bank_ids_by_assessment_part(self):
"""Tests get_bank_ids_by_assessment_part"""
# From test_templates/resource.py::ResourceBinSession::get_bin_ids_by_resource_template
if not is_never_authz(self.service_config):
cats = self.svc_mgr.get_bank_ids_by_assessment_part(self.assessment_part_ids[1])
assert cats.available() == 2
else:
with pytest.raises(errors.PermissionDenied):
self.svc_mgr.get_bank_ids_by_assessment_part(self.fake_id)
def test_get_banks_by_assessment_part(self):
"""Tests get_banks_by_assessment_part"""
# From test_templates/resource.py::ResourceBinSession::get_bins_by_resource_template
if not is_never_authz(self.service_config):
cats = self.svc_mgr.get_banks_by_assessment_part(self.assessment_part_ids[1])
assert cats.available() == 2
else:
with pytest.raises(errors.PermissionDenied):
self.svc_mgr.get_banks_by_assessment_part(self.fake_id)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_bank_assignment_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.assessment_part_list = list()
request.cls.assessment_part_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3Afake%40DLKIT.MIT.EDU')
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartBankAssignmentSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank for Assignment'
create_form.description = 'Test Bank for AssessmentPartBankAssignmentSession tests assignment'
request.cls.assigned_catalog = request.cls.svc_mgr.create_bank(create_form)
assessment_form = request.cls.catalog.get_assessment_form_for_create([])
assessment_form.display_name = 'Test Assessment'
assessment_form.description = 'Test Assessment for AssessmentPartBankAssignmentSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(assessment_form)
for num in [0, 1, 2]:
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test AssessmentPart ' + str(num)
create_form.description = 'Test AssessmentPart for AssessmentPartBankAssignmentSession tests'
obj = request.cls.catalog.create_assessment_part_for_assessment(create_form)
request.cls.assessment_part_list.append(obj)
request.cls.assessment_part_ids.append(obj.ident)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
request.cls.catalog.delete_assessment(request.cls.assessment.ident)
request.cls.svc_mgr.delete_bank(request.cls.assigned_catalog.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_part_bank_assignment_session_test_fixture(request):
request.cls.session = request.cls.svc_mgr
@pytest.mark.usefixtures("assessment_part_bank_assignment_session_class_fixture", "assessment_part_bank_assignment_session_test_fixture")
class TestAssessmentPartBankAssignmentSession(object):
"""Tests for AssessmentPartBankAssignmentSession"""
def test_can_assign_assessment_parts(self):
"""Tests can_assign_assessment_parts"""
# From test_templates/resource.py::ResourceBinAssignmentSession::can_assign_resources_template
result = self.session.can_assign_assessment_parts()
assert isinstance(result, bool)
def test_can_assign_assessment_parts_to_bank(self):
"""Tests can_assign_assessment_parts_to_bank"""
# From test_templates/resource.py::ResourceBinAssignmentSession::can_assign_resources_to_bin_template
result = self.session.can_assign_assessment_parts_to_bank(self.assigned_catalog.ident)
assert isinstance(result, bool)
def test_get_assignable_bank_ids(self):
"""Tests get_assignable_bank_ids"""
# From test_templates/resource.py::ResourceBinAssignmentSession::get_assignable_bin_ids_template
# Note that our implementation just returns all catalogIds, which does not follow
# the OSID spec (should return only the catalogIds below the given one in the hierarchy.
if not is_never_authz(self.service_config):
results = self.session.get_assignable_bank_ids(self.catalog.ident)
assert isinstance(results, IdList)
# Because we're not deleting all banks from all tests, we might
# have some crufty banks here...but there should be at least 2.
assert results.available() >= 2
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assignable_bank_ids(self.fake_id)
def test_get_assignable_bank_ids_for_assessment_part(self):
"""Tests get_assignable_bank_ids_for_assessment_part"""
# From test_templates/resource.py::ResourceBinAssignmentSession::get_assignable_bin_ids_for_resource_template
# Note that our implementation just returns all catalogIds, which does not follow
# the OSID spec (should return only the catalogIds below the given one in the hierarchy.
if not is_never_authz(self.service_config):
results = self.session.get_assignable_bank_ids_for_assessment_part(self.catalog.ident, self.assessment_part_ids[0])
assert isinstance(results, IdList)
# Because we're not deleting all banks from all tests, we might
# have some crufty banks here...but there should be at least 2.
assert results.available() >= 2
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_assignable_bank_ids_for_assessment_part(self.fake_id, self.fake_id)
def test_assign_assessment_part_to_bank(self):
"""Tests assign_assessment_part_to_bank"""
# From test_templates/resource.py::ResourceBinAssignmentSession::assign_resource_to_bin_template
if not is_never_authz(self.service_config):
results = self.assigned_catalog.get_assessment_parts()
assert results.available() == 0
self.session.assign_assessment_part_to_bank(self.assessment_part_ids[1], self.assigned_catalog.ident)
results = self.assigned_catalog.get_assessment_parts()
assert results.available() == 1
self.session.unassign_assessment_part_from_bank(
self.assessment_part_ids[1],
self.assigned_catalog.ident)
else:
with pytest.raises(errors.PermissionDenied):
self.session.assign_assessment_part_to_bank(self.fake_id, self.fake_id)
def test_unassign_assessment_part_from_bank(self):
"""Tests unassign_assessment_part_from_bank"""
# From test_templates/resource.py::ResourceBinAssignmentSession::unassign_resource_from_bin_template
if not is_never_authz(self.service_config):
results = self.assigned_catalog.get_assessment_parts()
assert results.available() == 0
self.session.assign_assessment_part_to_bank(
self.assessment_part_ids[1],
self.assigned_catalog.ident)
results = self.assigned_catalog.get_assessment_parts()
assert results.available() == 1
self.session.unassign_assessment_part_from_bank(
self.assessment_part_ids[1],
self.assigned_catalog.ident)
results = self.assigned_catalog.get_assessment_parts()
assert results.available() == 0
else:
with pytest.raises(errors.PermissionDenied):
self.session.unassign_assessment_part_from_bank(self.fake_id, self.fake_id)
def test_reassign_assessment_part_to_bank(self):
"""Tests reassign_assessment_part_to_bank"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.session.reassign_assessment_part_to_bank(True, True, True)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_item_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT_AUTHORING',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3Afake%40DLKIT.MIT.EDU')
@pytest.fixture(scope="function")
def assessment_part_item_session_test_fixture(request):
request.cls.item_list = list()
request.cls.item_ids = list()
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartItemSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.catalog.get_assessment_form_for_create([])
create_form.display_name = 'Test Assessment'
create_form.description = 'Test Assessment for AssessmentPartItemSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part'
create_form.description = 'Test Assessment Part for AssessmentPartItemSession tests'
request.cls.assessment_part = request.cls.catalog.create_assessment_part_for_assessment(create_form)
for num in [0, 1, 2, 3]:
create_form = request.cls.catalog.get_item_form_for_create([])
create_form.display_name = 'Test Item ' + str(num)
create_form.description = 'Test Item for AssessmentPartItemSession tests'
obj = request.cls.catalog.create_item(create_form)
request.cls.item_list.append(obj)
request.cls.item_ids.append(obj.ident)
request.cls.catalog.add_item(obj.ident, request.cls.assessment_part.ident)
else:
request.cls.catalog = request.cls.svc_mgr.get_assessment_part_item_session(proxy=PROXY)
request.cls.session = request.cls.catalog
def test_tear_down():
if not is_never_authz(request.cls.service_config):
for catalog in request.cls.svc_mgr.get_banks():
for obj in catalog.get_assessment_parts():
if obj.has_children():
for child_id in obj.get_child_assessment_part_ids():
catalog.delete_assessment_part(child_id)
catalog.delete_assessment_part(obj.ident)
for obj in catalog.get_assessments():
catalog.delete_assessment(obj.ident)
for obj in catalog.get_items():
catalog.delete_item(obj.ident)
request.cls.svc_mgr.delete_bank(catalog.ident)
request.addfinalizer(test_tear_down)
@pytest.mark.usefixtures("assessment_part_item_session_class_fixture", "assessment_part_item_session_test_fixture")
class TestAssessmentPartItemSession(object):
"""Tests for AssessmentPartItemSession"""
def test_get_bank_id(self):
"""Tests get_bank_id"""
# From test_templates/resource.py ResourceLookupSession.get_bin_id_template
if not is_never_authz(self.service_config):
assert self.catalog.get_bank_id() == self.catalog.ident
def test_get_bank(self):
"""Tests get_bank"""
# is this test really needed?
# From test_templates/resource.py::ResourceLookupSession::get_bin_template
if not is_never_authz(self.service_config):
assert isinstance(self.catalog.get_bank(), ABCBank)
def test_can_access_assessment_part_items(self):
"""Tests can_access_assessment_part_items"""
assert isinstance(self.session.can_access_assessment_part_items(), bool)
def test_use_comparative_asseessment_part_item_view(self):
"""Tests use_comparative_asseessment_part_item_view"""
# From test_templates/resource.py ResourceLookupSession.use_comparative_resource_view_template
self.catalog.use_comparative_asseessment_part_item_view()
def test_use_plenary_assessment_part_item_view(self):
"""Tests use_plenary_assessment_part_item_view"""
# From test_templates/resource.py ResourceLookupSession.use_plenary_resource_view_template
self.catalog.use_plenary_assessment_part_item_view()
def test_use_federated_bank_view(self):
"""Tests use_federated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_federated_bin_view_template
self.catalog.use_federated_bank_view()
def test_use_isolated_bank_view(self):
"""Tests use_isolated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_isolated_bin_view_template
self.catalog.use_isolated_bank_view()
def test_get_assessment_part_items(self):
"""Tests get_assessment_part_items"""
# From test_templates/repository.py::AssetCompositionSession::get_composition_assets_template
if not is_never_authz(self.service_config):
assert self.catalog.get_assessment_part_items(self.assessment_part.ident).available() == 4
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_assessment_part_items(self.fake_id)
def test_get_assessment_parts_by_item(self):
"""Tests get_assessment_parts_by_item"""
# From test_templates/repository.py::AssetCompositionSession::get_compositions_by_asset_template
if not is_never_authz(self.service_config):
assert self.catalog.get_assessment_parts_by_item(self.item_ids[0]).available() == 1
assert self.catalog.get_assessment_parts_by_item(self.item_ids[0]).next().ident == self.assessment_part.ident
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_assessment_parts_by_item(self.fake_id)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def assessment_part_item_design_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.item_list = list()
request.cls.item_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT_AUTHORING',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3Afake%40DLKIT.MIT.EDU')
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for AssessmentPartItemDesignSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.catalog.get_assessment_form_for_create([])
create_form.display_name = 'Test Assessment'
create_form.description = 'Test Assessment for AssessmentPartItemDesignSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part'
create_form.description = 'Test Assessment Part for AssessmentPartItemDesignSession tests'
request.cls.assessment_part = request.cls.catalog.create_assessment_part_for_assessment(create_form)
for num in [0, 1, 2, 3]:
create_form = request.cls.catalog.get_item_form_for_create([])
create_form.display_name = 'Test Item ' + str(num)
create_form.description = 'Test Item for AssessmentPartItemDesignSession tests'
obj = request.cls.catalog.create_item(create_form)
request.cls.item_list.append(obj)
request.cls.item_ids.append(obj.ident)
request.cls.catalog.add_item(obj.ident, request.cls.assessment_part.ident)
request.cls.assessment = request.cls.catalog.get_assessment(request.cls.assessment.ident)
else:
request.cls.catalog = request.cls.svc_mgr.get_assessment_part_item_design_session(proxy=PROXY)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for catalog in request.cls.svc_mgr.get_banks():
for obj in catalog.get_assessment_parts():
catalog.delete_assessment_part(obj.ident)
for obj in catalog.get_assessments():
catalog.delete_assessment(obj.ident)
for obj in catalog.get_items():
catalog.delete_item(obj.ident)
request.cls.svc_mgr.delete_bank(catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def assessment_part_item_design_session_test_fixture(request):
request.cls.session = request.cls.catalog
@pytest.mark.usefixtures("assessment_part_item_design_session_class_fixture", "assessment_part_item_design_session_test_fixture")
class TestAssessmentPartItemDesignSession(object):
"""Tests for AssessmentPartItemDesignSession"""
def test_get_bank_id(self):
"""Tests get_bank_id"""
# From test_templates/resource.py ResourceLookupSession.get_bin_id_template
if not is_never_authz(self.service_config):
assert self.catalog.get_bank_id() == self.catalog.ident
def test_get_bank(self):
"""Tests get_bank"""
# is this test really needed?
# From test_templates/resource.py::ResourceLookupSession::get_bin_template
if not is_never_authz(self.service_config):
assert isinstance(self.catalog.get_bank(), ABCBank)
def test_can_design_assessment_parts(self):
"""Tests can_design_assessment_parts"""
assert isinstance(self.session.can_design_assessment_parts(), bool)
def test_add_item(self):
"""Tests add_item"""
if not is_never_authz(self.service_config):
assert self.catalog.get_assessment_part_items(self.assessment_part.ident).available() == 4
create_form = self.catalog.get_item_form_for_create([])
create_form.display_name = 'Test Item 5'
create_form.description = 'Test Item for AssessmentPartItemDesignSession tests'
obj = self.catalog.create_item(create_form)
self.session.add_item(obj.ident, self.assessment_part.ident)
assert self.catalog.get_assessment_part_items(self.assessment_part.ident).available() == 5
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.add_item(self.fake_id, self.fake_id)
def test_move_item_ahead(self):
"""Tests move_item_ahead"""
if not is_never_authz(self.service_config):
original_item_order = list(self.catalog.get_assessment_part_items(self.assessment_part.ident))
original_ids = [item.ident for item in original_item_order]
self.session.move_item_ahead(original_ids[-1],
self.assessment_part.ident,
original_ids[0])
expected_order = [original_ids[-1]] + original_ids[0:-1]
new_order = [item.ident for item in self.catalog.get_assessment_part_items(self.assessment_part.ident)]
assert new_order == expected_order
else:
with pytest.raises(errors.PermissionDenied):
self.session.move_item_ahead(self.fake_id, self.fake_id, self.fake_id)
def test_move_item_behind(self):
"""Tests move_item_behind"""
if not is_never_authz(self.service_config):
original_item_order = list(self.catalog.get_assessment_part_items(self.assessment_part.ident))
original_ids = [item.ident for item in original_item_order]
self.session.move_item_behind(original_ids[0],
self.assessment_part.ident,
original_ids[-1])
expected_order = original_ids[1::] + [original_ids[0]]
new_order = [item.ident for item in self.catalog.get_assessment_part_items(self.assessment_part.ident)]
assert new_order == expected_order
else:
with pytest.raises(errors.PermissionDenied):
self.session.move_item_behind(self.fake_id, self.fake_id, self.fake_id)
def test_order_items(self):
"""Tests order_items"""
if not is_never_authz(self.service_config):
original_item_order = list(self.catalog.get_assessment_part_items(self.assessment_part.ident))
original_ids = [item.ident for item in original_item_order]
shuffle(original_ids)
self.session.order_items(original_ids,
self.assessment_part.ident)
new_order = [item.ident for item in self.catalog.get_assessment_part_items(self.assessment_part.ident)]
assert new_order == original_ids
else:
with pytest.raises(errors.PermissionDenied):
self.session.order_items(self.fake_id, self.fake_id)
def test_remove_item(self):
"""Tests remove_item"""
if not is_never_authz(self.service_config):
original_item_order = list(self.catalog.get_assessment_part_items(self.assessment_part.ident))
original_ids = [item.ident for item in original_item_order]
self.session.remove_item(original_ids[0],
self.assessment_part.ident)
new_order = [item.ident for item in self.catalog.get_assessment_part_items(self.assessment_part.ident)]
assert new_order == original_ids[1::]
else:
with pytest.raises(errors.PermissionDenied):
self.session.remove_item(self.fake_id, self.fake_id)
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def sequence_rule_lookup_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.sequence_rule_list = list()
request.cls.sequence_rule_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3Afake%40DLKIT.MIT.EDU')
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for SequenceRuleLookupSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.catalog.get_assessment_form_for_create([SIMPLE_SEQUENCE_RECORD_TYPE])
create_form.display_name = 'Test Assessment'
create_form.description = 'Test Assessment for SequenceRuleLookupSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 1'
create_form.description = 'Test Assessment Part for SequenceRuleLookupSession tests'
request.cls.assessment_part_1 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 2'
create_form.description = 'Test Assessment Part for SequenceRuleLookupSession tests'
assessment_part_2 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
for num in [0, 1]:
create_form = request.cls.catalog.get_sequence_rule_form_for_create(request.cls.assessment_part_1.ident,
assessment_part_2.ident,
[])
create_form.display_name = 'Test Sequence Rule ' + str(num)
create_form.description = 'Test Sequence Rule for SequenceRuleLookupSession tests'
obj = request.cls.catalog.create_sequence_rule(create_form)
request.cls.sequence_rule_list.append(obj)
request.cls.sequence_rule_ids.append(obj.ident)
else:
request.cls.catalog = request.cls.svc_mgr.get_sequence_rule_lookup_session(proxy=PROXY)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for catalog in request.cls.svc_mgr.get_banks():
for obj in catalog.get_sequence_rules():
catalog.delete_sequence_rule(obj.ident)
for obj in catalog.get_assessment_parts():
catalog.delete_assessment_part(obj.ident)
for obj in catalog.get_assessments():
catalog.delete_assessment(obj.ident)
request.cls.svc_mgr.delete_bank(catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def sequence_rule_lookup_session_test_fixture(request):
request.cls.session = request.cls.catalog
request.cls.assessment_part = request.cls.assessment_part_1
@pytest.mark.usefixtures("sequence_rule_lookup_session_class_fixture", "sequence_rule_lookup_session_test_fixture")
class TestSequenceRuleLookupSession(object):
"""Tests for SequenceRuleLookupSession"""
def test_get_bank_id(self):
"""Tests get_bank_id"""
# From test_templates/resource.py ResourceLookupSession.get_bin_id_template
if not is_never_authz(self.service_config):
assert self.catalog.get_bank_id() == self.catalog.ident
def test_get_bank(self):
"""Tests get_bank"""
# is this test really needed?
# From test_templates/resource.py::ResourceLookupSession::get_bin_template
if not is_never_authz(self.service_config):
assert isinstance(self.catalog.get_bank(), ABCBank)
def test_can_lookup_sequence_rules(self):
"""Tests can_lookup_sequence_rules"""
# From test_templates/resource.py ResourceLookupSession.can_lookup_resources_template
assert isinstance(self.catalog.can_lookup_sequence_rules(), bool)
def test_use_comparative_sequence_rule_view(self):
"""Tests use_comparative_sequence_rule_view"""
# From test_templates/resource.py ResourceLookupSession.use_comparative_resource_view_template
self.catalog.use_comparative_sequence_rule_view()
def test_use_plenary_sequence_rule_view(self):
"""Tests use_plenary_sequence_rule_view"""
# From test_templates/resource.py ResourceLookupSession.use_plenary_resource_view_template
self.catalog.use_plenary_sequence_rule_view()
def test_use_federated_bank_view(self):
"""Tests use_federated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_federated_bin_view_template
self.catalog.use_federated_bank_view()
def test_use_isolated_bank_view(self):
"""Tests use_isolated_bank_view"""
# From test_templates/resource.py ResourceLookupSession.use_isolated_bin_view_template
self.catalog.use_isolated_bank_view()
def test_use_active_sequence_rule_view(self):
"""Tests use_active_sequence_rule_view"""
# From test_templates/repository.py::CompositionLookupSession::use_active_composition_view_template
# Ideally also verify the value is set...
self.catalog.use_active_sequence_rule_view()
def test_use_any_status_sequence_rule_view(self):
"""Tests use_any_status_sequence_rule_view"""
# From test_templates/repository.py::CompositionLookupSession::use_any_status_composition_view_template
# Ideally also verify the value is set...
self.catalog.use_any_status_sequence_rule_view()
def test_get_sequence_rule(self):
"""Tests get_sequence_rule"""
# Override this because we haven't implemented SequenceRuleQuerySession, so will
# throw PermissionDenied with NEVER_AUTHZ
if not is_never_authz(self.service_config):
self.catalog.use_isolated_bank_view()
obj = self.catalog.get_sequence_rule(self.sequence_rule_list[0].ident)
assert obj.ident == self.sequence_rule_list[0].ident
self.catalog.use_federated_bank_view()
obj = self.catalog.get_sequence_rule(self.sequence_rule_list[0].ident)
assert obj.ident == self.sequence_rule_list[0].ident
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rule(self.fake_id)
def test_get_sequence_rules_by_ids(self):
"""Tests get_sequence_rules_by_ids"""
# Override this because we haven't implemented SequenceRuleQuerySession, so will
# throw PermissionDenied with NEVER_AUTHZ
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRuleList
if not is_never_authz(self.service_config):
objects = self.catalog.get_sequence_rules_by_ids(self.sequence_rule_ids)
assert isinstance(objects, SequenceRuleList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_sequence_rules_by_ids(self.sequence_rule_ids)
assert isinstance(objects, SequenceRuleList)
assert objects.available() > 0
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rules_by_ids(self.sequence_rule_ids)
def test_get_sequence_rules_by_genus_type(self):
"""Tests get_sequence_rules_by_genus_type"""
# Override this because we haven't implemented SequenceRuleQuerySession, so will
# throw PermissionDenied with NEVER_AUTHZ
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRuleList
if not is_never_authz(self.service_config):
objects = self.catalog.get_sequence_rules_by_genus_type(DEFAULT_GENUS_TYPE)
assert isinstance(objects, SequenceRuleList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_sequence_rules_by_genus_type(DEFAULT_GENUS_TYPE)
assert isinstance(objects, SequenceRuleList)
assert objects.available() > 0
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rules_by_genus_type(DEFAULT_GENUS_TYPE)
def test_get_sequence_rules_by_parent_genus_type(self):
"""Tests get_sequence_rules_by_parent_genus_type"""
# Override this because we haven't implemented SequenceRuleQuerySession, so will
# throw PermissionDenied with NEVER_AUTHZ
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRuleList
if not is_never_authz(self.service_config):
objects = self.catalog.get_sequence_rules_by_parent_genus_type(DEFAULT_GENUS_TYPE)
assert isinstance(objects, SequenceRuleList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_sequence_rules_by_parent_genus_type(DEFAULT_GENUS_TYPE)
assert objects.available() == 0
assert isinstance(objects, SequenceRuleList)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rules_by_parent_genus_type(DEFAULT_GENUS_TYPE)
def test_get_sequence_rules_by_record_type(self):
"""Tests get_sequence_rules_by_record_type"""
# Override this because we haven't implemented SequenceRuleQuerySession, so will
# throw PermissionDenied with NEVER_AUTHZ
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRuleList
if not is_never_authz(self.service_config):
objects = self.catalog.get_sequence_rules_by_record_type(DEFAULT_TYPE)
assert isinstance(objects, SequenceRuleList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_sequence_rules_by_record_type(DEFAULT_TYPE)
assert objects.available() == 0
assert isinstance(objects, SequenceRuleList)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rules_by_record_type(DEFAULT_TYPE)
def test_get_sequence_rules_for_assessment_part(self):
"""Tests get_sequence_rules_for_assessment_part"""
# From test_templates/learning.py::ActivityLookupSession::get_activities_for_objective_template
if self.svc_mgr.supports_sequence_rule_query():
results = self.session.get_sequence_rules_for_assessment_part(self.assessment_part.ident)
assert isinstance(results, ABCObjects.SequenceRuleList)
if not is_never_authz(self.service_config):
assert results.available() == 2
else:
assert results.available() == 0
else:
if not is_never_authz(self.service_config):
results = self.session.get_sequence_rules_for_assessment_part(self.assessment_part.ident)
assert results.available() == 2
assert isinstance(results, ABCObjects.SequenceRuleList)
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_sequence_rules_for_assessment_part(self.fake_id)
def test_get_sequence_rules_for_next_assessment_part(self):
"""Tests get_sequence_rules_for_next_assessment_part"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.session.get_sequence_rules_for_next_assessment_part(True)
def test_get_sequence_rules_for_assessment_parts(self):
"""Tests get_sequence_rules_for_assessment_parts"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.session.get_sequence_rules_for_assessment_parts(True, True)
def test_get_sequence_rules_for_assessment(self):
"""Tests get_sequence_rules_for_assessment"""
# From test_templates/learning.py::ActivityLookupSession::get_activities_for_objective_template
if self.svc_mgr.supports_sequence_rule_query():
results = self.session.get_sequence_rules_for_assessment(self.assessment.ident)
assert isinstance(results, ABCObjects.SequenceRuleList)
if not is_never_authz(self.service_config):
assert results.available() == 2
else:
assert results.available() == 0
else:
if not is_never_authz(self.service_config):
results = self.session.get_sequence_rules_for_assessment(self.assessment.ident)
assert results.available() == 2
assert isinstance(results, ABCObjects.SequenceRuleList)
else:
with pytest.raises(errors.PermissionDenied):
self.session.get_sequence_rules_for_assessment(self.fake_id)
def test_get_sequence_rules(self):
"""Tests get_sequence_rules"""
# Override this because we haven't implemented SequenceRuleQuerySession, so will
# throw PermissionDenied with NEVER_AUTHZ
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRuleList
if not is_never_authz(self.service_config):
objects = self.catalog.get_sequence_rules()
assert isinstance(objects, SequenceRuleList)
self.catalog.use_federated_bank_view()
objects = self.catalog.get_sequence_rules()
assert isinstance(objects, SequenceRuleList)
assert objects.available() > 0
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rules()
@pytest.fixture(scope="class",
params=['TEST_SERVICE', 'TEST_SERVICE_ALWAYS_AUTHZ', 'TEST_SERVICE_NEVER_AUTHZ', 'TEST_SERVICE_CATALOGING', 'TEST_SERVICE_FILESYSTEM', 'TEST_SERVICE_MEMCACHE'])
def sequence_rule_admin_session_class_fixture(request):
request.cls.service_config = request.param
request.cls.sequence_rule_list = list()
request.cls.sequence_rule_ids = list()
request.cls.svc_mgr = Runtime().get_service_manager(
'ASSESSMENT',
proxy=PROXY,
implementation=request.cls.service_config)
request.cls.fake_id = Id('resource.Resource%3Afake%40DLKIT.MIT.EDU')
if not is_never_authz(request.cls.service_config):
create_form = request.cls.svc_mgr.get_bank_form_for_create([])
create_form.display_name = 'Test Bank'
create_form.description = 'Test Bank for SequenceRuleAdminSession tests'
request.cls.catalog = request.cls.svc_mgr.create_bank(create_form)
create_form = request.cls.catalog.get_assessment_form_for_create([])
create_form.display_name = 'Test Assessment'
create_form.description = 'Test Assessment for SequenceRuleAdminSession tests'
request.cls.assessment = request.cls.catalog.create_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 1'
create_form.description = 'Test Assessment Part for SequenceRuleAdminSession tests'
request.cls.assessment_part_1 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
create_form = request.cls.catalog.get_assessment_part_form_for_create_for_assessment(request.cls.assessment.ident, [])
create_form.display_name = 'Test Assessment Part 2'
create_form.description = 'Test Assessment Part for SequenceRuleAdminSession tests'
request.cls.assessment_part_2 = request.cls.catalog.create_assessment_part_for_assessment(create_form)
for num in [0, 1]:
create_form = request.cls.catalog.get_sequence_rule_form_for_create(request.cls.assessment_part_1.ident,
request.cls.assessment_part_2.ident,
[])
create_form.display_name = 'Test Sequence Rule ' + str(num)
create_form.description = 'Test Sequence Rule for SequenceRuleAdminSession tests'
obj = request.cls.catalog.create_sequence_rule(create_form)
request.cls.sequence_rule_list.append(obj)
request.cls.sequence_rule_ids.append(obj.ident)
request.cls.form = request.cls.catalog.get_sequence_rule_form_for_create(request.cls.assessment_part_1.ident,
request.cls.assessment_part_2.ident,
[])
request.cls.form.display_name = 'new SequenceRule'
request.cls.form.description = 'description of SequenceRule'
request.cls.form.genus_type = NEW_TYPE
request.cls.osid_object = request.cls.catalog.create_sequence_rule(request.cls.form)
else:
request.cls.catalog = request.cls.svc_mgr.get_sequence_rule_admin_session(proxy=PROXY)
def class_tear_down():
if not is_never_authz(request.cls.service_config):
for obj in request.cls.catalog.get_sequence_rules():
request.cls.catalog.delete_sequence_rule(obj.ident)
for obj in request.cls.catalog.get_assessment_parts():
request.cls.catalog.delete_assessment_part(obj.ident)
for obj in request.cls.catalog.get_assessments():
request.cls.catalog.delete_assessment(obj.ident)
request.cls.svc_mgr.delete_bank(request.cls.catalog.ident)
request.addfinalizer(class_tear_down)
@pytest.fixture(scope="function")
def sequence_rule_admin_session_test_fixture(request):
request.cls.session = request.cls.catalog
@pytest.mark.usefixtures("sequence_rule_admin_session_class_fixture", "sequence_rule_admin_session_test_fixture")
class TestSequenceRuleAdminSession(object):
"""Tests for SequenceRuleAdminSession"""
def test_get_bank_id(self):
"""Tests get_bank_id"""
# From test_templates/resource.py ResourceLookupSession.get_bin_id_template
if not is_never_authz(self.service_config):
assert self.catalog.get_bank_id() == self.catalog.ident
def test_get_bank(self):
"""Tests get_bank"""
# is this test really needed?
# From test_templates/resource.py::ResourceLookupSession::get_bin_template
if not is_never_authz(self.service_config):
assert isinstance(self.catalog.get_bank(), ABCBank)
def test_can_create_sequence_rule(self):
"""Tests can_create_sequence_rule"""
# From test_templates/resource.py::ResourceAdminSession::can_create_resources_template
assert isinstance(self.catalog.can_create_sequence_rule(), bool)
def test_can_create_sequence_rule_with_record_types(self):
"""Tests can_create_sequence_rule_with_record_types"""
# From test_templates/resource.py::ResourceAdminSession::can_create_resource_with_record_types_template
assert isinstance(self.catalog.can_create_sequence_rule_with_record_types(DEFAULT_TYPE), bool)
def test_get_sequence_rule_form_for_create(self):
"""Tests get_sequence_rule_form_for_create"""
if not is_never_authz(self.service_config):
form = self.catalog.get_sequence_rule_form_for_create(self.assessment_part_1.ident,
self.assessment_part_2.ident,
[])
assert isinstance(form, OsidForm)
assert not form.is_for_update()
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rule_form_for_create(self.fake_id, self.fake_id, [])
def test_create_sequence_rule(self):
"""Tests create_sequence_rule"""
# From test_templates/resource.py::ResourceAdminSession::create_resource_template
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRule
if not is_never_authz(self.service_config):
assert isinstance(self.osid_object, SequenceRule)
assert self.osid_object.display_name.text == 'new SequenceRule'
assert self.osid_object.description.text == 'description of SequenceRule'
assert self.osid_object.genus_type == NEW_TYPE
with pytest.raises(errors.IllegalState):
self.catalog.create_sequence_rule(self.form)
with pytest.raises(errors.InvalidArgument):
self.catalog.create_sequence_rule('I Will Break You!')
update_form = self.catalog.get_sequence_rule_form_for_update(self.osid_object.ident)
with pytest.raises(errors.InvalidArgument):
self.catalog.create_sequence_rule(update_form)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.create_sequence_rule('foo')
def test_can_update_sequence_rules(self):
"""Tests can_update_sequence_rules"""
# From test_templates/resource.py::ResourceAdminSession::can_update_resources_template
assert isinstance(self.catalog.can_update_sequence_rules(), bool)
def test_get_sequence_rule_form_for_update(self):
"""Tests get_sequence_rule_form_for_update"""
# From test_templates/resource.py::ResourceAdminSession::get_resource_form_for_update_template
if not is_never_authz(self.service_config):
form = self.catalog.get_sequence_rule_form_for_update(self.osid_object.ident)
assert isinstance(form, OsidForm)
assert form.is_for_update()
with pytest.raises(errors.InvalidArgument):
self.catalog.get_sequence_rule_form_for_update(['This is Doomed!'])
with pytest.raises(errors.InvalidArgument):
self.catalog.get_sequence_rule_form_for_update(
Id(authority='Respect my Authoritay!',
namespace='assessment.authoring.{object_name}',
identifier='1'))
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.get_sequence_rule_form_for_update(self.fake_id)
def test_update_sequence_rule(self):
"""Tests update_sequence_rule"""
# From test_templates/resource.py::ResourceAdminSession::update_resource_template
if not is_never_authz(self.service_config):
from dlkit.abstract_osid.assessment_authoring.objects import SequenceRule
form = self.catalog.get_sequence_rule_form_for_update(self.osid_object.ident)
form.display_name = 'new name'
form.description = 'new description'
form.set_genus_type(NEW_TYPE_2)
updated_object = self.catalog.update_sequence_rule(form)
assert isinstance(updated_object, SequenceRule)
assert updated_object.ident == self.osid_object.ident
assert updated_object.display_name.text == 'new name'
assert updated_object.description.text == 'new description'
assert updated_object.genus_type == NEW_TYPE_2
with pytest.raises(errors.IllegalState):
self.catalog.update_sequence_rule(form)
with pytest.raises(errors.InvalidArgument):
self.catalog.update_sequence_rule('I Will Break You!')
with pytest.raises(errors.InvalidArgument):
self.catalog.update_sequence_rule(self.form)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.update_sequence_rule('foo')
def test_can_delete_sequence_rules(self):
"""Tests can_delete_sequence_rules"""
# From test_templates/resource.py::ResourceAdminSession::can_delete_resources_template
assert isinstance(self.catalog.can_delete_sequence_rules(), bool)
def test_delete_sequence_rule(self):
"""Tests delete_sequence_rule"""
if not is_never_authz(self.service_config):
create_form = self.catalog.get_sequence_rule_form_for_create(self.assessment_part_1.ident,
self.assessment_part_2.ident,
[])
create_form.display_name = 'new SequenceRule'
create_form.description = 'description of SequenceRule'
create_form.genus_type = NEW_TYPE
osid_object = self.catalog.create_sequence_rule(create_form)
self.catalog.delete_sequence_rule(osid_object.ident)
with pytest.raises(errors.NotFound):
self.catalog.get_sequence_rule(osid_object.ident)
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.delete_sequence_rule(self.fake_id)
def test_can_manage_sequence_rule_aliases(self):
"""Tests can_manage_sequence_rule_aliases"""
# From test_templates/resource.py::ResourceAdminSession::can_manage_resource_aliases_template
assert isinstance(self.catalog.can_manage_sequence_rule_aliases(), bool)
def test_alias_sequence_rule(self):
"""Tests alias_sequence_rule"""
# From test_templates/resource.py::ResourceAdminSession::alias_resource_template
if not is_never_authz(self.service_config):
alias_id = Id(self.catalog.ident.namespace + '%3Amy-alias%40ODL.MIT.EDU')
self.catalog.alias_sequence_rule(self.osid_object.ident, alias_id)
aliased_object = self.catalog.get_sequence_rule(alias_id)
assert aliased_object.ident == self.osid_object.ident
else:
with pytest.raises(errors.PermissionDenied):
self.catalog.alias_sequence_rule(self.fake_id, self.fake_id)
def test_can_sequence_sequence_rules(self):
"""Tests can_sequence_sequence_rules"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
else:
with pytest.raises(errors.Unimplemented):
self.session.can_sequence_sequence_rules()
def test_move_sequence_rule_ahead(self):
"""Tests move_sequence_rule_ahead"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.session.move_sequence_rule_ahead(True, True, True)
def test_move_sequence_rule_behind(self):
"""Tests move_sequence_rule_behind"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.session.move_sequence_rule_behind(True, True, True)
def test_order_sequence_rules(self):
"""Tests order_sequence_rules"""
if is_never_authz(self.service_config):
pass # no object to call the method on?
elif uses_cataloging(self.service_config):
pass # cannot call the _get_record() methods on catalogs
else:
with pytest.raises(errors.Unimplemented):
self.session.order_sequence_rules(True, True)
| 53.828354 | 176 | 0.709594 | 11,028 | 91,885 | 5.552865 | 0.031375 | 0.077045 | 0.030815 | 0.016461 | 0.933913 | 0.889642 | 0.841158 | 0.795744 | 0.759622 | 0.733837 | 0 | 0.002775 | 0.207694 | 91,885 | 1,706 | 177 | 53.859906 | 0.838381 | 0.147881 | 0 | 0.676375 | 0 | 0 | 0.077653 | 0.042102 | 0 | 0 | 0 | 0 | 0.125405 | 1 | 0.116505 | false | 0.011327 | 0.021845 | 0 | 0.147249 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a67d6a133ff62307b35325f3be1f0e33d354823f | 107,500 | py | Python | examples/pythag.py | ziman/idris-py | 934fddd877afa1c082445c8ee598cd760910709c | [
"BSD-3-Clause"
] | 140 | 2015-04-27T09:16:33.000Z | 2022-02-10T08:39:36.000Z | examples/pythag.py | ziman/idris-py | 934fddd877afa1c082445c8ee598cd760910709c | [
"BSD-3-Clause"
] | 11 | 2016-01-01T20:02:32.000Z | 2019-04-03T21:19:36.000Z | examples/pythag.py | ziman/idris-py | 934fddd877afa1c082445c8ee598cd760910709c | [
"BSD-3-Clause"
] | 14 | 2015-05-01T10:59:03.000Z | 2020-11-08T00:55:53.000Z | #!/usr/bin/env python
import sys
import importlib
import math
Unit = object()
World = object()
class IdrisError(Exception):
pass
def _idris_error(msg):
raise IdrisError(msg)
def _idris_pymodule(name):
return importlib.import_module(name)
def _idris_call(f, args):
return f(*list(args))
def _idris_foreach(it, st, f):
for x in it:
# Apply st, x, world
st = APPLY0(APPLY0(APPLY0(f, st), x), World)
return st
def _idris_try(f, fail, succ):
try:
result = APPLY0(f, World) # apply to world
return APPLY0(succ, result)
except Exception as e:
return APPLY0(fail, e)
def _idris_raise(e):
raise e
def _idris_marshal_PIO(action):
return lambda: APPLY0(action, World) # delayed apply-to-world
def _idris_get_global(name):
return globals()[name]
class _ConsIter(object):
def __init__(self, node):
self.node = node
def next(self):
if self.node.isNil:
raise StopIteration
else:
result = self.node.head
self.node = self.node.tail
return result
class ConsList(object):
def __init__(self, isNil=True, head=None, tail=None):
self.isNil = isNil
self.head = head
self.tail = tail
def __nonzero__(self):
return not self.isNil
def __len__(self):
cnt = 0
while not self.isNil:
self = self.tail
cnt += 1
return cnt
def cons(self, x):
return ConsList(isNil=False, head=x, tail=self)
def __iter__(self):
return _ConsIter(self)
# Prelude.Bool.&&
def _idris_Prelude_46_Bool_46__38__38_(e0, e1):
while True:
if not e0: # Prelude.Bool.False
return False
else: # Prelude.Bool.True
return EVAL0(e1)
return _idris_error("unreachable due to case in tail position")
# Prelude.List.++
def _idris_Prelude_46_List_46__43__43_(e0, e1, e2):
while True:
if e1: # Prelude.List.::
in0, in1 = e1.head, e1.tail
return _idris_Prelude_46_List_46__43__43_(None, in1, e2).cons(in0)
else: # Prelude.List.Nil
return e2
return _idris_error("unreachable due to case in tail position")
# Prelude.Basics..
def _idris_Prelude_46_Basics_46__46_(e0, e1, e2, e3, e4, _idris_x):
while True:
return APPLY0(e3, APPLY0(e4, _idris_x))
# Prelude.Classes.<
def _idris_Prelude_46_Classes_46__60_(e0, e1):
while True:
assert e1[0] == 0 # constructor of Prelude.Classes.Ord
in0, in1, in2 = e1[1:]
return in1
return _idris_error("unreachable due to case in tail position")
# Prelude.Algebra.<+>
def _idris_Prelude_46_Algebra_46__60__43__62_(e0, e1):
while True:
return e1
# Prelude.Classes.==
def _idris_Prelude_46_Classes_46__61__61_(e0, e1):
while True:
return e1
# Prelude.Classes.>
def _idris_Prelude_46_Classes_46__62_(e0, e1):
while True:
assert e1[0] == 0 # constructor of Prelude.Classes.Ord
in0, in1, in2 = e1[1:]
return in2
return _idris_error("unreachable due to case in tail position")
# Force
def _idris_Force(e0, e1, e2):
while True:
in0 = EVAL0(e2)
return in0
# PE_(a, b) instance of Prelude.Show.Show_a94d79ab
def _idris_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab():
while True:
return (0, (65704,), (65719,)) # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab12}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab26}1}
# PE_@@constructor of Prelude.Algebra.Monoid#Semigroup a_42111bf0
def _idris_PE_95__64__64_constructor_32_of_32_Prelude_46_Algebra_46_Monoid_35_Semigroup_32_a_95_42111bf0(
e0, meth0, meth1
):
while True:
return _idris_Prelude_46_List_46__43__43_(None, meth0, meth1)
# PE_@@constructor of Prelude.Applicative.Alternative#Applicative f_5102bba8
def _idris_PE_95__64__64_constructor_32_of_32_Prelude_46_Applicative_46_Alternative_35_Applicative_32_f_95_5102bba8(
meth0, meth1
):
while True:
return ConsList().cons(meth1)
# PE_@@constructor of Prelude.Monad.Monad#Applicative m_d05ad59e
def _idris_PE_95__64__64_constructor_32_of_32_Prelude_46_Monad_46_Monad_35_Applicative_32_m_95_d05ad59e(
meth0, meth1
):
while True:
return ConsList().cons(meth1)
# PE_Prelude.Show.(a, b) instance of Prelude.Show.Show, method show_cfed4029
def _idris_PE_95_Prelude_46_Show_46__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_44__32_method_32_show_95_cfed4029(
e0
):
while True:
assert True # Builtins.MkPair
in0, in1 = e0
return (u'(' + (_idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in0) + (u', ' + (_idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in1) + u')')))) # {U_prim__toStrInt1}, Prelude.Show.Open, {U_prim__toStrInt1}, Prelude.Show.Open
return _idris_error("unreachable due to case in tail position")
# PE_Prelude.Show.List a instance of Prelude.Show.Show, method show_54220539
def _idris_PE_95_Prelude_46_Show_46_List_32_a_32_instance_32_of_32_Prelude_46_Show_46_Show_44__32_method_32_show_95_54220539(
e0
):
while True:
return (u'[' + (_idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_List_32_a_58__33_show_58_0_58_show_39__58_0(
None,
None,
None,
_idris_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab(),
u'',
e0
) + u']'))
# PE_concatMap_af3155d1
def _idris_PE_95_concatMap_95_af3155d1(e0, e1, e2, e3):
while True:
return _idris_PE_95_foldr_95_c8d7af37(
None,
None,
(65728, e2), # {U_{PE_concatMap_af3155d10}1}
_idris_PE_95_neutral_95_42111bf0(None),
e3
)
# PE_empty_8ff8f7b3
def _idris_PE_95_empty_95_8ff8f7b3(e0):
while True:
return ConsList()
# PE_foldr_c8d7af37
def _idris_PE_95_foldr_95_c8d7af37(e0, e1, e2, e3, e4):
while True:
return _idris_Prelude_46_Foldable_46_Prelude_46_List_46__64_Prelude_46_Foldable_46_Foldable_36_List_58__33_foldr_58_0(
None, None, e2, e3, e4
)
# PE_neutral_42111bf0
def _idris_PE_95_neutral_95_42111bf0(e0):
while True:
return ConsList()
# PE_printLn'_1452bb16
def _idris_PE_95_printLn_39__95_1452bb16(e0, e1):
while True:
return _idris_Prelude_46_Interactive_46_putStr_39_(
None,
(_idris_PE_95_show_95_78b4bfbe(e1) + u'\u000a')
)
# PE_printLn'_cfef5baf
def _idris_PE_95_printLn_39__95_cfef5baf(e0, e1):
while True:
return _idris_Prelude_46_Interactive_46_putStr_39_(
None,
(_idris_PE_95_show_95_24967653(e1) + u'\u000a')
)
# PE_printLn_48f3a70d
def _idris_PE_95_printLn_95_48f3a70d(e0):
while True:
return _idris_PE_95_printLn_39__95_1452bb16(None, e0)
# PE_show_24967653
def _idris_PE_95_show_95_24967653(e0):
while True:
return (u'[' + (_idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_List_32_a_58__33_show_58_0_58_show_39__58_0(
None,
None,
None,
(0, (65729,), (65731,)), # constructor of Prelude.Show.Show, {U_{PE_show_249676530}1}, {U_{PE_show_249676532}1}
u'',
e0
) + u']'))
# PE_show_78b4bfbe
def _idris_PE_95_show_95_78b4bfbe(e0):
while True:
return _idris_PE_95_Prelude_46_Show_46_List_32_a_32_instance_32_of_32_Prelude_46_Show_46_Show_44__32_method_32_show_95_54220539(
e0
)
# call__IO
def _idris_call_95__95_IO(e0, e1, e2):
while True:
return APPLY0(e2, None)
# Prelude.Classes.compare
def _idris_Prelude_46_Classes_46_compare(e0, e1):
while True:
assert e1[0] == 0 # constructor of Prelude.Classes.Ord
in0, in1, in2 = e1[1:]
return in0
return _idris_error("unreachable due to case in tail position")
# Prelude.Foldable.foldr
def _idris_Prelude_46_Foldable_46_foldr(e0, e1, e2, e3):
while True:
return APPLY0(APPLY0(e3, e1), e2)
# Prelude.Basics.id
def _idris_Prelude_46_Basics_46_id(e0, e1):
while True:
return e1
# Prelude.Bool.ifThenElse
def _idris_Prelude_46_Bool_46_ifThenElse(e0, e1, e2, e3):
while True:
if not e1: # Prelude.Bool.False
return EVAL0(e3)
else: # Prelude.Bool.True
return EVAL0(e2)
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.intToBool
def _idris_Prelude_46_Classes_46_intToBool(e0):
while True:
if e0 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
# io_bind
def _idris_io_95_bind(e0, e1, e2, e3, e4, _idris_w):
while True:
return APPLY0(io_bind2(e0, e1, e2, e3, e4, _idris_w), APPLY0(e3, _idris_w))
# io_return
def _idris_io_95_return(e0, e1, e2, _idris_w):
while True:
return e2
# Prelude.Chars.isDigit
def _idris_Prelude_46_Chars_46_isDigit(e0):
while True:
aux1 = _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33__62__61__58_0(
e0, u'0'
)
if not aux1: # Prelude.Bool.False
return False
else: # Prelude.Bool.True
return _idris_Prelude_46_Chars_46__123_isDigit0_125_(e0)
return _idris_error("unreachable due to case in tail position")
# Main.main
def _idris_Main_46_main():
while True:
return (
65697, # {U_io_bind1}
None,
None,
None,
_idris_PE_95_printLn_95_48f3a70d(_idris_Main_46_pythag(100)),
(65640,) # {U_Main.{main2}1}
)
# Prelude.List.mergeBy
def _idris_Prelude_46_List_46_mergeBy(e0, e1, e2, e3):
while True:
if not e2: # Prelude.List.Nil
return e3
else:
if e3: # Prelude.List.::
in0, in1 = e3.head, e3.tail
assert e2 # Prelude.List.::
in2, in3 = e2.head, e2.tail
aux1 = _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Eq_36_Ordering_58__33__61__61__58_0(
APPLY0(APPLY0(e1, in2), in0),
(0,) # Prelude.Classes.LT
)
if not aux1: # Prelude.Bool.False
return _idris_Prelude_46_List_46_mergeBy(None, e1, in3.cons(in2), in1).cons(in0)
else: # Prelude.Bool.True
return _idris_Prelude_46_List_46_mergeBy(None, e1, in3, in1.cons(in0)).cons(in2)
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
else: # Prelude.List.Nil
return e2
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
# mkForeignPrim
def _idris_mkForeignPrim():
while True:
return None
# Prelude.Bool.not
def _idris_Prelude_46_Bool_46_not(e0):
while True:
if not e0: # Prelude.Bool.False
return True
else: # Prelude.Bool.True
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.precCon
def _idris_Prelude_46_Show_46_precCon(e0):
while True:
if e0[0] == 6: # Prelude.Show.App
return 6
elif e0[0] == 3: # Prelude.Show.Backtick
return 3
elif e0[0] == 2: # Prelude.Show.Dollar
return 2
elif e0[0] == 1: # Prelude.Show.Eq
return 1
elif e0[0] == 0: # Prelude.Show.Open
return 0
elif e0[0] == 5: # Prelude.Show.PrefixMinus
return 5
else: # Prelude.Show.User
in0 = e0[1]
return 4
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.primNumShow
def _idris_Prelude_46_Show_46_primNumShow(e0, e1, e2, e3):
while True:
in0 = APPLY0(e1, e3)
aux2 = _idris_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec_58__33__62__61__58_0(
e2, (5,) # Prelude.Show.PrefixMinus
)
if not aux2: # Prelude.Bool.False
aux3 = False
else: # Prelude.Bool.True
aux3 = _idris_Prelude_46_Show_46__123_primNumShow2_125_(in0, e0, e1, e2, e3)
aux1 = aux3
if not aux1: # Prelude.Bool.False
return in0
else: # Prelude.Bool.True
return (u'(' + (in0 + u')'))
return _idris_error("unreachable due to case in tail position")
# prim__addInt
def _idris_prim_95__95_addInt(op0, op1):
while True:
return (op0 + op1)
# prim__charToInt
def _idris_prim_95__95_charToInt(op0):
while True:
return ord(op0)
# prim__concat
def _idris_prim_95__95_concat(op0, op1):
while True:
return (op0 + op1)
# prim__eqBigInt
def _idris_prim_95__95_eqBigInt(op0, op1):
while True:
return (op0 == op1)
# prim__eqChar
def _idris_prim_95__95_eqChar(op0, op1):
while True:
return (op0 == op1)
# prim__eqInt
def _idris_prim_95__95_eqInt(op0, op1):
while True:
return (op0 == op1)
# prim__eqManagedPtr
def _idris_prim_95__95_eqManagedPtr(op0, op1):
while True:
return _idris_error("unimplemented external: prim__eqManagedPtr")
# prim__eqPtr
def _idris_prim_95__95_eqPtr(op0, op1):
while True:
return _idris_error("unimplemented external: prim__eqPtr")
# prim__eqString
def _idris_prim_95__95_eqString(op0, op1):
while True:
return (op0 == op1)
# prim__ltString
def _idris_prim_95__95_ltString(op0, op1):
while True:
return (op0 < op1)
# prim__mulInt
def _idris_prim_95__95_mulInt(op0, op1):
while True:
return (op0 * op1)
# prim__null
def _idris_prim_95__95_null():
while True:
return None
# prim__readFile
def _idris_prim_95__95_readFile(op0, op1):
while True:
return _idris_error("unimplemented external: prim__readFile")
# prim__registerPtr
def _idris_prim_95__95_registerPtr(op0, op1):
while True:
return _idris_error("unimplemented external: prim__registerPtr")
# prim__sextInt_BigInt
def _idris_prim_95__95_sextInt_95_BigInt(op0):
while True:
return op0
# prim__sltBigInt
def _idris_prim_95__95_sltBigInt(op0, op1):
while True:
return (op0 < op1)
# prim__sltChar
def _idris_prim_95__95_sltChar(op0, op1):
while True:
return (op0 < op1)
# prim__sltInt
def _idris_prim_95__95_sltInt(op0, op1):
while True:
return (op0 < op1)
# prim__stderr
def _idris_prim_95__95_stderr():
while True:
return _idris_error("unimplemented external: prim__stderr")
# prim__stdin
def _idris_prim_95__95_stdin():
while True:
return _idris_error("unimplemented external: prim__stdin")
# prim__stdout
def _idris_prim_95__95_stdout():
while True:
return _idris_error("unimplemented external: prim__stdout")
# prim__strCons
def _idris_prim_95__95_strCons(op0, op1):
while True:
return (op0 + op1)
# prim__strHead
def _idris_prim_95__95_strHead(op0):
while True:
return op0[0]
# prim__strTail
def _idris_prim_95__95_strTail(op0):
while True:
return op0[1:]
# prim__subInt
def _idris_prim_95__95_subInt(op0, op1):
while True:
return (op0 - op1)
# prim__toStrInt
def _idris_prim_95__95_toStrInt(op0):
while True:
return str(op0)
# prim__vm
def _idris_prim_95__95_vm():
while True:
return _idris_error("unimplemented external: prim__vm")
# prim__writeFile
def _idris_prim_95__95_writeFile(op0, op1, op2):
while True:
return _idris_error("unimplemented external: prim__writeFile")
# prim__writeString
def _idris_prim_95__95_writeString(op0, op1):
while True:
return sys.stdout.write(op1)
# prim_io_bind
def _idris_prim_95_io_95_bind(e0, e1, e2, e3):
while True:
return APPLY0(e3, e2)
# Prelude.Show.protectEsc
def _idris_Prelude_46_Show_46_protectEsc(e0, e1, e2):
while True:
aux2 = _idris_Prelude_46_Strings_46_strM(e2)
if aux2[0] == 1: # Prelude.Strings.StrCons
in0, in1 = aux2[1:]
aux3 = APPLY0(e0, in0)
else: # Prelude.Strings.StrNil
aux3 = False
aux1 = aux3
if not aux1: # Prelude.Bool.False
aux4 = u''
else: # Prelude.Bool.True
aux4 = u'\\&'
return (e1 + (aux4 + e2))
# Prelude.Applicative.pure
def _idris_Prelude_46_Applicative_46_pure(e0, e1, e2):
while True:
return APPLY0(e2, e1)
# Prelude.Interactive.putStr'
def _idris_Prelude_46_Interactive_46_putStr_39_(e0, e1):
while True:
return (65697, None, None, None, (65663, e1), (65664,)) # {U_io_bind1}, {U_Prelude.Interactive.{putStr'0}1}, {U_Prelude.Interactive.{putStr'1}1}
# Main.pythag
def _idris_Main_46_pythag(e0):
while True:
return _idris_Prelude_46_Monad_46_Prelude_46__64_Prelude_46_Monad_46_Monad_36_List_58__33__62__62__61__58_0(
None,
None,
_idris_Prelude_46_Prelude_46__64_Prelude_46_Enum_36_Int_58__33_enumFromTo_58_0(
1, e0
),
(65644,) # {U_Main.{pythag3}1}
)
# really_believe_me
def _idris_really_95_believe_95_me(e0, e1, e2):
while True:
return e2
# run__IO
def _idris_run_95__95_IO(e0, e1):
while True:
return APPLY0(e1, None)
# Prelude.Show.show
def _idris_Prelude_46_Show_46_show(e0, e1):
while True:
assert e1[0] == 0 # constructor of Prelude.Show.Show
in0, in1 = e1[1:]
return in0
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.showLitChar
def _idris_Prelude_46_Show_46_showLitChar(e0):
while True:
aux1 = _idris_Prelude_46_Show_46_showLitChar_58_getAt_58_10(
None,
ord(e0),
_idris_Prelude_46_Show_46_showLitChar_58_asciiTab_58_10(None)
)
if aux1 is not None: # Prelude.Maybe.Just
in10 = aux1
aux2 = (65648, None, None, None, (65699, u'\\'), (65686, in10)) # {U_Prelude.Basics..1}, {U_prim__strCons1}, {U_Prelude.Show.{showLitChar10}1}
else: # Prelude.Maybe.Nothing
aux4 = _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33_compare_58_0(
e0,
u'\u007f'
)
if aux4[0] == 2: # Prelude.Classes.GT
aux5 = True
else:
aux5 = False
aux3 = aux5
if not aux3: # Prelude.Bool.False
aux6 = (65699, e0) # {U_prim__strCons1}
else: # Prelude.Bool.True
aux6 = (
65648, # {U_Prelude.Basics..1}
None,
None,
None,
(65699, u'\\'), # {U_prim__strCons1}
(
65673, # {U_Prelude.Show.protectEsc1}
(65650,), # {U_Prelude.Chars.isDigit1}
_idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), ord(e0)) # {U_prim__toStrInt1}, Prelude.Show.Open
)
)
aux2 = aux6
return {
u'\u0007': (65685,), # {U_Prelude.Show.{showLitChar0}1}
u'\u0008': (65687,), # {U_Prelude.Show.{showLitChar1}1}
u'\u0009': (65688,), # {U_Prelude.Show.{showLitChar2}1}
u'\u000a': (65689,), # {U_Prelude.Show.{showLitChar3}1}
u'\u000b': (65690,), # {U_Prelude.Show.{showLitChar4}1}
u'\u000c': (65691,), # {U_Prelude.Show.{showLitChar5}1}
u'\u000d': (65692,), # {U_Prelude.Show.{showLitChar6}1}
u'\u000e': (65673, (65693,), u'\\SO'), # {U_Prelude.Show.protectEsc1}, {U_Prelude.Show.{showLitChar7}1}
u'\\': (65694,), # {U_Prelude.Show.{showLitChar8}1}
u'\u007f': (65695,) # {U_Prelude.Show.{showLitChar9}1}
}.get(e0, aux2)
# Prelude.Show.showLitString
def _idris_Prelude_46_Show_46_showLitString(e0):
while True:
if e0: # Prelude.List.::
in0, in1 = e0.head, e0.tail
if in0 == u'"':
return (
65648, # {U_Prelude.Basics..1}
None,
None,
None,
(65696,), # {U_Prelude.Show.{showLitString0}1}
_idris_Prelude_46_Show_46_showLitString(in1)
)
else:
return (
65648, # {U_Prelude.Basics..1}
None,
None,
None,
_idris_Prelude_46_Show_46_showLitChar(in0),
_idris_Prelude_46_Show_46_showLitString(in1)
)
return _idris_error("unreachable due to case in tail position")
else: # Prelude.List.Nil
return (65649, None) # {U_Prelude.Basics.id1}
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.showParens
def _idris_Prelude_46_Show_46_showParens(e0, e1):
while True:
if not e0: # Prelude.Bool.False
return e1
else: # Prelude.Bool.True
return (u'(' + (e1 + u')'))
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.showPrec
def _idris_Prelude_46_Show_46_showPrec(e0, e1):
while True:
assert e1[0] == 0 # constructor of Prelude.Show.Show
in0, in1 = e1[1:]
return in1
return _idris_error("unreachable due to case in tail position")
# Prelude.List.sortBy
def _idris_Prelude_46_List_46_sortBy(e0, e1, e2):
while True:
if e2: # Prelude.List.::
in0, in1 = e2.head, e2.tail
if not in1: # Prelude.List.Nil
return ConsList().cons(in0)
else:
aux1 = _idris_Prelude_46_List_46_sortBy_58_splitRec_58_2(
None,
None,
None,
e2,
e2,
(65649, None) # {U_Prelude.Basics.id1}
)
assert True # Builtins.MkPair
in2, in3 = aux1
return _idris_Prelude_46_List_46_mergeBy(
None,
e1,
_idris_Prelude_46_List_46_sortBy(None, e1, in2),
_idris_Prelude_46_List_46_sortBy(None, e1, in3)
)
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
elif not e2: # Prelude.List.Nil
return ConsList()
else:
aux2 = _idris_Prelude_46_List_46_sortBy_58_splitRec_58_2(
None,
None,
None,
e2,
e2,
(65649, None) # {U_Prelude.Basics.id1}
)
assert True # Builtins.MkPair
in4, in5 = aux2
return _idris_Prelude_46_List_46_mergeBy(
None,
e1,
_idris_Prelude_46_List_46_sortBy(None, e1, in4),
_idris_Prelude_46_List_46_sortBy(None, e1, in5)
)
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
# Prelude.Strings.strM
def _idris_Prelude_46_Strings_46_strM(e0):
while True:
aux3 = (e0 == u'')
if aux3 == 0:
aux4 = False
else:
aux4 = True
aux2 = aux4
if not aux2: # Prelude.Bool.False
aux5 = True
else: # Prelude.Bool.True
aux5 = False
aux1 = _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_Bool_58__33_decEq_58_0(
aux5, True
)
if aux1[0] == 1: # Prelude.Basics.No
return _idris_really_95_believe_95_me(None, None, (0,)) # Prelude.Strings.StrNil
else: # Prelude.Basics.Yes
return _idris_really_95_believe_95_me(None, None, (1, e0[0], e0[1:])) # Prelude.Strings.StrCons
return _idris_error("unreachable due to case in tail position")
# Prelude.Strings.unpack
def _idris_Prelude_46_Strings_46_unpack(e0):
while True:
aux1 = _idris_Prelude_46_Strings_46_strM(e0)
if aux1[0] == 1: # Prelude.Strings.StrCons
in0, in1 = aux1[1:]
return _idris_Prelude_46_Strings_46_unpack(in1).cons(in0)
else: # Prelude.Strings.StrNil
return ConsList()
return _idris_error("unreachable due to case in tail position")
# unsafePerformPrimIO
def _idris_unsafePerformPrimIO():
while True:
return None
# world
def _idris_world(e0):
while True:
return e0
# Prelude.Bool.||
def _idris_Prelude_46_Bool_46__124__124_(e0, e1):
while True:
if not e0: # Prelude.Bool.False
return EVAL0(e1)
else: # Prelude.Bool.True
return True
return _idris_error("unreachable due to case in tail position")
# {APPLY0}
def APPLY0(fn0, arg0):
while True:
if fn0[0] < 65690:
if fn0[0] < 65664:
if fn0[0] < 65651:
if fn0[0] < 65644:
if fn0[0] < 65641:
if fn0[0] == 65638: # {U_Main.{main0}1}
P_c0 = fn0[1]
return _idris_Main_46__123_main0_125_(P_c0, arg0)
elif fn0[0] == 65639: # {U_Main.{main1}1}
return _idris_Main_46__123_main1_125_(arg0)
else: # {U_Main.{main2}1}
return _idris_Main_46__123_main2_125_(arg0)
else:
if fn0[0] == 65641: # {U_Main.{pythag0}1}
P_c0, P_c1, P_c2 = fn0[1:]
return _idris_Main_46__123_pythag0_125_(P_c0, P_c1, P_c2, arg0)
elif fn0[0] == 65642: # {U_Main.{pythag1}1}
P_c0, P_c1 = fn0[1:]
return _idris_Main_46__123_pythag1_125_(P_c0, P_c1, arg0)
else: # {U_Main.{pythag2}1}
P_c0 = fn0[1]
return _idris_Main_46__123_pythag2_125_(P_c0, arg0)
else:
if fn0[0] < 65647:
if fn0[0] == 65644: # {U_Main.{pythag3}1}
return _idris_Main_46__123_pythag3_125_(arg0)
elif fn0[0] == 65645: # {U_PE_@@constructor of Prelude.Algebra.Monoid#Semigroup a_42111bf01}
P_c0, P_c1 = fn0[1:]
return _idris_PE_95__64__64_constructor_32_of_32_Prelude_46_Algebra_46_Monoid_35_Semigroup_32_a_95_42111bf0(
P_c0, P_c1, arg0
)
else: # {U_PE_@@constructor of Prelude.Applicative.Alternative#Applicative f_5102bba81}
P_c0 = fn0[1]
return _idris_PE_95__64__64_constructor_32_of_32_Prelude_46_Applicative_46_Alternative_35_Applicative_32_f_95_5102bba8(
P_c0, arg0
)
else:
if fn0[0] < 65649:
if fn0[0] == 65647: # {U_PE_@@constructor of Prelude.Monad.Monad#Applicative m_d05ad59e1}
P_c0 = fn0[1]
return _idris_PE_95__64__64_constructor_32_of_32_Prelude_46_Monad_46_Monad_35_Applicative_32_m_95_d05ad59e(
P_c0, arg0
)
else: # {U_Prelude.Basics..1}
P_c0, P_c1, P_c2, P_c3, P_c4 = fn0[1:]
return _idris_Prelude_46_Basics_46__46_(P_c0, P_c1, P_c2, P_c3, P_c4, arg0)
else:
if fn0[0] == 65649: # {U_Prelude.Basics.id1}
P_c0 = fn0[1]
return _idris_Prelude_46_Basics_46_id(P_c0, arg0)
else: # {U_Prelude.Chars.isDigit1}
return _idris_Prelude_46_Chars_46_isDigit(arg0)
else:
if fn0[0] < 65657:
if fn0[0] < 65654:
if fn0[0] == 65651: # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam0}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
P_c0, arg0
)
elif fn0[0] == 65652: # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam1}1}
return _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
arg0
)
else: # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam2}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
P_c0, arg0
)
else:
if fn0[0] == 65654: # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam3}1}
return _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
arg0
)
elif fn0[0] == 65655: # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam4}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
P_c0, arg0
)
else: # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam5}1}
return _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
arg0
)
else:
if fn0[0] < 65660:
if fn0[0] == 65657: # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam0}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
P_c0, arg0
)
elif fn0[0] == 65658: # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam1}1}
return _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
arg0
)
else: # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam2}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
P_c0, arg0
)
else:
if fn0[0] < 65662:
if fn0[0] == 65660: # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam3}1}
return _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
arg0
)
else: # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam4}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
P_c0, arg0
)
else:
if fn0[0] == 65662: # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam5}1}
return _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
arg0
)
else: # {U_Prelude.Interactive.{putStr'0}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Interactive_46__123_putStr_39_0_125_(P_c0, arg0)
else:
if fn0[0] < 65677:
if fn0[0] < 65670:
if fn0[0] < 65667:
if fn0[0] == 65664: # {U_Prelude.Interactive.{putStr'1}1}
return _idris_Prelude_46_Interactive_46__123_putStr_39_1_125_(arg0)
elif fn0[0] == 65665: # {U_Prelude.List.List instance of Prelude.Foldable.Foldable1}
P_c0, P_c1, P_c2, P_c3 = fn0[1:]
return _idris_Prelude_46_List_46__64_Prelude_46_Foldable_46_Foldable_36_List(
P_c0, P_c1, P_c2, P_c3, arg0
)
else: # {U_Prelude.Nat.Nat instance of Prelude.Classes.Eq1}
P_c0 = fn0[1]
return _idris_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Eq_36_Nat(P_c0, arg0)
else:
if fn0[0] == 65667: # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam0}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
P_c0, arg0
)
elif fn0[0] == 65668: # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam1}1}
return _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
arg0
)
else: # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam2}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
P_c0, arg0
)
else:
if fn0[0] < 65673:
if fn0[0] == 65670: # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam3}1}
return _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
arg0
)
elif fn0[0] == 65671: # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam4}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
P_c0, arg0
)
else: # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam5}1}
return _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
arg0
)
else:
if fn0[0] < 65675:
if fn0[0] == 65673: # {U_Prelude.Show.protectEsc1}
P_c0, P_c1 = fn0[1:]
return _idris_Prelude_46_Show_46_protectEsc(P_c0, P_c1, arg0)
else: # {U_Prelude.Show.{Int instance of Prelude.Show.Show_lam0}1}
return _idris_Prelude_46_Show_46__123_Int_32_instance_32_of_32_Prelude_46_Show_46_Show_95_lam0_125_(
arg0
)
else:
if fn0[0] == 65675: # {U_Prelude.Show.{Int instance of Prelude.Show.Show_lam1}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Show_46__123_Int_32_instance_32_of_32_Prelude_46_Show_46_Show_95_lam1_125_(
P_c0, arg0
)
else: # {U_Prelude.Show.{Int instance of Prelude.Show.Show_lam2}1}
return _idris_Prelude_46_Show_46__123_Int_32_instance_32_of_32_Prelude_46_Show_46_Show_95_lam2_125_(
arg0
)
else:
if fn0[0] < 65683:
if fn0[0] < 65680:
if fn0[0] == 65677: # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam0}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
P_c0, arg0
)
elif fn0[0] == 65678: # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam1}1}
return _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
arg0
)
else: # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam2}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
P_c0, arg0
)
else:
if fn0[0] == 65680: # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam3}1}
return _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
arg0
)
elif fn0[0] == 65681: # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam4}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
P_c0, arg0
)
else: # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam5}1}
return _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
arg0
)
else:
if fn0[0] < 65686:
if fn0[0] == 65683: # {U_Prelude.Show.{case block in showLitChar at ./Prelude/Show.idr:126:27_lam0}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Show_46__123_case_32_block_32_in_32_showLitChar_32_at_32__46__47_Prelude_47_Show_46_idr_58_126_58_27_95_lam0_125_(
P_c0, arg0
)
elif fn0[0] == 65684: # {U_Prelude.Show.{primNumShow0}1}
return _idris_Prelude_46_Show_46__123_primNumShow0_125_(arg0)
else: # {U_Prelude.Show.{showLitChar0}1}
return _idris_Prelude_46_Show_46__123_showLitChar0_125_(arg0)
else:
if fn0[0] < 65688:
if fn0[0] == 65686: # {U_Prelude.Show.{showLitChar10}1}
P_c0 = fn0[1]
return _idris_Prelude_46_Show_46__123_showLitChar10_125_(P_c0, arg0)
else: # {U_Prelude.Show.{showLitChar1}1}
return _idris_Prelude_46_Show_46__123_showLitChar1_125_(arg0)
else:
if fn0[0] == 65688: # {U_Prelude.Show.{showLitChar2}1}
return _idris_Prelude_46_Show_46__123_showLitChar2_125_(arg0)
else: # {U_Prelude.Show.{showLitChar3}1}
return _idris_Prelude_46_Show_46__123_showLitChar3_125_(arg0)
else:
if fn0[0] < 65716:
if fn0[0] < 65703:
if fn0[0] < 65696:
if fn0[0] < 65693:
if fn0[0] == 65690: # {U_Prelude.Show.{showLitChar4}1}
return _idris_Prelude_46_Show_46__123_showLitChar4_125_(arg0)
elif fn0[0] == 65691: # {U_Prelude.Show.{showLitChar5}1}
return _idris_Prelude_46_Show_46__123_showLitChar5_125_(arg0)
else: # {U_Prelude.Show.{showLitChar6}1}
return _idris_Prelude_46_Show_46__123_showLitChar6_125_(arg0)
else:
if fn0[0] == 65693: # {U_Prelude.Show.{showLitChar7}1}
return _idris_Prelude_46_Show_46__123_showLitChar7_125_(arg0)
elif fn0[0] == 65694: # {U_Prelude.Show.{showLitChar8}1}
return _idris_Prelude_46_Show_46__123_showLitChar8_125_(arg0)
else: # {U_Prelude.Show.{showLitChar9}1}
return _idris_Prelude_46_Show_46__123_showLitChar9_125_(arg0)
else:
if fn0[0] < 65699:
if fn0[0] == 65696: # {U_Prelude.Show.{showLitString0}1}
return _idris_Prelude_46_Show_46__123_showLitString0_125_(arg0)
elif fn0[0] == 65697: # {U_io_bind1}
P_c0, P_c1, P_c2, P_c3, P_c4 = fn0[1:]
return _idris_io_95_bind(P_c0, P_c1, P_c2, P_c3, P_c4, arg0)
else: # {U_io_return1}
P_c0, P_c1, P_c2 = fn0[1:]
return _idris_io_95_return(P_c0, P_c1, P_c2, arg0)
else:
if fn0[0] < 65701:
if fn0[0] == 65699: # {U_prim__strCons1}
P_c0 = fn0[1]
return _idris_prim_95__95_strCons(P_c0, arg0)
else: # {U_prim__toStrInt1}
return _idris_prim_95__95_toStrInt(arg0)
else:
if fn0[0] == 65701: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab0}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab0_125_(
arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab10}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab10_125_(
arg0
)
else:
if fn0[0] < 65709:
if fn0[0] < 65706:
if fn0[0] == 65703: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab11}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab11_125_(
arg0
)
elif fn0[0] == 65704: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab12}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab12_125_(
arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab13}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab13_125_(
arg0
)
else:
if fn0[0] == 65706: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab14}1}
P_c0 = fn0[1]
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab14_125_(
P_c0, arg0
)
elif fn0[0] == 65707: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab15}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab15_125_(
arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab16}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab16_125_(
arg0
)
else:
if fn0[0] < 65712:
if fn0[0] == 65709: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab17}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab17_125_(
arg0
)
elif fn0[0] == 65710: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab18}1}
P_c0 = fn0[1]
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab18_125_(
P_c0, arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab19}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab19_125_(
arg0
)
else:
if fn0[0] < 65714:
if fn0[0] == 65712: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab1}1}
P_c0 = fn0[1]
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab1_125_(
P_c0, arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab20}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab20_125_(
arg0
)
else:
if fn0[0] == 65714: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab21}1}
P_c0 = fn0[1]
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab21_125_(
P_c0, arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab22}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab22_125_(
arg0
)
else:
if fn0[0] < 65729:
if fn0[0] < 65722:
if fn0[0] < 65719:
if fn0[0] == 65716: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab23}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab23_125_(
arg0
)
elif fn0[0] == 65717: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab24}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab24_125_(
arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab25}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab25_125_(
arg0
)
else:
if fn0[0] == 65719: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab26}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab26_125_(
arg0
)
elif fn0[0] == 65720: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab2}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab2_125_(
arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab3}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab3_125_(
arg0
)
else:
if fn0[0] < 65725:
if fn0[0] == 65722: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab4}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab4_125_(
arg0
)
elif fn0[0] == 65723: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab5}1}
P_c0 = fn0[1]
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab5_125_(
P_c0, arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab6}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab6_125_(
arg0
)
else:
if fn0[0] < 65727:
if fn0[0] == 65725: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab7}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab7_125_(
arg0
)
else: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab8}1}
P_c0 = fn0[1]
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab8_125_(
P_c0, arg0
)
else:
if fn0[0] == 65727: # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab9}1}
return _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab9_125_(
arg0
)
else: # {U_{PE_concatMap_af3155d10}1}
P_c0 = fn0[1]
return _idris__123_PE_95_concatMap_95_af3155d10_125_(P_c0, arg0)
else:
if fn0[0] < 65736:
if fn0[0] < 65732:
if fn0[0] == 65729: # {U_{PE_show_249676530}1}
return _idris__123_PE_95_show_95_249676530_125_(arg0)
elif fn0[0] == 65730: # {U_{PE_show_249676531}1}
return _idris__123_PE_95_show_95_249676531_125_(arg0)
else: # {U_{PE_show_249676532}1}
return _idris__123_PE_95_show_95_249676532_125_(arg0)
else:
if fn0[0] < 65734:
if fn0[0] == 65732: # {U_{Prelude.List.sortBy, splitRec_lam0}1}
P_c0 = fn0[1]
return _idris__123_Prelude_46_List_46_sortBy_44__32_splitRec_95_lam0_125_(P_c0, arg0)
else: # {U_{io_bind1}1}
P_c0, P_c1, P_c2, P_c3, P_c4, P_c5 = fn0[1:]
return io_bind1(P_c0, P_c1, P_c2, P_c3, P_c4, P_c5, arg0)
else:
if fn0[0] == 65734: # {U_PE_@@constructor of Prelude.Algebra.Monoid#Semigroup a_42111bf02}
P_c0 = fn0[1]
return (65645, P_c0, arg0) # {U_PE_@@constructor of Prelude.Algebra.Monoid#Semigroup a_42111bf01}
else: # {U_PE_@@constructor of Prelude.Applicative.Alternative#Applicative f_5102bba82}
return (65646, arg0) # {U_PE_@@constructor of Prelude.Applicative.Alternative#Applicative f_5102bba81}
else:
if fn0[0] < 65739:
if fn0[0] == 65736: # {U_PE_@@constructor of Prelude.Monad.Monad#Applicative m_d05ad59e2}
return (65647, arg0) # {U_PE_@@constructor of Prelude.Monad.Monad#Applicative m_d05ad59e1}
elif fn0[0] == 65737: # {U_Prelude.List.List instance of Prelude.Foldable.Foldable2}
P_c0, P_c1, P_c2 = fn0[1:]
return (65665, P_c0, P_c1, P_c2, arg0) # {U_Prelude.List.List instance of Prelude.Foldable.Foldable1}
else: # {U_Prelude.Nat.Nat instance of Prelude.Classes.Eq2}
return (65666, arg0) # {U_Prelude.Nat.Nat instance of Prelude.Classes.Eq1}
else:
if fn0[0] < 65741:
if fn0[0] == 65739: # {U_Prelude.List.List instance of Prelude.Foldable.Foldable3}
P_c0, P_c1 = fn0[1:]
return (65737, P_c0, P_c1, arg0) # {U_Prelude.List.List instance of Prelude.Foldable.Foldable2}
else: # {U_Prelude.List.List instance of Prelude.Foldable.Foldable4}
P_c0 = fn0[1]
return (65739, P_c0, arg0) # {U_Prelude.List.List instance of Prelude.Foldable.Foldable3}
else:
assert fn0[0] == 65741 # {U_Prelude.List.List instance of Prelude.Foldable.Foldable5}
return (65740, arg0) # {U_Prelude.List.List instance of Prelude.Foldable.Foldable4}
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam0}
def _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
in0, in1
):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33_compare_58_0(
in0, in1
)
# {EVAL0}
def EVAL0(arg0):
while True:
return arg0
# Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam0}
def _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
in0, in1
):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int_58__33_compare_58_0(
in0, in1
)
# Prelude.Show.{Int instance of Prelude.Show.Show_lam0}
def _idris_Prelude_46_Show_46__123_Int_32_instance_32_of_32_Prelude_46_Show_46_Show_95_lam0_125_(
in0
):
while True:
return APPLY0(
APPLY0(
_idris_Prelude_46_Show_46_showPrec(
None,
_idris_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_Int()
),
(0,) # Prelude.Show.Open
),
in0
)
# Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam0}
def _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
in0, in1
):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat_58__33_compare_58_0(
in0, in1
)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab0}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab0_125_(
in1
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in1) # {U_prim__toStrInt1}, Prelude.Show.Open
# {PE_concatMap_af3155d10}
def _idris__123_PE_95_concatMap_95_af3155d10_125_(e2, in0):
while True:
return APPLY0(
_idris_Prelude_46_Algebra_46__60__43__62_(None, (65734, None)), # {U_PE_@@constructor of Prelude.Algebra.Monoid#Semigroup a_42111bf02}
APPLY0(e2, in0)
)
# {PE_show_249676530}
def _idris__123_PE_95_show_95_249676530_125_(in0):
while True:
return _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_String_58__33_show_58_0(
in0
)
# Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam0}
def _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam0_125_(
in0, in1
):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec_58__33_compare_58_0(
in0, in1
)
# Prelude.Classes.{Prelude.Classes.Char instance of Prelude.Classes.Ord, method <=_lam0}
def _idris_Prelude_46_Classes_46__123_Prelude_46_Classes_46_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__60__61__95_lam0_125_(
e0, e1
):
while True:
aux1 = (e0 == e1)
if aux1 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.{Prelude.Classes.Char instance of Prelude.Classes.Ord, method >=_lam0}
def _idris_Prelude_46_Classes_46__123_Prelude_46_Classes_46_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__62__61__95_lam0_125_(
e0, e1
):
while True:
aux1 = (e0 == e1)
if aux1 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.{Prelude.Classes.Int instance of Prelude.Classes.Ord, method <=_lam0}
def _idris_Prelude_46_Classes_46__123_Prelude_46_Classes_46_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__60__61__95_lam0_125_(
e0, e1
):
while True:
aux1 = (e0 == e1)
if aux1 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
# {Prelude.List.sortBy, splitRec_lam0}
def _idris__123_Prelude_46_List_46_sortBy_44__32_splitRec_95_lam0_125_(in0, in6):
while True:
return in6.cons(in0)
# Prelude.Classes.{Prelude.Show.Prec instance of Prelude.Classes.Ord, method >=_lam0}
def _idris_Prelude_46_Classes_46__123_Prelude_46_Show_46_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__62__61__95_lam0_125_(
e0, e1
):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Eq_36_Prec_58__33__61__61__58_0(
e0, e1
)
# Prelude.Show.{case block in showLitChar at ./Prelude/Show.idr:126:27_lam0}
def _idris_Prelude_46_Show_46__123_case_32_block_32_in_32_showLitChar_32_at_32__46__47_Prelude_47_Show_46_idr_58_126_58_27_95_lam0_125_(
in0, in1
):
while True:
return (in0 + in1)
# {io_bind0}
def io_bind0(e0, e1, e2, e3, e4, _idris_w, in0):
while True:
return APPLY0(e4, in0)
# Prelude.Chars.{isDigit0}
def _idris_Prelude_46_Chars_46__123_isDigit0_125_(e0):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33__60__61__58_0(
e0, u'9'
)
# Main.{main0}
def _idris_Main_46__123_main0_125_(in1, in2):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_String_58__33_compare_58_0(
in1, in2
)
# Prelude.Show.{primNumShow0}
def _idris_Prelude_46_Show_46__123_primNumShow0_125_(in1):
while True:
aux1 = (in1 == u'-')
if aux1 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
# Prelude.Interactive.{putStr'0}
def _idris_Prelude_46_Interactive_46__123_putStr_39_0_125_(e1, in0):
while True:
return sys.stdout.write(e1)
# Main.{pythag0}
def _idris_Main_46__123_pythag0_125_(in2, in1, in0, in3):
while True:
return APPLY0(
_idris_Prelude_46_Applicative_46_pure(None, None, (65736,)), # {U_PE_@@constructor of Prelude.Monad.Monad#Applicative m_d05ad59e2}
(in2, (in1, in0))
)
# {runMain0}
def runMain0():
while True:
return EVAL0(APPLY0(_idris_Main_46_main(), None))
# Prelude.Show.{showLitChar0}
def _idris_Prelude_46_Show_46__123_showLitChar0_125_(in0):
while True:
return (u'\\a' + in0)
# Prelude.Show.{showLitString0}
def _idris_Prelude_46_Show_46__123_showLitString0_125_(in2):
while True:
return (u'\\"' + in2)
# Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam1}
def _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
in0
):
while True:
return (65651, in0) # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam0}1}
# Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam1}
def _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
in0
):
while True:
return (65657, in0) # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam0}1}
# Prelude.Show.{Int instance of Prelude.Show.Show_lam1}
def _idris_Prelude_46_Show_46__123_Int_32_instance_32_of_32_Prelude_46_Show_46_Show_95_lam1_125_(
in1, in2
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), in1, in2) # {U_prim__toStrInt1}
# Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam1}
def _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
in0
):
while True:
return (65667, in0) # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam0}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab1}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab1_125_(
in2, in3
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), in2, in3) # {U_prim__toStrInt1}
# {PE_show_249676531}
def _idris__123_PE_95_show_95_249676531_125_(in2):
while True:
return _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_String_58__33_show_58_0(
in2
)
# Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam1}
def _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam1_125_(
in0
):
while True:
return (65677, in0) # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam0}1}
# {io_bind1}
def io_bind1(e0, e1, e2, e3, e4, _idris_w, in0):
while True:
return APPLY0(io_bind0(e0, e1, e2, e3, e4, _idris_w, in0), _idris_w)
# Main.{main1}
def _idris_Main_46__123_main1_125_(in1):
while True:
return (65638, in1) # {U_Main.{main0}1}
# Prelude.Show.{primNumShow1}
def _idris_Prelude_46_Show_46__123_primNumShow1_125_(e0, e1, e2, e3, in0, in2, in3):
while True:
return (65684,) # {U_Prelude.Show.{primNumShow0}1}
# Prelude.Interactive.{putStr'1}
def _idris_Prelude_46_Interactive_46__123_putStr_39_1_125_(in1):
while True:
return (65698, None, None, Unit) # {U_io_return1}
# Main.{pythag1}
def _idris_Main_46__123_pythag1_125_(in1, in0, in2):
while True:
aux2 = (((in2 * in2) + (in1 * in1)) == (in0 * in0))
if aux2 == 0:
aux3 = False
else:
aux3 = True
aux1 = aux3
if not aux1: # Prelude.Bool.False
aux4 = _idris_PE_95_empty_95_8ff8f7b3(None)
else: # Prelude.Bool.True
aux4 = APPLY0(_idris_Prelude_46_Applicative_46_pure(None, None, (65735,)), Unit) # {U_PE_@@constructor of Prelude.Applicative.Alternative#Applicative f_5102bba82}
return _idris_Prelude_46_Monad_46_Prelude_46__64_Prelude_46_Monad_46_Monad_36_List_58__33__62__62__61__58_0(
None,
None,
aux4,
(65641, in2, in1, in0) # {U_Main.{pythag0}1}
)
# Prelude.Show.{showLitChar1}
def _idris_Prelude_46_Show_46__123_showLitChar1_125_(in1):
while True:
return (u'\\b' + in1)
# Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam2}
def _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
in2, in3
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char()
),
in2
),
in3
)
if aux1[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam2}
def _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
in2, in3
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int()
),
in2
),
in3
)
if aux1[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.{Int instance of Prelude.Show.Show_lam2}
def _idris_Prelude_46_Show_46__123_Int_32_instance_32_of_32_Prelude_46_Show_46_Show_95_lam2_125_(
in1
):
while True:
return (65675, in1) # {U_Prelude.Show.{Int instance of Prelude.Show.Show_lam1}1}
# Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam2}
def _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
in2, in3
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat()
),
in2
),
in3
)
if aux1[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab2}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab2_125_(
in2
):
while True:
return (65712, in2) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab1}1}
# {PE_show_249676532}
def _idris__123_PE_95_show_95_249676532_125_(in1):
while True:
return (65730,) # {U_{PE_show_249676531}1}
# Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam2}
def _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam2_125_(
in2, in3
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec()
),
in2
),
in3
)
if aux1[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# {io_bind2}
def io_bind2(e0, e1, e2, e3, e4, _idris_w):
while True:
return (65733, e0, e1, e2, e3, e4, _idris_w) # {U_{io_bind1}1}
# Main.{main2}
def _idris_Main_46__123_main2_125_(in0):
while True:
return _idris_PE_95_printLn_39__95_cfef5baf(
None,
_idris_Prelude_46_List_46_sortBy(
None,
(65639,), # {U_Main.{main1}1}
ConsList().cons(u'baz').cons(u'bar').cons(u'foo')
)
)
# Prelude.Show.{primNumShow2}
def _idris_Prelude_46_Show_46__123_primNumShow2_125_(in0, e0, e1, e2, e3):
while True:
aux1 = _idris_Prelude_46_Strings_46_strM(in0)
if aux1[0] == 1: # Prelude.Strings.StrCons
in2, in3 = aux1[1:]
return APPLY0(
_idris_Prelude_46_Show_46__123_primNumShow1_125_(e0, e1, e2, e3, in0, in2, in3),
in2
)
else: # Prelude.Strings.StrNil
return False
return _idris_error("unreachable due to case in tail position")
# Main.{pythag2}
def _idris_Main_46__123_pythag2_125_(in0, in1):
while True:
return _idris_Prelude_46_Monad_46_Prelude_46__64_Prelude_46_Monad_46_Monad_36_List_58__33__62__62__61__58_0(
None,
None,
_idris_Prelude_46_Prelude_46__64_Prelude_46_Enum_36_Int_58__33_enumFromTo_58_0(
1, in1
),
(65642, in1, in0) # {U_Main.{pythag1}1}
)
# Prelude.Show.{showLitChar2}
def _idris_Prelude_46_Show_46__123_showLitChar2_125_(in2):
while True:
return (u'\\t' + in2)
# Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam3}
def _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
in2
):
while True:
return (65653, in2) # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam2}1}
# Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam3}
def _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
in2
):
while True:
return (65659, in2) # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam2}1}
# Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam3}
def _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
in2
):
while True:
return (65669, in2) # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam2}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab3}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab3_125_(
in4
):
while True:
return _idris_PE_95_Prelude_46_Show_46__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_44__32_method_32_show_95_cfed4029(
in4
)
# Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam3}
def _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam3_125_(
in2
):
while True:
return (65679, in2) # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam2}1}
# Main.{pythag3}
def _idris_Main_46__123_pythag3_125_(in0):
while True:
return _idris_Prelude_46_Monad_46_Prelude_46__64_Prelude_46_Monad_46_Monad_36_List_58__33__62__62__61__58_0(
None,
None,
_idris_Prelude_46_Prelude_46__64_Prelude_46_Enum_36_Int_58__33_enumFromTo_58_0(
1, in0
),
(65643, in0) # {U_Main.{pythag2}1}
)
# Prelude.Show.{showLitChar3}
def _idris_Prelude_46_Show_46__123_showLitChar3_125_(in3):
while True:
return (u'\\n' + in3)
# Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam4}
def _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
in4, in5
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char()
),
in4
),
in5
)
if aux1[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam4}
def _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
in4, in5
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int()
),
in4
),
in5
)
if aux1[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam4}
def _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
in4, in5
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat()
),
in4
),
in5
)
if aux1[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab4}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab4_125_(
in7
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in7) # {U_prim__toStrInt1}, Prelude.Show.Open
# Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam4}
def _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam4_125_(
in4, in5
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec()
),
in4
),
in5
)
if aux1[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.{showLitChar4}
def _idris_Prelude_46_Show_46__123_showLitChar4_125_(in4):
while True:
return (u'\\v' + in4)
# Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam5}
def _idris_Prelude_46_Classes_46__123_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
in4
):
while True:
return (65655, in4) # {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam4}1}
# Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam5}
def _idris_Prelude_46_Classes_46__123_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
in4
):
while True:
return (65661, in4) # {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam4}1}
# Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam5}
def _idris_Prelude_46_Nat_46__123_Nat_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
in4
):
while True:
return (65671, in4) # {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam4}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab5}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab5_125_(
in8, in9
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), in8, in9) # {U_prim__toStrInt1}
# Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam5}
def _idris_Prelude_46_Show_46__123_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_95_lam5_125_(
in4
):
while True:
return (65681, in4) # {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam4}1}
# Prelude.Show.{showLitChar5}
def _idris_Prelude_46_Show_46__123_showLitChar5_125_(in5):
while True:
return (u'\\f' + in5)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab6}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab6_125_(
in8
):
while True:
return (65723, in8) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab5}1}
# Prelude.Show.{showLitChar6}
def _idris_Prelude_46_Show_46__123_showLitChar6_125_(in6):
while True:
return (u'\\r' + in6)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab7}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab7_125_(
in10
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in10) # {U_prim__toStrInt1}, Prelude.Show.Open
# Prelude.Show.{showLitChar7}
def _idris_Prelude_46_Show_46__123_showLitChar7_125_(in7):
while True:
aux1 = (in7 == u'H')
if aux1 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab8}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab8_125_(
in11, in12
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), in11, in12) # {U_prim__toStrInt1}
# Prelude.Show.{showLitChar8}
def _idris_Prelude_46_Show_46__123_showLitChar8_125_(in8):
while True:
return (u'\\\\' + in8)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab9}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab9_125_(
in11
):
while True:
return (65726, in11) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab8}1}
# Prelude.Show.{showLitChar9}
def _idris_Prelude_46_Show_46__123_showLitChar9_125_(in9):
while True:
return (u'\\DEL' + in9)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab10}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab10_125_(
in6
):
while True:
return _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36__40_a_44__32_b_41__58__33_show_58_0(
None,
None,
None,
None,
(0, (65722,), (65724,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab4}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab6}1}
(0, (65725,), (65727,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab7}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab9}1}
in6
)
# Prelude.Show.{showLitChar10}
def _idris_Prelude_46_Show_46__123_showLitChar10_125_(in10, in11):
while True:
return (in10 + in11)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab11}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab11_125_(
in5
):
while True:
return (65702,) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab10}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab12}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab12_125_(
in0
):
while True:
return _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36__40_a_44__32_b_41__58__33_show_58_0(
None,
None,
None,
None,
(0, (65701,), (65720,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab0}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab2}1}
(0, (65721,), (65703,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab3}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab11}1}
in0
)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab13}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab13_125_(
in15
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in15) # {U_prim__toStrInt1}, Prelude.Show.Open
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab14}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab14_125_(
in16, in17
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), in16, in17) # {U_prim__toStrInt1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab15}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab15_125_(
in16
):
while True:
return (65706, in16) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab14}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab16}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab16_125_(
in18
):
while True:
return _idris_PE_95_Prelude_46_Show_46__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_44__32_method_32_show_95_cfed4029(
in18
)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab17}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab17_125_(
in21
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in21) # {U_prim__toStrInt1}, Prelude.Show.Open
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab18}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab18_125_(
in22, in23
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), in22, in23) # {U_prim__toStrInt1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab19}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab19_125_(
in22
):
while True:
return (65710, in22) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab18}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab20}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab20_125_(
in24
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), in24) # {U_prim__toStrInt1}, Prelude.Show.Open
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab21}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab21_125_(
in25, in26
):
while True:
return _idris_Prelude_46_Show_46_primNumShow(None, (65700,), in25, in26) # {U_prim__toStrInt1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab22}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab22_125_(
in25
):
while True:
return (65714, in25) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab21}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab23}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab23_125_(
in20
):
while True:
return _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36__40_a_44__32_b_41__58__33_show_58_0(
None,
None,
None,
None,
(0, (65709,), (65711,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab17}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab19}1}
(0, (65713,), (65715,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab20}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab22}1}
in20
)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab24}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab24_125_(
in19
):
while True:
return (65716,) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab23}1}
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab25}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab25_125_(
in14
):
while True:
return _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36__40_a_44__32_b_41__58__33_show_58_0(
None,
None,
None,
None,
(0, (65705,), (65707,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab13}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab15}1}
(0, (65708,), (65717,)), # constructor of Prelude.Show.Show, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab16}1}, {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab24}1}
in14
)
# {PE_(a, b) instance of Prelude.Show.Show_a94d79ab26}
def _idris__123_PE_95__40_a_44__32_b_41__32_instance_32_of_32_Prelude_46_Show_46_Show_95_a94d79ab26_125_(
in13
):
while True:
return (65718,) # {U_{PE_(a, b) instance of Prelude.Show.Show_a94d79ab25}1}
# Decidable.Equality.Decidable.Equality.Char instance of Decidable.Equality.DecEq, method decEq, primitiveNotEq
def _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_Char_58__33_decEq_58_0_58_primitiveNotEq_58_0():
while True:
return None
# Decidable.Equality.Decidable.Equality.Int instance of Decidable.Equality.DecEq, method decEq, primitiveNotEq
def _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_Int_58__33_decEq_58_0_58_primitiveNotEq_58_0():
while True:
return None
# Decidable.Equality.Decidable.Equality.Integer instance of Decidable.Equality.DecEq, method decEq, primitiveNotEq
def _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_Integer_58__33_decEq_58_0_58_primitiveNotEq_58_0():
while True:
return None
# Decidable.Equality.Decidable.Equality.ManagedPtr instance of Decidable.Equality.DecEq, method decEq, primitiveNotEq
def _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_ManagedPtr_58__33_decEq_58_0_58_primitiveNotEq_58_0():
while True:
return None
# Decidable.Equality.Decidable.Equality.Ptr instance of Decidable.Equality.DecEq, method decEq, primitiveNotEq
def _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_Ptr_58__33_decEq_58_0_58_primitiveNotEq_58_0():
while True:
return None
# Decidable.Equality.Decidable.Equality.String instance of Decidable.Equality.DecEq, method decEq, primitiveNotEq
def _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_String_58__33_decEq_58_0_58_primitiveNotEq_58_0():
while True:
return None
# Prelude.Prelude.Int instance of Prelude.Enum, method enumFromTo, go
def _idris_Prelude_46_Prelude_46__64_Prelude_46_Enum_36_Int_58__33_enumFromTo_58_0_58_go_58_0(
e0, e1, e2, e3, e4
):
while True:
if e3 == 0:
return e2.cons(e4)
else:
in0 = (e3 - 1)
e0, e1, e2, e3, e4, = None, None, e2.cons(e4), in0, (e4 - 1),
continue
return _idris_error("unreachable due to tail call")
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.Prelude.Show.List a instance of Prelude.Show.Show, method show, show'
def _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_List_32_a_58__33_show_58_0_58_show_39__58_0(
e0, e1, e2, e3, e4, e5
):
while True:
if e5: # Prelude.List.::
in0, in1 = e5.head, e5.tail
if not in1: # Prelude.List.Nil
return (e4 + APPLY0(_idris_Prelude_46_Show_46_show(None, e3), in0))
else:
e0, e1, e2, e3, e4, e5, = None, None, None, e3, (e4 + (APPLY0(_idris_Prelude_46_Show_46_show(None, e3), in0) + u', ')), in1,
continue
return _idris_error("unreachable due to tail call")
return _idris_error("unreachable due to case in tail position")
else: # Prelude.List.Nil
return e4
return _idris_error("unreachable due to case in tail position")
# Decidable.Equality.Decidable.Equality.Bool instance of Decidable.Equality.DecEq, method decEq
def _idris_Decidable_46_Equality_46_Decidable_46_Equality_46__64_Decidable_46_Equality_46_DecEq_36_Bool_58__33_decEq_58_0(
e0, e1
):
while True:
if not e1: # Prelude.Bool.False
if not e0: # Prelude.Bool.False
return (0,) # Prelude.Basics.Yes
else: # Prelude.Bool.True
return (1,) # Prelude.Basics.No
return _idris_error("unreachable due to case in tail position")
else: # Prelude.Bool.True
if not e0: # Prelude.Bool.False
return (1,) # Prelude.Basics.No
else: # Prelude.Bool.True
return (0,) # Prelude.Basics.Yes
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
# Prelude.Prelude.Int instance of Prelude.Enum, method enumFromTo
def _idris_Prelude_46_Prelude_46__64_Prelude_46_Enum_36_Int_58__33_enumFromTo_58_0(
e0, e1
):
while True:
aux1 = _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int_58__33__60__61__58_0(
e0, e1
)
if not aux1: # Prelude.Bool.False
return ConsList()
else: # Prelude.Bool.True
return _idris_Prelude_46_Prelude_46__64_Prelude_46_Enum_36_Int_58__33_enumFromTo_58_0_58_go_58_0(
None,
None,
ConsList(),
(e1 - e0),
e1
)
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Nat.Nat instance of Prelude.Classes.Eq, method ==
def _idris_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Eq_36_Nat_58__33__61__61__58_0(
e0, e1
):
while True:
if e1 == 0:
if e0 == 0:
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
elif True:
in0 = (e1 - 1)
if e0 == 0:
return False
else:
in1 = (e0 - 1)
return APPLY0(APPLY0(_idris_Prelude_46_Classes_46__61__61_(None, (65738,)), in1), in0) # {U_Prelude.Nat.Nat instance of Prelude.Classes.Eq2}
return _idris_error("unreachable due to case in tail position")
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Classes.Ordering instance of Prelude.Classes.Eq, method ==
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Eq_36_Ordering_58__33__61__61__58_0(
e0, e1
):
while True:
if e1[0] == 1: # Prelude.Classes.EQ
if e0[0] == 1: # Prelude.Classes.EQ
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
elif e1[0] == 2: # Prelude.Classes.GT
if e0[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
elif e1[0] == 0: # Prelude.Classes.LT
if e0[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Show.Prec instance of Prelude.Classes.Eq, method ==
def _idris_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Eq_36_Prec_58__33__61__61__58_0(
e0, e1
):
while True:
if e1[0] == 4: # Prelude.Show.User
in0 = e1[1]
if e0[0] == 4: # Prelude.Show.User
in1 = e0[1]
return _idris_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Eq_36_Nat_58__33__61__61__58_0(
in1, in0
)
else:
aux1 = (_idris_Prelude_46_Show_46_precCon(e0) == _idris_Prelude_46_Show_46_precCon(e1))
if aux1 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
else:
aux2 = (_idris_Prelude_46_Show_46_precCon(e0) == _idris_Prelude_46_Show_46_precCon(e1))
if aux2 == 0:
return False
else:
return True
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
# Prelude.Foldable.Prelude.List.List instance of Prelude.Foldable.Foldable, method foldr
def _idris_Prelude_46_Foldable_46_Prelude_46_List_46__64_Prelude_46_Foldable_46_Foldable_36_List_58__33_foldr_58_0(
e0, e1, e2, e3, e4
):
while True:
if e4: # Prelude.List.::
in0, in1 = e4.head, e4.tail
return APPLY0(
APPLY0(e2, in0),
APPLY0(
APPLY0(
APPLY0(_idris_Prelude_46_Foldable_46_foldr(None, None, None, (65741,)), e2), # {U_Prelude.List.List instance of Prelude.Foldable.Foldable5}
e3
),
in1
)
)
else: # Prelude.List.Nil
return e3
return _idris_error("unreachable due to case in tail position")
# Prelude.Monad.Prelude.List instance of Prelude.Monad.Monad, method >>=
def _idris_Prelude_46_Monad_46_Prelude_46__64_Prelude_46_Monad_46_Monad_36_List_58__33__62__62__61__58_0(
e0, e1, e2, e3
):
while True:
return _idris_PE_95_concatMap_95_af3155d1(None, None, e3, e2)
# Prelude.Classes.Prelude.Classes.Char instance of Prelude.Classes.Ord, method <=
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33__60__61__58_0(
e0, e1
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46__60_(
None,
_idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char()
),
e0
),
e1
)
if not aux1: # Prelude.Bool.False
return _idris_Prelude_46_Classes_46__123_Prelude_46_Classes_46_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__60__61__95_lam0_125_(
e0, e1
)
else: # Prelude.Bool.True
return True
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Classes.Char instance of Prelude.Classes.Ord, method >=
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33__62__61__58_0(
e0, e1
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46__62_(
None,
_idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char()
),
e0
),
e1
)
if not aux1: # Prelude.Bool.False
return _idris_Prelude_46_Classes_46__123_Prelude_46_Classes_46_Char_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__62__61__95_lam0_125_(
e0, e1
)
else: # Prelude.Bool.True
return True
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Classes.Char instance of Prelude.Classes.Ord, method compare
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33_compare_58_0(
e0, e1
):
while True:
aux2 = (e0 == e1)
if aux2 == 0:
aux3 = False
else:
aux3 = True
aux1 = aux3
if not aux1: # Prelude.Bool.False
aux5 = (e0 < e1)
if aux5 == 0:
aux6 = False
else:
aux6 = True
aux4 = aux6
if not aux4: # Prelude.Bool.False
return (2,) # Prelude.Classes.GT
else: # Prelude.Bool.True
return (0,) # Prelude.Classes.LT
return _idris_error("unreachable due to case in tail position")
else: # Prelude.Bool.True
return (1,) # Prelude.Classes.EQ
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Classes.Int instance of Prelude.Classes.Ord, method <=
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int_58__33__60__61__58_0(
e0, e1
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46__60_(
None,
_idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int()
),
e0
),
e1
)
if not aux1: # Prelude.Bool.False
return _idris_Prelude_46_Classes_46__123_Prelude_46_Classes_46_Int_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__60__61__95_lam0_125_(
e0, e1
)
else: # Prelude.Bool.True
return True
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Classes.Int instance of Prelude.Classes.Ord, method compare
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int_58__33_compare_58_0(
e0, e1
):
while True:
aux2 = (e0 == e1)
if aux2 == 0:
aux3 = False
else:
aux3 = True
aux1 = aux3
if not aux1: # Prelude.Bool.False
aux5 = (e0 < e1)
if aux5 == 0:
aux6 = False
else:
aux6 = True
aux4 = aux6
if not aux4: # Prelude.Bool.False
return (2,) # Prelude.Classes.GT
else: # Prelude.Bool.True
return (0,) # Prelude.Classes.LT
return _idris_error("unreachable due to case in tail position")
else: # Prelude.Bool.True
return (1,) # Prelude.Classes.EQ
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Classes.Integer instance of Prelude.Classes.Ord, method compare
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Integer_58__33_compare_58_0(
e0, e1
):
while True:
aux2 = (e0 == e1)
if aux2 == 0:
aux3 = False
else:
aux3 = True
aux1 = aux3
if not aux1: # Prelude.Bool.False
aux5 = (e0 < e1)
if aux5 == 0:
aux6 = False
else:
aux6 = True
aux4 = aux6
if not aux4: # Prelude.Bool.False
return (2,) # Prelude.Classes.GT
else: # Prelude.Bool.True
return (0,) # Prelude.Classes.LT
return _idris_error("unreachable due to case in tail position")
else: # Prelude.Bool.True
return (1,) # Prelude.Classes.EQ
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Nat.Nat instance of Prelude.Classes.Ord, method compare
def _idris_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat_58__33_compare_58_0(
e0, e1
):
while True:
if e1 == 0:
if e0 == 0:
return (1,) # Prelude.Classes.EQ
else:
in0 = (e0 - 1)
return (2,) # Prelude.Classes.GT
return _idris_error("unreachable due to case in tail position")
else:
in1 = (e1 - 1)
if e0 == 0:
return (0,) # Prelude.Classes.LT
else:
in2 = (e0 - 1)
return APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46_compare(
None,
_idris_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat()
),
in2
),
in1
)
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Show.Prec instance of Prelude.Classes.Ord, method >=
def _idris_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec_58__33__62__61__58_0(
e0, e1
):
while True:
aux1 = APPLY0(
APPLY0(
_idris_Prelude_46_Classes_46__62_(
None,
_idris_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec()
),
e0
),
e1
)
if not aux1: # Prelude.Bool.False
return _idris_Prelude_46_Classes_46__123_Prelude_46_Show_46_Prec_32_instance_32_of_32_Prelude_46_Classes_46_Ord_44__32_method_32__62__61__95_lam0_125_(
e0, e1
)
else: # Prelude.Bool.True
return True
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Show.Prec instance of Prelude.Classes.Ord, method compare
def _idris_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec_58__33_compare_58_0(
e0, e1
):
while True:
if e1[0] == 4: # Prelude.Show.User
in0 = e1[1]
if e0[0] == 4: # Prelude.Show.User
in1 = e0[1]
return _idris_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat_58__33_compare_58_0(
in1, in0
)
else:
return _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Integer_58__33_compare_58_0(
_idris_Prelude_46_Show_46_precCon(e0),
_idris_Prelude_46_Show_46_precCon(e1)
)
return _idris_error("unreachable due to case in tail position")
else:
return _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Integer_58__33_compare_58_0(
_idris_Prelude_46_Show_46_precCon(e0),
_idris_Prelude_46_Show_46_precCon(e1)
)
return _idris_error("unreachable due to case in tail position")
# Prelude.Classes.Prelude.Classes.String instance of Prelude.Classes.Ord, method compare
def _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_String_58__33_compare_58_0(
e0, e1
):
while True:
aux2 = (e0 == e1)
if aux2 == 0:
aux3 = False
else:
aux3 = True
aux1 = aux3
if not aux1: # Prelude.Bool.False
aux5 = (e0 < e1)
if aux5 == 0:
aux6 = False
else:
aux6 = True
aux4 = aux6
if not aux4: # Prelude.Bool.False
return (2,) # Prelude.Classes.GT
else: # Prelude.Bool.True
return (0,) # Prelude.Classes.LT
return _idris_error("unreachable due to case in tail position")
else: # Prelude.Bool.True
return (1,) # Prelude.Classes.EQ
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.Prelude.Show.(a, b) instance of Prelude.Show.Show, method show
def _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36__40_a_44__32_b_41__58__33_show_58_0(
e0, e1, e2, e3, e4, e5, e6
):
while True:
assert True # Builtins.MkPair
in0, in1 = e6
return (u'(' + (APPLY0(_idris_Prelude_46_Show_46_show(None, e4), in0) + (u', ' + (APPLY0(_idris_Prelude_46_Show_46_show(None, e5), in1) + u')'))))
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.Prelude.Show.String instance of Prelude.Show.Show, method show
def _idris_Prelude_46_Show_46_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_String_58__33_show_58_0(
e0
):
while True:
aux1 = _idris_Prelude_46_Strings_46_strM(e0)
if aux1[0] == 1: # Prelude.Strings.StrCons
in0, in1 = aux1[1:]
aux2 = _idris__95_Prelude_46_Strings_46_unpack_95_with_95_24(
None,
_idris_Prelude_46_Strings_46_strM(in1)
).cons(in0)
else: # Prelude.Strings.StrNil
aux2 = ConsList()
return (u'"' + APPLY0(_idris_Prelude_46_Show_46_showLitString(aux2), u'"'))
# Prelude.List.sortBy, splitRec
def _idris_Prelude_46_List_46_sortBy_58_splitRec_58_2(e0, e1, e2, e3, e4, e5):
while True:
if e4: # Prelude.List.::
in0, in1 = e4.head, e4.tail
if e3: # Prelude.List.::
in2, in3 = e3.head, e3.tail
if in3: # Prelude.List.::
in4, in5 = in3.head, in3.tail
e0, e1, e2, e3, e4, e5, = None, None, None, in5, in1, (65648, None, None, None, e5, (65732, in0)), # {U_Prelude.Basics..1}, {U_{Prelude.List.sortBy, splitRec_lam0}1}
continue
return _idris_error("unreachable due to tail call")
else:
return (APPLY0(e5, ConsList()), e4)
return _idris_error("unreachable due to case in tail position")
else:
return (APPLY0(e5, ConsList()), e4)
return _idris_error("unreachable due to case in tail position")
else:
return (APPLY0(e5, ConsList()), e4)
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.showLitChar, asciiTab
def _idris_Prelude_46_Show_46_showLitChar_58_asciiTab_58_10(e0):
while True:
return ConsList().cons(u'US').cons(u'RS').cons(u'GS').cons(u'FS').cons(u'ESC').cons(u'SUB').cons(u'EM').cons(u'CAN').cons(u'ETB').cons(u'SYN').cons(u'NAK').cons(u'DC4').cons(u'DC3').cons(u'DC2').cons(u'DC1').cons(u'DLE').cons(u'SI').cons(u'SO').cons(u'CR').cons(u'FF').cons(u'VT').cons(u'LF').cons(u'HT').cons(u'BS').cons(u'BEL').cons(u'ACK').cons(u'ENQ').cons(u'EOT').cons(u'ETX').cons(u'STX').cons(u'SOH').cons(u'NUL')
# Prelude.Show.showLitChar, getAt
def _idris_Prelude_46_Show_46_showLitChar_58_getAt_58_10(e0, e1, e2):
while True:
if e2: # Prelude.List.::
in0, in1 = e2.head, e2.tail
if e1 == 0:
return in0
else:
in2 = (e1 - 1)
e0, e1, e2, = None, in2, in1,
continue
return _idris_error("unreachable due to tail call")
return _idris_error("unreachable due to case in tail position")
else: # Prelude.List.Nil
return None
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Strings.strM
def _idris__95_Prelude_46_Strings_46_strM_95_with_95_21(e0, e1):
while True:
if e1[0] == 1: # Prelude.Basics.No
return _idris_really_95_believe_95_me(None, None, (0,)) # Prelude.Strings.StrNil
else: # Prelude.Basics.Yes
return _idris_really_95_believe_95_me(None, None, (1, e0[0], e0[1:])) # Prelude.Strings.StrCons
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Strings.unpack
def _idris__95_Prelude_46_Strings_46_unpack_95_with_95_24(e0, e1):
while True:
if e1[0] == 1: # Prelude.Strings.StrCons
in0, in1 = e1[1:]
return _idris__95_Prelude_46_Strings_46_unpack_95_with_95_24(
None,
_idris_Prelude_46_Strings_46_strM(in1)
).cons(in0)
else: # Prelude.Strings.StrNil
return ConsList()
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Show.Prec instance of Prelude.Classes.Ord, method <
def _idris__95_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec_58__33__60__58_0_95_with_95_25(
e0, e1, e2
):
while True:
if e0[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Show.Prec instance of Prelude.Classes.Ord, method >
def _idris__95_Prelude_46_Classes_46_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec_58__33__62__58_0_95_with_95_27(
e0, e1, e2
):
while True:
if e0[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Show.firstCharIs
def _idris__95_Prelude_46_Show_46_firstCharIs_95_with_95_44(e0, e1, e2):
while True:
if e2[0] == 1: # Prelude.Strings.StrCons
in0, in1 = e2[1:]
return APPLY0(e0, in0)
else: # Prelude.Strings.StrNil
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Nat.Nat instance of Prelude.Classes.Ord, method <
def _idris__95_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat_58__33__60__58_0_95_with_95_82(
e0, e1, e2
):
while True:
if e0[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Nat.Nat instance of Prelude.Classes.Ord, method >
def _idris__95_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat_58__33__62__58_0_95_with_95_84(
e0, e1, e2
):
while True:
if e0[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Classes.Int instance of Prelude.Classes.Ord, method <
def _idris__95_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int_58__33__60__58_0_95_with_95_96(
e0, e1, e2
):
while True:
if e0[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Classes.Int instance of Prelude.Classes.Ord, method >
def _idris__95_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int_58__33__62__58_0_95_with_95_98(
e0, e1, e2
):
while True:
if e0[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Classes.Char instance of Prelude.Classes.Ord, method <
def _idris__95_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33__60__58_0_95_with_95_129(
e0, e1, e2
):
while True:
if e0[0] == 0: # Prelude.Classes.LT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# with block in Prelude.Classes.Prelude.Classes.Char instance of Prelude.Classes.Ord, method >
def _idris__95_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33__62__58_0_95_with_95_131(
e0, e1, e2
):
while True:
if e0[0] == 2: # Prelude.Classes.GT
return True
else:
return False
return _idris_error("unreachable due to case in tail position")
# Prelude.Nat.Nat instance of Prelude.Classes.Eq
def _idris_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Eq_36_Nat(meth0, meth1):
while True:
return _idris_Prelude_46_Classes_46_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Eq_36_Nat_58__33__61__61__58_0(
meth0, meth1
)
# Prelude.List.List instance of Prelude.Foldable.Foldable
def _idris_Prelude_46_List_46__64_Prelude_46_Foldable_46_Foldable_36_List(
meth0, meth1, meth2, meth3, meth4
):
while True:
return _idris_Prelude_46_Foldable_46_Prelude_46_List_46__64_Prelude_46_Foldable_46_Foldable_36_List_58__33_foldr_58_0(
None, None, meth2, meth3, meth4
)
# Prelude.Classes.Char instance of Prelude.Classes.Ord
def _idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char():
while True:
return (0, (65652,), (65654,), (65656,)) # constructor of Prelude.Classes.Ord, {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam1}1}, {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam3}1}, {U_Prelude.Classes.{Char instance of Prelude.Classes.Ord_lam5}1}
# Prelude.Classes.Int instance of Prelude.Classes.Ord
def _idris_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Int():
while True:
return (0, (65658,), (65660,), (65662,)) # constructor of Prelude.Classes.Ord, {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam1}1}, {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam3}1}, {U_Prelude.Classes.{Int instance of Prelude.Classes.Ord_lam5}1}
# Prelude.Nat.Nat instance of Prelude.Classes.Ord
def _idris_Prelude_46_Nat_46__64_Prelude_46_Classes_46_Ord_36_Nat():
while True:
return (0, (65668,), (65670,), (65672,)) # constructor of Prelude.Classes.Ord, {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam1}1}, {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam3}1}, {U_Prelude.Nat.{Nat instance of Prelude.Classes.Ord_lam5}1}
# Prelude.Show.Prec instance of Prelude.Classes.Ord
def _idris_Prelude_46_Show_46__64_Prelude_46_Classes_46_Ord_36_Prec():
while True:
return (0, (65678,), (65680,), (65682,)) # constructor of Prelude.Classes.Ord, {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam1}1}, {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam3}1}, {U_Prelude.Show.{Prec instance of Prelude.Classes.Ord_lam5}1}
# Prelude.Show.Int instance of Prelude.Show.Show
def _idris_Prelude_46_Show_46__64_Prelude_46_Show_46_Show_36_Int():
while True:
return (0, (65674,), (65676,)) # constructor of Prelude.Show.Show, {U_Prelude.Show.{Int instance of Prelude.Show.Show_lam0}1}, {U_Prelude.Show.{Int instance of Prelude.Show.Show_lam2}1}
# Prelude.List.case block in sortBy at ./Prelude/List.idr:775:22
def _idris_Prelude_46_List_46_sortBy_95__95__95__95__95_Prelude_95__95_List_95__95_idr_95_775_95_22_95_case(
e0, e1, e2, e3
):
while True:
assert True # Builtins.MkPair
in0, in1 = e3
return _idris_Prelude_46_List_46_mergeBy(
None,
e1,
_idris_Prelude_46_List_46_sortBy(None, e1, in0),
_idris_Prelude_46_List_46_sortBy(None, e1, in1)
)
return _idris_error("unreachable due to case in tail position")
# Prelude.Show.case block in showLitChar at ./Prelude/Show.idr:126:27
def _idris_Prelude_46_Show_46_showLitChar_95__95__95__95__95_Prelude_95__95_Show_95__95_idr_95_126_95_27_95_case(
e0, e1
):
while True:
if e1 is not None: # Prelude.Maybe.Just
in0 = e1
return (65648, None, None, None, (65699, u'\\'), (65683, in0)) # {U_Prelude.Basics..1}, {U_prim__strCons1}, {U_Prelude.Show.{case block in showLitChar at ./Prelude/Show.idr:126:27_lam0}1}
else: # Prelude.Maybe.Nothing
aux2 = _idris_Prelude_46_Classes_46_Prelude_46_Classes_46__64_Prelude_46_Classes_46_Ord_36_Char_58__33_compare_58_0(
e0,
u'\u007f'
)
if aux2[0] == 2: # Prelude.Classes.GT
aux3 = True
else:
aux3 = False
aux1 = aux3
if not aux1: # Prelude.Bool.False
return (65699, e0) # {U_prim__strCons1}
else: # Prelude.Bool.True
return (
65648, # {U_Prelude.Basics..1}
None,
None,
None,
(65699, u'\\'), # {U_prim__strCons1}
(
65673, # {U_Prelude.Show.protectEsc1}
(65650,), # {U_Prelude.Chars.isDigit1}
_idris_Prelude_46_Show_46_primNumShow(None, (65700,), (0,), ord(e0)) # {U_prim__toStrInt1}, Prelude.Show.Open
)
)
return _idris_error("unreachable due to case in tail position")
return _idris_error("unreachable due to case in tail position")
# case block in io_bind at IO.idr:109:34
def _idris_io_95_bind_95_IO_95__95_idr_95_109_95_34_95_case(
e0, e1, e2, e3, e4, e5, e6, e7
):
while True:
return APPLY0(e7, e5)
# case block in Void
def _idris_Void_95_case():
while True:
return None
# <<Void eliminator>>
def _idris_Void_95_elim():
while True:
return None
if __name__ == '__main__':
runMain0()
| 37.534916 | 424 | 0.692856 | 16,455 | 107,500 | 3.996536 | 0.035126 | 0.08266 | 0.063014 | 0.066238 | 0.884935 | 0.846783 | 0.800769 | 0.77755 | 0.72261 | 0.67468 | 0 | 0.14614 | 0.220372 | 107,500 | 2,863 | 425 | 37.548027 | 0.638528 | 0.231033 | 0 | 0.595539 | 0 | 0 | 0.056798 | 0 | 0 | 0 | 0 | 0 | 0.005051 | 1 | 0.104377 | false | 0.000421 | 0.001684 | 0.002946 | 0.335859 | 0.002525 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a694f879a184197aa8ce1cf476121ddc8a3dbef3 | 45 | py | Python | test/run/t487.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | test/run/t487.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | test/run/t487.py | timmartin/skulpt | 2e3a3fbbaccc12baa29094a717ceec491a8a6750 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | map(lambda x,y: x + y, [0,1,2,3,4,5], False)
| 22.5 | 44 | 0.533333 | 13 | 45 | 1.846154 | 0.846154 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0.155556 | 45 | 1 | 45 | 45 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a69b8f35650c988918b50adeea9c829092bce984 | 45 | py | Python | object_freezer/__init__.py | shunichironomura/freezer | 6e68c0c67b2936081505d1b7434e583f34bdcdab | [
"MIT"
] | null | null | null | object_freezer/__init__.py | shunichironomura/freezer | 6e68c0c67b2936081505d1b7434e583f34bdcdab | [
"MIT"
] | null | null | null | object_freezer/__init__.py | shunichironomura/freezer | 6e68c0c67b2936081505d1b7434e583f34bdcdab | [
"MIT"
] | null | null | null | from .core import *
from .freezeargs import * | 22.5 | 25 | 0.755556 | 6 | 45 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155556 | 45 | 2 | 25 | 22.5 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6b6e025b4779641bf715180071f86533fb6fbaa | 88 | py | Python | delightfulsoup/utils/__init__.py | etpinard/delightfulsoup | 6d8cf976bf216e0e311808ffbd871a5915ba7b09 | [
"MIT"
] | null | null | null | delightfulsoup/utils/__init__.py | etpinard/delightfulsoup | 6d8cf976bf216e0e311808ffbd871a5915ba7b09 | [
"MIT"
] | null | null | null | delightfulsoup/utils/__init__.py | etpinard/delightfulsoup | 6d8cf976bf216e0e311808ffbd871a5915ba7b09 | [
"MIT"
] | null | null | null | """
utils
=====
"""
from wget_images import wget_images
from unminify import unminify
| 9.777778 | 35 | 0.715909 | 11 | 88 | 5.545455 | 0.545455 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 88 | 8 | 36 | 11 | 0.824324 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6b98f0cae49265a2610359006488dc88600bda4 | 363 | py | Python | openrec/legacy/utils/samplers/__init__.py | amirbiran/openrec | 69a1c57a7a1eec49720b776279b9120b80630ba2 | [
"Apache-2.0"
] | 1 | 2018-01-12T03:46:34.000Z | 2018-01-12T03:46:34.000Z | openrec/legacy/utils/samplers/__init__.py | amirbiran/openrec | 69a1c57a7a1eec49720b776279b9120b80630ba2 | [
"Apache-2.0"
] | 6 | 2020-01-28T22:51:16.000Z | 2022-02-10T00:11:19.000Z | openrec/legacy/utils/samplers/__init__.py | amirbiran/openrec | 69a1c57a7a1eec49720b776279b9120b80630ba2 | [
"Apache-2.0"
] | null | null | null | from openrec.legacy.utils.samplers.sampler import Sampler
from openrec.legacy.utils.samplers.pairwise_sampler import PairwiseSampler
from openrec.legacy.utils.samplers.n_pairwise_sampler import NPairwiseSampler
from openrec.legacy.utils.samplers.pointwise_sampler import PointwiseSampler
from openrec.legacy.utils.samplers.explicit_sampler import ExplicitSampler
| 60.5 | 77 | 0.889807 | 45 | 363 | 7.066667 | 0.333333 | 0.172956 | 0.267296 | 0.345912 | 0.471698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055096 | 363 | 5 | 78 | 72.6 | 0.927114 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6e481e4a3a5cc9f8a198954393fbc6c3d584748 | 40 | py | Python | backend/garpix_page/models/__init__.py | IGrrrG/garpix_page | f2294dc516129440dba17745a45faa79aff8d05c | [
"MIT"
] | null | null | null | backend/garpix_page/models/__init__.py | IGrrrG/garpix_page | f2294dc516129440dba17745a45faa79aff8d05c | [
"MIT"
] | null | null | null | backend/garpix_page/models/__init__.py | IGrrrG/garpix_page | f2294dc516129440dba17745a45faa79aff8d05c | [
"MIT"
] | null | null | null | from .base_page import BasePage # noqa
| 20 | 39 | 0.775 | 6 | 40 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175 | 40 | 1 | 40 | 40 | 0.909091 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6f9815502af36b9870ff6bba2a91e827f1e6571 | 28 | py | Python | anvil/secrets/__init__.py | benlawraus/pyDALAnvilWorks | 8edc67b0fbe65bdcc0ef6fd2424f55046cacba7c | [
"MIT"
] | 6 | 2021-11-14T22:49:40.000Z | 2022-03-26T17:40:40.000Z | anvil/secrets/__init__.py | benlawraus/pyDALAnvilWorks | 8edc67b0fbe65bdcc0ef6fd2424f55046cacba7c | [
"MIT"
] | null | null | null | anvil/secrets/__init__.py | benlawraus/pyDALAnvilWorks | 8edc67b0fbe65bdcc0ef6fd2424f55046cacba7c | [
"MIT"
] | 1 | 2022-01-31T01:18:32.000Z | 2022-01-31T01:18:32.000Z | from .anvilSecrets import *
| 14 | 27 | 0.785714 | 3 | 28 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
471c3ae90c4ba086a5925a8713bc00045ba1fa58 | 29 | py | Python | streamscrobbler/__init__.py | dirble/streamscrobbler-python | 1735893c9f504cc3b64959978e56a20dee0a8ece | [
"MIT"
] | 17 | 2016-10-05T13:03:25.000Z | 2022-03-27T01:03:01.000Z | streamscrobbler/__init__.py | Horrendus/streamscrobbler-python | ec8fd4e1549b7cba25833950aa565b17700453e0 | [
"MIT"
] | 4 | 2015-10-01T09:45:40.000Z | 2016-08-30T23:14:10.000Z | streamscrobbler/__init__.py | dirble/streamscrobbler-python | 1735893c9f504cc3b64959978e56a20dee0a8ece | [
"MIT"
] | 6 | 2017-07-11T03:40:58.000Z | 2021-06-14T17:35:36.000Z | from streamscrobbler import * | 29 | 29 | 0.862069 | 3 | 29 | 8.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b49efe74b6acf9c6efd95f43ba7b8b61e36d9ec | 50 | py | Python | sleeve-py/src/lib/IaC/Parameter/src/ParamFileParser.py | Sun-Wukong/sleeve | 93fe8dc6c580f1d5ddbcccdc01d9d9fdc0e9e3e8 | [
"MIT"
] | null | null | null | sleeve-py/src/lib/IaC/Parameter/src/ParamFileParser.py | Sun-Wukong/sleeve | 93fe8dc6c580f1d5ddbcccdc01d9d9fdc0e9e3e8 | [
"MIT"
] | null | null | null | sleeve-py/src/lib/IaC/Parameter/src/ParamFileParser.py | Sun-Wukong/sleeve | 93fe8dc6c580f1d5ddbcccdc01d9d9fdc0e9e3e8 | [
"MIT"
] | null | null | null | import jmespath
class ParamFileParser():
pass | 12.5 | 24 | 0.76 | 5 | 50 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18 | 50 | 4 | 25 | 12.5 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5b50d48eebfe33cba4b5334f411973f40843bcd9 | 32 | py | Python | Calculator/Sqrt.py | JohnnyUtah-9/calculatorclassproject | 4e24532d16549d9d2d2198331e9a283f48313b80 | [
"MIT"
] | null | null | null | Calculator/Sqrt.py | JohnnyUtah-9/calculatorclassproject | 4e24532d16549d9d2d2198331e9a283f48313b80 | [
"MIT"
] | null | null | null | Calculator/Sqrt.py | JohnnyUtah-9/calculatorclassproject | 4e24532d16549d9d2d2198331e9a283f48313b80 | [
"MIT"
] | 1 | 2021-02-28T22:22:08.000Z | 2021-02-28T22:22:08.000Z | def sqrt(a):
return a ** .5
| 10.666667 | 18 | 0.5 | 6 | 32 | 2.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.3125 | 32 | 2 | 19 | 16 | 0.681818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f339a2ddabfd923b848d12a929eebf481e82375a | 991 | py | Python | 2021/06/01.py | apurvnakade/2021-advent-of-code | 08448c3aa8f00252da7c87094ca2a557c6a8b783 | [
"MIT"
] | null | null | null | 2021/06/01.py | apurvnakade/2021-advent-of-code | 08448c3aa8f00252da7c87094ca2a557c6a8b783 | [
"MIT"
] | null | null | null | 2021/06/01.py | apurvnakade/2021-advent-of-code | 08448c3aa8f00252da7c87094ca2a557c6a8b783 | [
"MIT"
] | null | null | null | import numpy as np
num_of_days = 257
fish_numbers = np.zeros(9)
initial_fish = [5,1,1,5,4,2,1,2,1,2,2,1,1,1,4,2,2,4,1,1,1,1,1,4,1,1,1,1,1,5,3,1,4,1,1,1,1,1,4,1,5,1,1,1,4,1,2,2,3,1,5,1,1,5,1,1,5,4,1,1,1,4,3,1,1,1,3,1,5,5,1,1,1,1,5,3,2,1,2,3,1,5,1,1,4,1,1,2,1,5,1,1,1,1,5,4,5,1,3,1,3,3,5,5,1,3,1,5,3,1,1,4,2,3,3,1,2,4,1,1,1,1,1,1,1,2,1,1,4,1,3,2,5,2,1,1,1,4,2,1,1,1,4,2,4,1,1,1,1,4,1,3,5,5,1,2,1,3,1,1,4,1,1,1,1,2,1,1,4,2,3,1,1,1,1,1,1,1,4,5,1,1,3,1,1,2,1,1,1,5,1,1,1,1,1,3,2,1,2,4,5,1,5,4,1,1,3,1,1,5,5,1,3,1,1,1,1,4,4,2,1,2,1,1,5,1,1,4,5,1,1,1,1,1,1,1,1,1,1,3,1,1,1,1,1,4,2,1,1,1,2,5,1,4,1,1,1,4,1,1,5,4,4,3,1,1,4,5,1,1,3,5,3,1,2,5,3,4,1,3,5,4,1,3,1,5,1,4,1,1,4,2,1,1,1,3,2,1,1,4]
for i in initial_fish:
fish_numbers[i] += 1
for day in range(1, num_of_days):
old_fish_numbers = fish_numbers.copy()
for timer in range(8):
fish_numbers[timer] = old_fish_numbers[timer + 1]
fish_numbers[8] = old_fish_numbers[0]
fish_numbers[6] += old_fish_numbers[0]
print(sum(fish_numbers)) | 55.055556 | 616 | 0.590313 | 369 | 991 | 1.528455 | 0.084011 | 0.340426 | 0.281915 | 0.212766 | 0.443262 | 0.310284 | 0.173759 | 0.102837 | 0.017731 | 0 | 0 | 0.335845 | 0.062563 | 991 | 18 | 617 | 55.055556 | 0.271259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.076923 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f350b345421b3214f1168a04cbf77b5e213ef915 | 114 | py | Python | pyapp_ext/aio_pika/checks.py | pyapp-org/pyapp.aio-pika | db568ceea747bedf4c1c29012ba61c30b5a8507b | [
"BSD-3-Clause"
] | null | null | null | pyapp_ext/aio_pika/checks.py | pyapp-org/pyapp.aio-pika | db568ceea747bedf4c1c29012ba61c30b5a8507b | [
"BSD-3-Clause"
] | null | null | null | pyapp_ext/aio_pika/checks.py | pyapp-org/pyapp.aio-pika | db568ceea747bedf4c1c29012ba61c30b5a8507b | [
"BSD-3-Clause"
] | null | null | null | from pyapp.checks.registry import register
from .factory import connection_factory
register(connection_factory)
| 19 | 42 | 0.859649 | 14 | 114 | 6.857143 | 0.571429 | 0.354167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096491 | 114 | 5 | 43 | 22.8 | 0.932039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f3a4f42945b7ddefb846de07a7792d0a3507281f | 61 | py | Python | useless/__init__.py | DiamondsBattle/useless | 8e1823fd897f2a67b3e9aab84fb53a5b18d0b716 | [
"MIT"
] | null | null | null | useless/__init__.py | DiamondsBattle/useless | 8e1823fd897f2a67b3e9aab84fb53a5b18d0b716 | [
"MIT"
] | null | null | null | useless/__init__.py | DiamondsBattle/useless | 8e1823fd897f2a67b3e9aab84fb53a5b18d0b716 | [
"MIT"
] | null | null | null | from useless.stack import Stack
from useless.globals import * | 30.5 | 31 | 0.836066 | 9 | 61 | 5.666667 | 0.555556 | 0.431373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114754 | 61 | 2 | 32 | 30.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f3be712bb68d5b506646c7edde2509061b3b92ad | 8,696 | py | Python | flopz/arch/arm/thumb/auto_instructions.py | LuDubies/flopz | 3e11f44af40a105b886d0f44d6de505203e72ae2 | [
"Apache-2.0"
] | 7 | 2021-11-19T15:53:58.000Z | 2022-03-28T03:38:52.000Z | flopz/arch/arm/thumb/auto_instructions.py | LuDubies/flopz | 3e11f44af40a105b886d0f44d6de505203e72ae2 | [
"Apache-2.0"
] | null | null | null | flopz/arch/arm/thumb/auto_instructions.py | LuDubies/flopz | 3e11f44af40a105b886d0f44d6de505203e72ae2 | [
"Apache-2.0"
] | 1 | 2022-03-25T12:44:01.000Z | 2022-03-25T12:44:01.000Z | from typing import List
from flopz.arch.register import Register
from flopz.arch.auto_instruction import AutoInstruction
from flopz.arch.arm.thumb.instructions import *
from enum import Enum
class AutoInstructionFailure(Exception):
def __init__(self, msg):
super().__init__(msg)
class AutoStore(AutoInstruction):
def __init__(self, *args, **kwargs):
super().__init__()
# use arg types to match fitting instruction
self.argtypes = [type(a) for a in args[:3]]
self.kwords = kwargs.keys()
self.args = args
self.kwargs = kwargs
def expand(self) -> List[ThumbInstruction]:
# check if we want to store halfwords or bytes
byte = False
hword = False
if "byte" in self.kwords:
byte = self.kwargs["byte"]
if "hword" in self.kwords:
hword = self.kwargs["hword"]
if self.argtypes == [Register, Register, Register]:
# its a register store instruction, allowed kwargs are: shift, byte, hword
if "shift" in self.kwords:
return [StrWR(*(a.get_val for a in self.args[:3]), shift=self.kwargs["shift"], byte=byte, hword=hword)]
else:
return [StrWR(*(a.get_val for a in self.args[3:]), shift=0, byte=byte, hword=hword)]
elif self.argtypes == [Register, Register, int]:
# its a immediate store instruction, allowed kwargs are: index, wback, byte, hword
registers = self.args[:2]
offset = self.args[2]
# decide on 16Bit encoding or 32 Bit encoding with either 8Bit or 12Bit immediate
# check for special indexing
indexing = "offset"
if "index" in self.kwords:
if not self.kwargs["index"]:
if "wback" not in self.kwords or not self.kwargs["wback"]:
raise AutoInstructionFailure("Invalid indexing (index == False && wback == False")
else:
indexing = "post-index"
if "index" not in self.kwords or self.kwargs["index"]:
if "wback" in self.kwords and self.kwargs["wback"]:
indexing = "pre-index"
# we have to use 32Bit T4 encoding for negative offsets or writeback
if indexing != "offset" or offset < 0:
if indexing == "offset":
return [StrW(registers[0].get_val, registers[1].get_val, offset=offset, byte=byte, hword=hword)]
elif indexing == "pre-index":
return [StrW(registers[0].get_val, registers[1].get_val, offset=offset, index=True, wback=False, byte=byte, hword=hword)]
elif indexing == "post-index":
return [StrW(registers[0].get_val, registers[1].get_val, offset=offset, index=False, wback=True, byte=byte, hword=hword)]
# check offset and register range to decide between 16 or 32Bit encoding
try:
# try to use 16Bit encoding
if byte:
return [Strb(*(r.get_val for r in registers), offset=offset)]
elif hword:
return [Strh(*(r.get_val for r in registers), offset=offset)]
else:
return [Str(*(r.get_val for r in registers), offset=offset)]
except ValueError:
# use 32Bit encoding
return [StrWI12(*(r.get_val for r in registers), offset=offset, byte=byte, hword=hword)]
else:
AutoInstructionFailure("Invalid argument types for {type(self)} AutoInstruction")
class AutoLoad(AutoInstruction):
def __init__(self, *args, **kwargs):
super().__init__()
# use arg types to match fitting instruction
self.argtypes = [type(a) for a in args[:3]]
self.kwords = kwargs.keys()
self.args = args
self.kwargs = kwargs
def expand(self) -> List[ThumbInstruction]:
# check if we want to load halfwords or bytes
byte = False
hword = False
if "byte" in self.kwords:
byte = self.kwargs["byte"]
if "hword" in self.kwords:
hword = self.kwargs["hword"]
if self.argtypes == [Register, Register, Register]:
# its a register store instruction, allowed kwargs are: shift, byte, hword
if "shift" in self.kwords:
return [LdrWR(*(a.get_val for a in self.args[:3]), shift=self.kwargs["shift"], byte=byte, hword=hword)]
else:
return [LdrWR(*(a.get_val for a in self.args[3:]), shift=0, byte=byte, hword=hword)]
elif self.argtypes == [Register, Register, int]:
# its a immediate store instruction, allowed kwargs are: index, wback, byte, hword
registers = self.args[:2]
offset = self.args[2]
# decide on 16Bit encoding or 32 Bit encoding with either 8Bit or 12Bit immediate
# check for special indexing
indexing = "offset"
if "index" in self.kwords:
if not self.kwargs["index"]:
if "wback" not in self.kwords or not self.kwargs["wback"]:
raise Exception("Invalid indexing (index == False && wback == False")
else:
indexing = "post-index"
if "index" not in self.kwords or self.kwargs["index"]:
if "wback" in self.kwords and self.kwargs["wback"]:
indexing = "pre-index"
# we have to use 32Bit T4 encoding for negative offsets or writeback
if indexing != "offset" or offset < 0:
if indexing == "offset":
return [LdrW(registers[0].get_val, registers[1].get_val, offset=offset, byte=byte, hword=hword)]
elif indexing == "pre-index":
return [LdrW(registers[0].get_val, registers[1].get_val, offset=offset, index=True, wback=False,
byte=byte, hword=hword)]
elif indexing == "post-index":
return [LdrW(registers[0].get_val, registers[1].get_val, offset=offset, index=False, wback=True,
byte=byte, hword=hword)]
# check offset and register range to decide between 16 or 32Bit encoding
try:
# try to use 16Bit encoding
if byte:
return [Ldrb(*(r.get_val for r in registers), offset=offset)]
elif hword:
return [Ldrh(*(r.get_val for r in registers), offset=offset)]
else:
return [Ldr(*(r.get_val for r in registers), offset=offset)]
except ValueError:
# use 32Bit encoding
return [LdrWI12(*(r.get_val for r in registers), offset=offset, byte=byte, hword=hword)]
elif self.argtypes == [Register, int]:
# load literalinstruction
return [LdrWLit(self.args[0].get_val, self.args[1])]
else:
AutoInstructionFailure("Invalid argument types for {type(self)} AutoInstruction")
class AutoBranch(AutoInstruction):
def __init__(self, *args):
super().__init__()
self.cond = None
self.offset = None
if not any(type(a) == int for a in args):
raise AutoInstructionFailure("AutoBranch instruction needs an int offset")
if len(args) == 1:
# arg has to be the desired offset
self.offset = args[0]
elif len(args) == 2:
# find condition and offset
if not any(isinstance(a, Enum) for a in args):
raise AutoInstructionFailure("AutoBranch instruction with 2 arguments expects a condition argument")
if isinstance(args[0], Enum):
self.cond = args[0]
self.offset = args[1]
else:
self.cond = args[1]
self.offset = args[0]
else:
raise AutoInstructionFailure(f"Invalid amount of arguments {len(args)} for AutoBranch (expects 2)")
def expand(self) -> List[ThumbInstruction]:
if self.cond is None:
# unconditional branch
try:
return [B_T2(self.offset)]
except ValueError:
return [B_T4(self.offset)]
else:
# conditional branch
try:
return [B_T1(self.cond, self.offset)]
except ValueError:
return [B_T3(self.cond, self.offset)]
# TODO encode jumps bigger than allowed for cond as IT and uncond
| 43.263682 | 141 | 0.563132 | 1,029 | 8,696 | 4.698737 | 0.14966 | 0.031024 | 0.034747 | 0.044674 | 0.804757 | 0.791727 | 0.778077 | 0.778077 | 0.749121 | 0.749121 | 0 | 0.014204 | 0.336132 | 8,696 | 200 | 142 | 43.48 | 0.823315 | 0.143629 | 0 | 0.636364 | 0 | 0 | 0.085445 | 0 | 0 | 0 | 0 | 0.005 | 0 | 1 | 0.048951 | false | 0 | 0.034965 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f3c0e86101f9a0584171d8622d53e44be22b565b | 37 | py | Python | init/31_addons.py | sourav-majumder/qtlab | 96b2a127b1df7b45622c90229bd5ef8a4083614e | [
"MIT"
] | null | null | null | init/31_addons.py | sourav-majumder/qtlab | 96b2a127b1df7b45622c90229bd5ef8a4083614e | [
"MIT"
] | null | null | null | init/31_addons.py | sourav-majumder/qtlab | 96b2a127b1df7b45622c90229bd5ef8a4083614e | [
"MIT"
] | null | null | null | from addons.batch import batch_start
| 18.5 | 36 | 0.864865 | 6 | 37 | 5.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cac18f3777df536b1e9337d497b0ad17fad95ccd | 79 | py | Python | devel/.private/px_comm/lib/python2.7/dist-packages/px_comm/msg/__init__.py | akshastry/Neo_WS | 6c646227b1fedf4fb8cf700533ca8fc47f381b46 | [
"MIT"
] | 1 | 2021-08-31T03:07:52.000Z | 2021-08-31T03:07:52.000Z | devel/.private/px_comm/lib/python2.7/dist-packages/px_comm/msg/__init__.py | akshastry/Neo_WS | 6c646227b1fedf4fb8cf700533ca8fc47f381b46 | [
"MIT"
] | null | null | null | devel/.private/px_comm/lib/python2.7/dist-packages/px_comm/msg/__init__.py | akshastry/Neo_WS | 6c646227b1fedf4fb8cf700533ca8fc47f381b46 | [
"MIT"
] | null | null | null | from ._CameraInfo import *
from ._Mavlink import *
from ._OpticalFlow import *
| 19.75 | 27 | 0.772152 | 9 | 79 | 6.444444 | 0.555556 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151899 | 79 | 3 | 28 | 26.333333 | 0.865672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cade3bb429b3f204518570195e9d5870df60d153 | 228 | py | Python | crosswalk/hosts/pm/scene.py | mikemalinowski/crosswalk | 9e1c49fcfd3a4a38e24e59660d06c2d903cc3ff4 | [
"MIT"
] | 1 | 2019-03-09T12:55:31.000Z | 2019-03-09T12:55:31.000Z | crosswalk/hosts/pm/scene.py | mikemalinowski/crosswalk | 9e1c49fcfd3a4a38e24e59660d06c2d903cc3ff4 | [
"MIT"
] | null | null | null | crosswalk/hosts/pm/scene.py | mikemalinowski/crosswalk | 9e1c49fcfd3a4a38e24e59660d06c2d903cc3ff4 | [
"MIT"
] | null | null | null | import crosswalk
def rename(element, new_name):
return None
def delete(element):
return None
def create(element_type, name):
return None
def select(elements):
return None
def deselect(elements=None):
return None
| 9.913043 | 31 | 0.745614 | 32 | 228 | 5.25 | 0.46875 | 0.297619 | 0.309524 | 0.202381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175439 | 228 | 22 | 32 | 10.363636 | 0.893617 | 0 | 0 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.454545 | false | 0 | 0.090909 | 0.454545 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1b03e1f2d6a8f6aa144f186ee2012ec8ee1b1d02 | 34 | py | Python | skt_cli/packages/utils/options_parser.py | Kludex/socketio-cli | 02252cd24284fe2490a4e2c40f8fc92cfbd02b83 | [
"MIT"
] | null | null | null | skt_cli/packages/utils/options_parser.py | Kludex/socketio-cli | 02252cd24284fe2490a4e2c40f8fc92cfbd02b83 | [
"MIT"
] | 20 | 2021-05-03T18:02:23.000Z | 2022-03-12T12:01:04.000Z | Lib/site-packages/socket_cli/utils/options_parser.py | fochoao/cpython | 3dc84b260e5bced65ebc2c45c40c8fa65f9b5aa9 | [
"bzip2-1.0.6",
"0BSD"
] | null | null | null | from optparse import OptionParser
| 17 | 33 | 0.882353 | 4 | 34 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b104299715d5aa2f830933ad66b6d6c34780cd6 | 3,257 | py | Python | tinyauth/backends/proxy.py | Jc2k/microauth | ff7c9a1aa493fe50f7f59f618f3317910551b99d | [
"Apache-2.0"
] | 2 | 2018-06-07T18:39:37.000Z | 2020-05-16T11:08:29.000Z | tinyauth/backends/proxy.py | Jc2k/microauth | ff7c9a1aa493fe50f7f59f618f3317910551b99d | [
"Apache-2.0"
] | 2 | 2017-11-19T16:52:01.000Z | 2018-08-11T10:49:08.000Z | tinyauth/backends/proxy.py | Jc2k/microauth | ff7c9a1aa493fe50f7f59f618f3317910551b99d | [
"Apache-2.0"
] | 1 | 2018-05-26T06:03:04.000Z | 2018-05-26T06:03:04.000Z | import base64
import datetime
import requests
from flask import current_app
from tinyauth import exceptions
from tinyauth.utils.cache import cache
class Backend(object):
def __init__(self):
self.session = requests.Session()
@cache()
def get_policies(self, region, service, username):
endpoint = current_app.config['TINYAUTH_ENDPOINT']
uri = f'/api/v1/regions/{region}/services/{service}/user-policies/{username}'
response = self.session.get(
f'{endpoint}{uri}',
auth=(
current_app.config['TINYAUTH_ACCESS_KEY_ID'],
current_app.config['TINYAUTH_SECRET_ACCESS_KEY'],
),
headers={
'Accept': 'application/json',
},
verify=current_app.config.get('TINYAUTH_VERIFY', True),
)
if response.status_code == 404:
raise exceptions.NoSuchKey(identity=username)
expires = datetime.datetime.strptime(response.headers['Expires'], '%a, %d %b %Y %H:%M:%S GMT')
return expires, response.json()
@cache()
def get_user_key(self, protocol, region, service, date, username):
endpoint = current_app.config['TINYAUTH_ENDPOINT']
token_id = '/'.join((
username,
protocol,
date.strftime('%Y%m%d'),
))
uri = f'/api/v1/regions/{region}/services/{service}/user-signing-tokens/{token_id}'
response = self.session.get(
f'{endpoint}{uri}',
auth=(
current_app.config['TINYAUTH_ACCESS_KEY_ID'],
current_app.config['TINYAUTH_SECRET_ACCESS_KEY'],
),
headers={
'Accept': 'application/json',
},
verify=current_app.config.get('TINYAUTH_VERIFY', True),
)
if response.status_code == 404:
raise exceptions.NoSuchKey(identity=username)
expires = datetime.datetime.strptime(response.headers['Expires'], '%a, %d %b %Y %H:%M:%S GMT')
token = response.json()
token['key'] = base64.b64decode(token['key'])
return expires, token
@cache()
def get_access_key(self, protocol, region, service, date, access_key_id):
endpoint = current_app.config['TINYAUTH_ENDPOINT']
token_id = '/'.join((
access_key_id,
protocol,
date.strftime('%Y%m%d'),
))
uri = f'/api/v1/regions/{region}/services/{service}/access-key-signing-tokens/{token_id}'
response = self.session.get(
f'{endpoint}{uri}',
auth=(
current_app.config['TINYAUTH_ACCESS_KEY_ID'],
current_app.config['TINYAUTH_SECRET_ACCESS_KEY'],
),
headers={
'Accept': 'application/json',
},
verify=current_app.config.get('TINYAUTH_VERIFY', True),
)
if response.status_code == 404:
raise exceptions.NoSuchKey(identity=access_key_id)
expires = datetime.datetime.strptime(response.headers['Expires'], '%a, %d %b %Y %H:%M:%S GMT')
token = response.json()
token['key'] = base64.b64decode(token['key'])
return expires, token
| 32.247525 | 102 | 0.576297 | 348 | 3,257 | 5.224138 | 0.204023 | 0.071507 | 0.105611 | 0.118812 | 0.812981 | 0.812981 | 0.777778 | 0.746975 | 0.746975 | 0.666117 | 0 | 0.00952 | 0.290451 | 3,257 | 100 | 103 | 32.57 | 0.777153 | 0 | 0 | 0.6875 | 0 | 0.025 | 0.213387 | 0.112373 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.075 | 0 | 0.175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1b3f6c19754e1b5228d4854cf42ab84b684c0020 | 7,773 | py | Python | util/dataset.py | Isaac-Li-cn/certify_robustness | f904dc923afc6354e406c57a1c923d13fc39d315 | [
"BSD-3-Clause"
] | null | null | null | util/dataset.py | Isaac-Li-cn/certify_robustness | f904dc923afc6354e406c57a1c923d13fc39d315 | [
"BSD-3-Clause"
] | null | null | null | util/dataset.py | Isaac-Li-cn/certify_robustness | f904dc923afc6354e406c57a1c923d13fc39d315 | [
"BSD-3-Clause"
] | null | null | null | import pickle
import numpy as np
import torch
from torchvision import datasets, transforms
def load_pkl(pkl_file, batch_size):
info = pickle.load(open(pkl_file, 'rb'))
data_batch = info['data']
label_batch = info['label']
batch_num = data_batch.shape[0] // batch_size
point_num = data_batch.shape[0]
while True:
for batch_idx in range(batch_num):
data_this_batch = data_batch[batch_idx * batch_size : (batch_idx + 1) * batch_size]
label_this_batch = label_batch[batch_idx * batch_size : (batch_idx + 1) * batch_size]
yield torch.FloatTensor(data_this_batch), torch.LongTensor(label_this_batch)
permute_idx = np.random.permutation(point_num)
data_batch = data_batch[permute_idx]
label_batch = label_batch[permute_idx]
def mnist(batch_size, batch_size_test, horizontal_flip = False, random_clip = False, normalization = None):
if normalization == None:
normalization = transforms.Normalize(mean = [0.5,], std = [0.5,])
basic_transform = [transforms.ToTensor(), normalization]
data_augumentation = []
if horizontal_flip == True:
data_augumentation.append(transforms.RandomHorizontalFlip())
if random_clip == True:
data_augumentation.append(transforms.RandomCrop(28, 4))
train_set = datasets.MNIST(root = './data', train = True, download = True,
transform = transforms.Compose(data_augumentation + basic_transform))
train_loader = torch.utils.data.DataLoader(train_set, batch_size = batch_size, shuffle = True, pin_memory = True)
test_set = datasets.MNIST(root = './data', train = False, download = True,
transform = transforms.Compose(basic_transform))
test_loader = torch.utils.data.DataLoader(test_set, batch_size = batch_size_test, shuffle = False, pin_memory = True)#windows下的多线程读取有点问题
return train_loader, test_loader
def load_mnist(batch_size, flatten = True, dset = 'test', horizontal_flip = False,
random_clip = False, normalization = None, subset = None):
if subset == None:
train_loader, test_loader = mnist(batch_size = batch_size, batch_size_test = batch_size,
horizontal_flip = horizontal_flip, random_clip = random_clip, normalization = normalization)
loader = {'train': train_loader, 'test': test_loader}[dset]
while True:
for idx, (data_batch, label_batch) in enumerate(loader, 0):
if data_batch.shape[0] != batch_size:
continue
if flatten == True:
data_batch = data_batch.view(data_batch.shape[0], -1)
yield data_batch, label_batch
else:
train_loader, test_loader = mnist(batch_size = 1, batch_size_test = 1, horizontal_flip = horizontal_flip,
random_clip = random_clip, normalization = normalization)
loader = {'train': train_loader, 'test': test_loader}[dset]
label2mapping = {label: idx for idx, label in enumerate(subset)}
while True:
instance_num = 0
data_list = []
label_list = []
for idx, (data_batch, label_batch) in enumerate(loader, 0):
label = int(label_batch.data.cpu().numpy()[0])
if label in label2mapping:
labelmap = label2mapping[label]
data_list.append(data_batch)
label_list.append(labelmap)
if len(label_list) == batch_size:
data_batch_cat = torch.cat(data_list, dim = 0)
label_batch_cat = torch.LongTensor(label_list)
data_list = []
label_list = []
yield data_batch_cat, label_batch_cat
def fmnist(batch_size, batch_size_test, horizontal_flip = False, random_clip = False, normalization = None):
if normalization == None:
normalization = transforms.Normalize(mean = [0.5,], std = [0.5,])
basic_transform = [transforms.ToTensor(), normalization]
data_augumentation = []
if horizontal_flip == True:
data_augumentation.append(transforms.RandomHorizontalFlip())
if random_clip == True:
data_augumentation.append(transforms.RandomCrop(28, 4))
train_set = datasets.FashionMNIST(root = './data', train = True, download = True,
transform = transforms.Compose(data_augumentation + basic_transform))
train_loader = torch.utils.data.DataLoader(train_set, batch_size = batch_size, shuffle = True, num_workers = 4, pin_memory = True)
test_set = datasets.FashionMNIST(root = './data', train = False, download = True,
transform = transforms.Compose(basic_transform))
test_loader = torch.utils.data.DataLoader(test_set, batch_size = batch_size_test, shuffle = False, num_workers = 4, pin_memory = True)
return train_loader, test_loader
def load_fmnist(batch_size, flatten = True, dset = 'test', horizontal_flip = False,
random_clip = False, normalization = None, subset = None):
if subset == None:
train_loader, test_loader = fmnist(batch_size = batch_size, batch_size_test = batch_size,
horizontal_flip = horizontal_flip, random_clip = random_clip, normalization = normalization)
loader = {'train': train_loader, 'test': test_loader}[dset]
while True:
for idx, (data_batch, label_batch) in enumerate(loader, 0):
if data_batch.shape[0] != batch_size:
continue
if flatten == True:
data_batch = data_batch.view(data_batch.shape[0], -1)
yield data_batch, label_batch
else:
raise NotImplementedError('This part has not been implemented yet.')
def svhn(batch_size, batch_size_test, horizontal_flip = False, random_clip = False, normalization = None):
if normalization == None:
normalization = transforms.Normalize(mean = [0.5, 0.5, 0.5], std = [0.5, 0.5, 0.5])
basic_transform = [transforms.ToTensor(), normalization]
data_augumentation = []
if horizontal_flip == True:
data_augumentation.append(transforms.RandomHorizontalFlip())
if random_clip == True:
data_augumentation.append(transforms.RandomCrop(28, 4))
train_set = datasets.SVHN(root = './data', split = 'train', download = True,
transform = transforms.Compose(data_augumentation + basic_transform))
train_loader = torch.utils.data.DataLoader(train_set, batch_size = batch_size, shuffle = True, num_workers = 4, pin_memory = True)
test_set = datasets.SVHN(root = './data', split = 'test', download = True,
transform = transforms.Compose(data_augumentation + basic_transform))
test_loader = torch.utils.data.DataLoader(test_set, batch_size = batch_size_test, shuffle = True, num_workers = 4, pin_memory = True)
return train_loader, test_loader
def load_svhn(batch_size, flatten = True, dset = 'test', horizontal_flip = False,
random_clip = False, normalization = None, subset = None):
if subset == None:
train_loader, test_loader = svhn(batch_size = batch_size, batch_size_test = batch_size,
horizontal_flip = horizontal_flip, random_clip = random_clip, normalization = normalization)
loader = {'train': train_loader, 'test': test_loader}[dset]
while True:
for idx, (data_batch, label_batch) in enumerate(loader, 0):
if data_batch.shape[0] != batch_size:
continue
if flatten == True:
data_batch = data_batch.view(data_batch.shape[0], -1)
yield data_batch, label_batch
else:
raise NotImplementedError('This part has been implemented yet.')
| 43.915254 | 141 | 0.659076 | 928 | 7,773 | 5.258621 | 0.110991 | 0.082992 | 0.04877 | 0.055328 | 0.820902 | 0.810861 | 0.784631 | 0.769467 | 0.768648 | 0.757377 | 0 | 0.009834 | 0.24122 | 7,773 | 176 | 142 | 44.164773 | 0.817565 | 0.002316 | 0 | 0.6 | 0 | 0 | 0.022959 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053846 | false | 0 | 0.030769 | 0 | 0.107692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1b7a16d15e9c79b20503628941f89f4c11169aa4 | 104 | py | Python | pretix_mbway/ifthenpay/ifthenpayexception.py | mrmurilo75/pretix-mbway | ad5be9199c8e4cbd8450d729ce01c970211701ab | [
"Apache-2.0"
] | null | null | null | pretix_mbway/ifthenpay/ifthenpayexception.py | mrmurilo75/pretix-mbway | ad5be9199c8e4cbd8450d729ce01c970211701ab | [
"Apache-2.0"
] | null | null | null | pretix_mbway/ifthenpay/ifthenpayexception.py | mrmurilo75/pretix-mbway | ad5be9199c8e4cbd8450d729ce01c970211701ab | [
"Apache-2.0"
] | null | null | null | from pretix.base.payment import PaymentException
class IfThenPayException(PaymentException):
pass
| 17.333333 | 48 | 0.826923 | 10 | 104 | 8.6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 104 | 5 | 49 | 20.8 | 0.945055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
1bcb368dd55c7e044cf0d37986c1e0d7b138a108 | 3,018 | py | Python | tests/extensions/model/res_unet.py | ottogin/inferno | 4f4a49a0b4439d12694200e2cc757a4d0d0dff2b | [
"Apache-2.0"
] | null | null | null | tests/extensions/model/res_unet.py | ottogin/inferno | 4f4a49a0b4439d12694200e2cc757a4d0d0dff2b | [
"Apache-2.0"
] | null | null | null | tests/extensions/model/res_unet.py | ottogin/inferno | 4f4a49a0b4439d12694200e2cc757a4d0d0dff2b | [
"Apache-2.0"
] | null | null | null | import unittest
import torch
import torch.cuda as cuda
from inferno.utils.model_utils import ModelTester
class ResUNetTest(unittest.TestCase):
def test_res_unet_2d(self):
from inferno.extensions.model import ResBlockUNet
tester = ModelTester((1, 1, 256, 256), (1, 1, 256, 256))
if cuda.is_available():
tester.cuda()
tester(ResBlockUNet(in_channels=1, out_channels=1, dim=2))
def test_res_unet_3d(self):
from inferno.extensions.model import ResBlockUNet
tester = ModelTester((1, 1, 16, 64, 64), (1, 1, 16, 64, 64))
if cuda.is_available():
tester.cuda()
# test default unet 3d
tester(ResBlockUNet(in_channels=1, out_channels=1, dim=3))
def test_2d_side_out_bot_up(self):
from inferno.extensions.model import ResBlockUNet
depth = 3
in_channels = 3
x = torch.rand(1, in_channels, 64, 32)
model = ResBlockUNet(in_channels=in_channels,
out_channels=8, dim=2,
side_out_parts=['bottom','up'],
unet_kwargs=dict(depth=depth))
out_list = model(x)
self.assertEqual(len(out_list), depth + 1)
self.assertEqual(list(out_list[0].size()), [1, 24, 8, 4])
self.assertEqual(list(out_list[1].size()), [1, 12, 16, 8])
self.assertEqual(list(out_list[2].size()), [1, 6, 32, 16])
self.assertEqual(list(out_list[3].size()), [1, 8, 64, 32])
def test_2d_side_out_up(self):
from inferno.extensions.model import ResBlockUNet
depth = 3
in_channels = 3
x = torch.rand(1, in_channels, 64, 32)
model = ResBlockUNet(in_channels=in_channels,
out_channels=8, dim=2,
side_out_parts=['up'],
unet_kwargs=dict(depth=depth))
out_list = model(x)
self.assertEqual(len(out_list), depth)
self.assertEqual(list(out_list[0].size()), [1,12, 16, 8])
self.assertEqual(list(out_list[1].size()), [1, 6, 32, 16])
self.assertEqual(list(out_list[2].size()), [1, 8, 64, 32])
def test_2d_side_out_down(self):
from inferno.extensions.model import ResBlockUNet
depth = 3
in_channels = 3
x = torch.rand(1, in_channels, 64, 32)
model = ResBlockUNet(in_channels=in_channels,
out_channels=8, dim=2,
side_out_parts=['down'],
unet_kwargs=dict(depth=depth))
out_list = model(x)
self.assertEqual(len(out_list), depth + 1)
self.assertEqual(list(out_list[0].size()), [1, 6, 64, 32])
self.assertEqual(list(out_list[1].size()), [1, 12, 32, 16])
self.assertEqual(list(out_list[2].size()), [1, 24, 16, 8])
# the actual output
self.assertEqual(list(out_list[3].size()), [1, 8, 64, 32])
if __name__ == '__main__':
unittest.main()
| 35.928571 | 68 | 0.575215 | 403 | 3,018 | 4.121588 | 0.158809 | 0.071644 | 0.125828 | 0.145695 | 0.84708 | 0.829019 | 0.796508 | 0.796508 | 0.779049 | 0.689344 | 0 | 0.070356 | 0.293572 | 3,018 | 83 | 69 | 36.361446 | 0.708724 | 0.012591 | 0 | 0.539683 | 0 | 0 | 0.00739 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.079365 | false | 0 | 0.142857 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9402ab5169f8917591eeaef48bc34dd4e5e13ac9 | 4,484 | py | Python | tests/eval_clusters_test.py | rloganiv/meercat-aux | 4d9006095e9fb91034f8dae0baaa81a1567f6606 | [
"Apache-2.0"
] | 1 | 2021-11-24T03:32:00.000Z | 2021-11-24T03:32:00.000Z | tests/eval_clusters_test.py | rloganiv/meercat-aux | 4d9006095e9fb91034f8dae0baaa81a1567f6606 | [
"Apache-2.0"
] | 1 | 2020-12-09T00:15:33.000Z | 2021-05-27T00:52:03.000Z | tests/eval_clusters_test.py | rloganiv/streaming-cdc | 4d9006095e9fb91034f8dae0baaa81a1567f6606 | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
from meercat import eval_clusters
class TestMUC(TestCase):
# NOTE: We use inconsistent names for clusters since this is almost certain to be the case
# for most of our clustering outputs.
def test_case_1(self):
# Vilain Table 1 Row 1
true_clusters = {0: {'A', 'B', 'C', 'D'}}
pred_clusters = {1: {'A', 'B'}, 2: {'C', 'D'}}
p, r, f = eval_clusters.muc(true_clusters, pred_clusters)
assert p == 2/2
assert r == 2/3
def test_case_2(self):
# Villain Table 1 Row 2
true_clusters = {0: {'A', 'B'}, 1: {'C', 'D'}}
pred_clusters = {2: {'A', 'B', 'C', 'D'}}
p, r, f = eval_clusters.muc(true_clusters, pred_clusters)
assert p == 2/3
assert r == 2/2
def test_case_3(self):
# Villain Table 1 Row 3
true_clusters = {0: {'A', 'B', 'C', 'D'}}
pred_clusters = {1: {'A', 'B', 'C', 'D'}}
p, r, f = eval_clusters.muc(true_clusters, pred_clusters)
assert p == 3/3
assert r == 3/3
def test_case_4(self):
# Villain Table 1 Row 5 (Skipped 4 since equvalent to 1)
true_clusters = {0: {'A', 'B', 'C'}}
pred_clusters = {1: {'A', 'C'}, 2: {'B'}}
p, r, f = eval_clusters.muc(true_clusters, pred_clusters)
assert p == 1/1
assert r == 1/2
class TestB3(TestCase):
def test_case_1(self):
# Luo Table 1.a
true_clusters = {
0: {'1', '2', '3', '4', '5'},
1: {'6', '7'},
2: {'8', '9', 'A', 'B', 'C'},
}
pred_clusters = {
3: {'1', '2', '3', '4', '5'},
4: {'6', '7', '8', '9', 'A', 'B', 'C'},
}
total = 12
*_, f = eval_clusters.b3(true_clusters, pred_clusters, total)
# Result in table is only approximate so we only measure that it is close
assert abs(f - 0.865) < 1e-3
def test_case_2(self):
# Luo Table 1.b
true_clusters = {
0: {'1', '2', '3', '4', '5'},
1: {'6', '7'},
2: {'8', '9', 'A', 'B', 'C'},
}
pred_clusters = {
0: {'1', '2', '3', '4', '5', '8', '9', 'A', 'B', 'C'},
1: {'6', '7'},
}
total = 12
*_, f = eval_clusters.b3(true_clusters, pred_clusters, total)
# Result in table is only approximate so we only measure that it is close
assert abs(f - 0.737) < 1e-3
def test_case_3(self):
# Luo Table 1.c
true_clusters = {
0: {'1', '2', '3', '4', '5'},
1: {'6', '7'},
2: {'8', '9', 'A', 'B', 'C'},
}
pred_clusters = {
0: {'1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C'},
}
total = 12
*_, f = eval_clusters.b3(true_clusters, pred_clusters, total)
# Result in table is only approximate so we only measure that it is close
assert abs(f - 0.545) < 1e-3
class TestCeafE(TestCase):
def test_case_1(self):
# Luo Table 1.a
true_clusters = {
0: {0, 1, 2, 3, 4},
1: {5, 6},
2: {7, 8, 9, 10, 11},
}
pred_clusters = {
3: {0, 1, 2, 3, 4},
4: {5, 6, 7, 8, 9, 10, 11},
}
total = 12
*_, f = eval_clusters.ceaf_e(true_clusters, pred_clusters, total)
# Result in table is only approximate so we only measure that it is close
assert abs(f - 0.733) < 1e-3
def test_case_2(self):
# Luo Table 1.b
true_clusters = {
0: {0, 1, 2, 3, 4},
1: {5, 6},
2: {7, 8, 9, 10, 11},
}
pred_clusters = {
3: {0, 1, 2, 3, 4, 7, 8, 9, 10, 11},
4: {5, 6},
}
total = 12
*_, f = eval_clusters.ceaf_e(true_clusters, pred_clusters, total)
# Result in table is only approximate so we only measure that it is close
assert abs(f - 0.667) < 1e-3
def test_case_3(self):
# Luo Table 1.c
true_clusters = {
0: {0, 1, 2, 3, 4},
1: {5, 6},
2: {7, 8, 9, 10, 11},
}
pred_clusters = {
3: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11},
}
total = 12
*_, f = eval_clusters.ceaf_e(true_clusters, pred_clusters, total)
# Result in table is only approximate so we only measure that it is close
assert abs(f - 0.294) < 1e-3
| 32.028571 | 94 | 0.463649 | 660 | 4,484 | 3.028788 | 0.130303 | 0.12006 | 0.018009 | 0.024012 | 0.824912 | 0.76088 | 0.752376 | 0.743372 | 0.743372 | 0.743372 | 0 | 0.095591 | 0.367752 | 4,484 | 139 | 95 | 32.258993 | 0.609524 | 0.169492 | 0 | 0.552381 | 0 | 0 | 0.027538 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.095238 | false | 0 | 0.019048 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9441842f286f9d5d5e7f01f9dc9716698c70b451 | 2,522 | py | Python | ymir/backend/src/ymir-app/alembic/versions/8d128d880788_update_task_tbl_use_text_to_hold_longer_.py | elliotmessi/ymir | 3ec8145a1f894778116eb5218de223f6dd805b70 | [
"Apache-2.0"
] | 1 | 2022-01-12T03:12:47.000Z | 2022-01-12T03:12:47.000Z | ymir/backend/src/ymir-app/alembic/versions/8d128d880788_update_task_tbl_use_text_to_hold_longer_.py | elliotmessi/ymir | 3ec8145a1f894778116eb5218de223f6dd805b70 | [
"Apache-2.0"
] | null | null | null | ymir/backend/src/ymir-app/alembic/versions/8d128d880788_update_task_tbl_use_text_to_hold_longer_.py | elliotmessi/ymir | 3ec8145a1f894778116eb5218de223f6dd805b70 | [
"Apache-2.0"
] | null | null | null | """update task tbl: use Text to hold longer string
Revision ID: 8d128d880788
Revises: 01d657267139
Create Date: 2021-12-06 16:04:13.216251
"""
import sqlalchemy as sa
from alembic import context, op
# revision identifiers, used by Alembic.
revision = "8d128d880788"
down_revision = "01d657267139"
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
if context.get_x_argument(as_dictionary=True).get("sqlite", None):
with op.batch_alter_table("task") as batch_op:
batch_op.alter_column(
"parameters",
existing_type=sa.VARCHAR(length=500),
type_=sa.Text(length=500),
existing_nullable=True,
)
batch_op.alter_column(
"config",
existing_type=sa.VARCHAR(length=500),
type_=sa.Text(length=500),
existing_nullable=True,
)
else:
op.alter_column(
"task",
"parameters",
existing_type=sa.VARCHAR(length=500),
type_=sa.Text(length=500),
existing_nullable=True,
)
op.alter_column(
"task",
"config",
existing_type=sa.VARCHAR(length=500),
type_=sa.Text(length=500),
existing_nullable=True,
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
if context.get_x_argument(as_dictionary=True).get("sqlite", None):
with op.batch_alter_table("task") as batch_op:
batch_op.alter_column(
"config",
existing_type=sa.Text(length=500),
type_=sa.VARCHAR(length=500),
existing_nullable=True,
)
batch_op.alter_column(
"parameters",
existing_type=sa.Text(length=500),
type_=sa.VARCHAR(length=500),
existing_nullable=True,
)
else:
op.alter_column(
"task",
"config",
existing_type=sa.Text(length=500),
type_=sa.VARCHAR(length=500),
existing_nullable=True,
)
op.alter_column(
"task",
"parameters",
existing_type=sa.Text(length=500),
type_=sa.VARCHAR(length=500),
existing_nullable=True,
)
# ### end Alembic commands ###
| 29.670588 | 70 | 0.549167 | 266 | 2,522 | 5.015038 | 0.263158 | 0.071964 | 0.077961 | 0.113943 | 0.788606 | 0.788606 | 0.788606 | 0.788606 | 0.761619 | 0.709145 | 0 | 0.066345 | 0.342585 | 2,522 | 84 | 71 | 30.02381 | 0.738239 | 0.130452 | 0 | 0.757576 | 0 | 0 | 0.057514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.030303 | 0 | 0.060606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8475482b92a86dc118c5096d6456df9cc147f186 | 30 | py | Python | dizoo/procgen/maze/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 464 | 2021-07-08T07:26:33.000Z | 2022-03-31T12:35:16.000Z | dizoo/procgen/maze/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 177 | 2021-07-09T08:22:55.000Z | 2022-03-31T07:35:22.000Z | dizoo/procgen/maze/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 92 | 2021-07-08T12:16:37.000Z | 2022-03-31T09:24:41.000Z | from .maze_env import MazeEnv
| 15 | 29 | 0.833333 | 5 | 30 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
84a4549a06641dd4c1c50fc72db618b9097896e6 | 415 | py | Python | src/managers/dummy.py | MattX/nanospectrum | 42b417d2640d9aecd8c8d5338e4bda09f5c0d6ef | [
"Apache-2.0"
] | null | null | null | src/managers/dummy.py | MattX/nanospectrum | 42b417d2640d9aecd8c8d5338e4bda09f5c0d6ef | [
"Apache-2.0"
] | null | null | null | src/managers/dummy.py | MattX/nanospectrum | 42b417d2640d9aecd8c8d5338e4bda09f5c0d6ef | [
"Apache-2.0"
] | null | null | null | import numpy as np
from .manager_base import ManagerBase, Panel
class DummyManager(ManagerBase):
def __init__(self, num_panels):
self.num_panels = num_panels
def get_num_panels(self):
return self.num_panels
def get_layout(self):
return [Panel(i, 75*i, 43 * (1 - i % 2), np.pi/3 * (i % 2 + 1)) for i in range(self.num_panels)]
def put_colors(self, colors):
pass
| 23.055556 | 104 | 0.645783 | 64 | 415 | 3.96875 | 0.5 | 0.212598 | 0.204724 | 0.11811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028754 | 0.245783 | 415 | 17 | 105 | 24.411765 | 0.782748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0.090909 | 0.181818 | 0.181818 | 0.818182 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
84b15425b2871a18b129b16dfe0537c677540f87 | 38 | py | Python | pyy_chr/ui/kivy/__init__.py | nleseul/pyy_chr | 2ef70bc82d8d470ab83952ce81aebc86e2d22316 | [
"Unlicense"
] | 1 | 2019-10-26T19:10:13.000Z | 2019-10-26T19:10:13.000Z | pyy_chr/ui/kivy/__init__.py | nleseul/pyy_chr | 2ef70bc82d8d470ab83952ce81aebc86e2d22316 | [
"Unlicense"
] | null | null | null | pyy_chr/ui/kivy/__init__.py | nleseul/pyy_chr | 2ef70bc82d8d470ab83952ce81aebc86e2d22316 | [
"Unlicense"
] | null | null | null | from .pixeldisplay import PixelDisplay | 38 | 38 | 0.894737 | 4 | 38 | 8.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
84da7c83bb203e96309c9726a0676bd5e1088818 | 33 | py | Python | sciviewer/__init__.py | shishirpy/sciviewer | b724e420b08b6abec9b005e755f319b8488e993b | [
"MIT"
] | 7 | 2021-05-10T18:35:45.000Z | 2021-11-22T02:50:28.000Z | sciviewer/__init__.py | shishirpy/sciviewer | b724e420b08b6abec9b005e755f319b8488e993b | [
"MIT"
] | 7 | 2021-05-31T19:44:41.000Z | 2021-07-08T18:37:12.000Z | sciviewer/__init__.py | shishirpy/sciviewer | b724e420b08b6abec9b005e755f319b8488e993b | [
"MIT"
] | 2 | 2021-05-31T00:21:38.000Z | 2021-10-01T19:51:45.000Z | from .sciviewer import SCIViewer
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca0dee7e38989174b81a85585ac65da3fef184f7 | 28 | py | Python | python/authentication/forms.py | ChaseStull/NCHSApp | 642153727b5eb5f379f4d67baa3cb14b4b325d1e | [
"Apache-2.0"
] | null | null | null | python/authentication/forms.py | ChaseStull/NCHSApp | 642153727b5eb5f379f4d67baa3cb14b4b325d1e | [
"Apache-2.0"
] | null | null | null | python/authentication/forms.py | ChaseStull/NCHSApp | 642153727b5eb5f379f4d67baa3cb14b4b325d1e | [
"Apache-2.0"
] | null | null | null | from django import forms
| 9.333333 | 25 | 0.75 | 4 | 28 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 28 | 2 | 26 | 14 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca1c5d8d3e1519792fd0fc735c643b7fff9b877d | 69 | py | Python | python/testsuite/certifications/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/testsuite/certifications/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | python/testsuite/certifications/bp/__init__.py | jiportilla/ontology | 8a66bb7f76f805c64fc76cfc40ab7dfbc1146f40 | [
"MIT"
] | null | null | null | from .certification_regression_api import CertificationRegressionAPI
| 34.5 | 68 | 0.927536 | 6 | 69 | 10.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057971 | 69 | 1 | 69 | 69 | 0.953846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca35d8fcd53804486aec343430d350e57fea3e2c | 331 | py | Python | tests/test_paper.py | Squirtle692/Oh-my-papers | dba279ff4fcb22028b5f4290eb437dd4a87d4a2f | [
"MIT"
] | null | null | null | tests/test_paper.py | Squirtle692/Oh-my-papers | dba279ff4fcb22028b5f4290eb437dd4a87d4a2f | [
"MIT"
] | null | null | null | tests/test_paper.py | Squirtle692/Oh-my-papers | dba279ff4fcb22028b5f4290eb437dd4a87d4a2f | [
"MIT"
] | null | null | null |
def test_paper_init():
pass
def test_paper_download():
pass
def test_paper_view():
pass
def test_paper_cite():
pass
def test_paper_citation_file():
pass
def test_paperscollection_init():
pass
def test_paperscollection_download():
pass
def test_paperscollection_citation_files():
pass
| 10.34375 | 43 | 0.712991 | 42 | 331 | 5.190476 | 0.285714 | 0.256881 | 0.353211 | 0.293578 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214502 | 331 | 31 | 44 | 10.677419 | 0.838462 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
047796fe65476364e65d9ecd610f6edae4b26035 | 3,079 | py | Python | NBAStatScraper/test/test_standings.py | tdowhy/NBAStatScraper | eef1fa8ebb39d9e36ca137f1200957c9f4406000 | [
"MIT"
] | null | null | null | NBAStatScraper/test/test_standings.py | tdowhy/NBAStatScraper | eef1fa8ebb39d9e36ca137f1200957c9f4406000 | [
"MIT"
] | null | null | null | NBAStatScraper/test/test_standings.py | tdowhy/NBAStatScraper | eef1fa8ebb39d9e36ca137f1200957c9f4406000 | [
"MIT"
] | 1 | 2021-01-16T11:19:36.000Z | 2021-01-16T11:19:36.000Z | import unittest
import sys
sys.path.append('../')
import standings
import database.databasePrep.prep_standings as prep
class Test(unittest.TestCase):
def test_get_division_standings(self):
fields = ['teamId', 'wins', 'losses', 'ratio', 'ppg', 'oppg', 'year', 'conference']
yearsToTest = ['2019', '2010', '2002']
for year in yearsToTest:
df = prep.prep_standings(year)
self.assertCountEqual(list(df.columns), fields)
@unittest.expectedFailure
def test_get_division_standings_fail(self):
fields = ['teamId', 'wins', 'losses', 'ratio', 'ppg', 'oppg', 'year', 'conference']
yearsToTest = ['2222', '3333', 'asfd']
for year in yearsToTest:
df = prep.prep_standings(year)
self.assertCountEqual(list(df.columns), fields)
def test_get_conference_standings(self):
conferences = ['W', 'E']
yearsToTest = ['2019', '2017', '2016']
for c in conferences:
for year in yearsToTest:
if c == 'W':
fields = ['Western Conference', 'W', 'L', 'W/L%', 'GB', 'PS/G', 'PA/G', 'SRS']
else:
fields = ['Eastern Conference', 'W', 'L', 'W/L%', 'GB', 'PS/G', 'PA/G', 'SRS']
df = standings.get_conference_standings(c, year)
self.assertCountEqual(list(df.columns), fields)
@unittest.expectedFailure
def test_get_conference_standings_fail(self):
fields = ['teamId', 'wins', 'losses', 'ratio', 'ppg', 'oppg', 'year', 'conference']
yearsToTest = ['2222', '3333', 'asfd']
for year in yearsToTest:
df = standings.get_conference_standings('W',year)
self.assertCountEqual(list(df.columns), fields)
def test_get_league_standings(self):
fields = ['Rk', 'Team', 'Overall', 'Home', 'Road', 'E', 'W', 'A', 'C', 'SE', 'NW', 'P', 'SW', 'Pre',
'Post', '≤3', '≥10', 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr']
yearsToTest = ['2019', '2010']
for year in yearsToTest:
df = standings.get_league_standings(year)
self.assertCountEqual(list(df.columns), fields)
fields = ['Rk', 'Team', 'Overall', 'Home', 'Road', 'E', 'W', 'A', 'C', 'M', 'P', 'Pre',
'Post', '≤3', '≥10', 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr']
df = standings.get_league_standings('2002')
self.assertCountEqual(list(df.columns), fields)
@unittest.expectedFailure
def test_get_league_standings_fail(self):
fields = ['Rk', 'Team', 'Overall', 'Home', 'Road', 'E', 'W', 'A', 'C', 'SE', 'NW', 'P', 'SW', 'Pre',
'Post', '≤3', '≥10', 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr']
yearsToTest = ['2222', '3333', 'asfd']
for year in yearsToTest:
df = standings.get_league_standings(year)
self.assertCountEqual(list(df.columns), fields)
if __name__ == '__main__':
unittest.main() | 45.279412 | 109 | 0.539136 | 344 | 3,079 | 4.72093 | 0.244186 | 0.086207 | 0.103448 | 0.112069 | 0.823892 | 0.716133 | 0.716133 | 0.716133 | 0.716133 | 0.716133 | 0 | 0.030886 | 0.27444 | 3,079 | 68 | 110 | 45.279412 | 0.693375 | 0 | 0 | 0.534483 | 0 | 0 | 0.158314 | 0 | 0 | 0 | 0 | 0 | 0.12069 | 1 | 0.103448 | false | 0 | 0.068966 | 0 | 0.189655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
048d614b61eebe443504bd4923e06a9cfad0a1a3 | 26 | py | Python | service/AWS/__init__.py | vishu221b/bookme-flask-REST-API-Collection | 9ee923e13d786af9b11421370edac718743855af | [
"MIT"
] | null | null | null | service/AWS/__init__.py | vishu221b/bookme-flask-REST-API-Collection | 9ee923e13d786af9b11421370edac718743855af | [
"MIT"
] | null | null | null | service/AWS/__init__.py | vishu221b/bookme-flask-REST-API-Collection | 9ee923e13d786af9b11421370edac718743855af | [
"MIT"
] | null | null | null | from .AwsService import *
| 13 | 25 | 0.769231 | 3 | 26 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
049922658ef6c91f40367ed7c8c0758258b01da0 | 220 | py | Python | text_vectors/__init__.py | chappers/text-vectors | 7ca9385c9cdbf3d4c8270abb28d9a86bef2e5fbd | [
"MIT"
] | null | null | null | text_vectors/__init__.py | chappers/text-vectors | 7ca9385c9cdbf3d4c8270abb28d9a86bef2e5fbd | [
"MIT"
] | 1 | 2018-04-14T23:02:45.000Z | 2018-04-14T23:02:45.000Z | text_vectors/__init__.py | chappers/text-vectors | 7ca9385c9cdbf3d4c8270abb28d9a86bef2e5fbd | [
"MIT"
] | null | null | null |
import warnings
from text_vectors.text_vectors import TextVec
try:
from text_vectors.version import version as __version__
except:
warnings.warn("Could not import version, has `text_vectors` been installed?")
| 22 | 81 | 0.790909 | 30 | 220 | 5.533333 | 0.533333 | 0.26506 | 0.180723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 220 | 9 | 82 | 24.444444 | 0.887701 | 0 | 0 | 0 | 0 | 0 | 0.273973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
04b3eb96f149f247bad4d3fdd03545b9c934616f | 1,251 | py | Python | wavenumber.py | zingale/powerspectrum_test | 140f4a70b43edf3fb68fc8ab1d56818f7a6c7a3b | [
"BSD-3-Clause"
] | 3 | 2017-03-17T14:53:13.000Z | 2018-05-02T21:54:17.000Z | wavenumber.py | zingale/powerspectrum_test | 140f4a70b43edf3fb68fc8ab1d56818f7a6c7a3b | [
"BSD-3-Clause"
] | null | null | null | wavenumber.py | zingale/powerspectrum_test | 140f4a70b43edf3fb68fc8ab1d56818f7a6c7a3b | [
"BSD-3-Clause"
] | null | null | null | #!/bin/env python
import numpy as np
import matplotlib.pyplot as plt
plt.subplot(211)
x = np.linspace(0.0, 1.0, 200)
k = 1.0
plt.plot(x, np.sin(2.0*np.pi*k*x))
xlabel = [0, 0.5, 1]
xlabelnames = [r"$0$", r"$L/2$", r"$L$"]
ylabel = [-1, 1]
ylabelnames = [r"$-1$", r"$1$"]
ax = plt.gca()
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')
ax.spines['left'].set_smart_bounds(True)
ax.spines['bottom'].set_smart_bounds(True)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.xticks(xlabel, xlabelnames)
plt.yticks(ylabel, ylabelnames)
plt.ylim(-1.1,1.1)
plt.subplot(212)
x = np.linspace(0.0, 1.0, 200)
k = 4.0
plt.plot(x, np.sin(2.0*np.pi*k*x))
ax = plt.gca()
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('zero')
ax.spines['top'].set_color('none')
ax.spines['left'].set_smart_bounds(True)
ax.spines['bottom'].set_smart_bounds(True)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.xticks(xlabel, xlabelnames)
plt.yticks(ylabel, ylabelnames)
plt.ylim(-1.1,1.1)
plt.tight_layout()
plt.savefig("wavenumber.eps")
| 18.954545 | 42 | 0.689848 | 224 | 1,251 | 3.741071 | 0.254464 | 0.114558 | 0.057279 | 0.071599 | 0.804296 | 0.804296 | 0.804296 | 0.804296 | 0.804296 | 0.75895 | 0 | 0.040245 | 0.086331 | 1,251 | 65 | 43 | 19.246154 | 0.692913 | 0.01279 | 0 | 0.7 | 0 | 0 | 0.113636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.