# Source: test/test_md027.py from scop/pymarkdown (MIT license)
"""
Module to provide tests related to the MD027 rule.
"""
from test.markdown_scanner import MarkdownScanner
import pytest
@pytest.mark.rules
def test_md027_good_block_quote_empty():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains an empty block quote.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/good_block_quote_empty.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_block_quote_empty_just_blank():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains an empty block quote
    consisting of just a blank line.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/good_block_quote_empty_just_blank.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_empty_too_many_spaces():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains an empty block quote with
    more than one space after the block quote character.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_block_quote_empty_too_many_spaces.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_empty_too_many_spaces.md:1:3: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_block_quote_simple_text():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains a block quote with simple text.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/good_block_quote_simple_text.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_block_quote_followed_by_heading():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains a block quote followed by a
    heading, with rule md022 disabled.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md022",
"scan",
"test/resources/rules/md027/good_block_quote_followed_by_heading.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_block_quote_indent():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains a properly indented block quote.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/good_block_quote_indent.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_indent():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains a block quote with more than
    one space after the block quote character on each line.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--stack-trace",
"scan",
"test/resources/rules/md027/bad_block_quote_indent.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_indent.md:1:3: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)\n"
+ "test/resources/rules/md027/bad_block_quote_indent.md:2:3: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_indent_plus_one():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains a block quote with more than
    one space after the block quote character, with the block quote itself indented
    by one extra space.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--stack-trace",
"scan",
"test/resources/rules/md027/bad_block_quote_indent_plus_one.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_indent_plus_one.md:1:4: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)\n"
+ "test/resources/rules/md027/bad_block_quote_indent_plus_one.md:2:4: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_only_one_properly_indented():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory in which only one line of the block quote
    is properly indented after the block quote character.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_block_quote_only_one_properly_indented.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_only_one_properly_indented.md:2:3: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_only_one_properly_indented_plus_one():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory in which only one line of the block quote
    is properly indented, with the block quote itself indented by one extra space.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_block_quote_only_one_properly_indented_plus_one.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_only_one_properly_indented_plus_one.md:2:4: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_block_quote_indent_with_blank():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains an indented block quote with
    a blank line.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/good_block_quote_indent_with_blank.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_block_quote_indent_with_blank_space():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains an indented block quote with
    a blank line that has a single space after the block quote character.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/good_block_quote_indent_with_blank_space.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_indent_with_blank_two_spaces():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains an indented block quote with
    a blank line that has two spaces after the block quote character.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_block_quote_indent_with_blank_two_spaces.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_indent_with_blank_two_spaces.md:2:3: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_indent_with_blank_two_spaces_plus_one():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains an indented block quote with
    a blank line that has two spaces after the block quote character, with the block
    quote itself indented by one extra space.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_block_quote_indent_with_blank_two_spaces_plus_one.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_indent_with_blank_two_spaces_plus_one.md:2:4: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_block_quote_indent_with_blank_two_spaces_misaligned():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains an indented block quote with
    a blank line whose block quote character is misaligned.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_block_quote_indent_with_blank_two_spaces_misaligned.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_block_quote_indent_with_blank_two_spaces_misaligned.md:2:4: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_block_quote_indent_with_blank_space_no_start():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains an indented block quote with
    a blank line that has no block quote character, with rule md028 disabled.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md028",
"scan",
"test/resources/rules/md027/good_block_quote_indent_with_blank_space_no_start.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_two_block_quotes_space_top():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains two block quotes, where the
    first has an extra space after the block quote character, with rule md028 disabled.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md028",
"scan",
"test/resources/rules/md027/bad_two_block_quotes_space_top.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_two_block_quotes_space_top.md:1:3: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_two_block_quotes_space_bottom():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains two block quotes, where the
    second has an extra space after the block quote character, with rule md028 disabled.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--disable-rules",
"md028",
"scan",
"test/resources/rules/md027/bad_two_block_quotes_space_bottom.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_two_block_quotes_space_bottom.md:3:3: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_bad_misalligned_double_quote():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains a double block quote whose
    second line has an extra space after the block quote characters.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_misalligned_double_quote.md",
]
expected_return_code = 1
expected_output = (
"test/resources/rules/md027/bad_misalligned_double_quote.md:2:4: "
+ "MD027: Multiple spaces after blockquote symbol (no-multiple-space-blockquote)"
)
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.rules
def test_md027_good_alligned_double_quote():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains a properly aligned double
    block quote.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/good_alligned_double_quote.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.skip
@pytest.mark.rules
def test_md027_bad_misalligned_quote_within_list():
"""
    Test to make sure we get the expected behavior after scanning a bad file from the
    test/resources/rules/md027 directory that contains a misaligned block quote
    within a list.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"scan",
"test/resources/rules/md027/bad_misalligned_quote_within_list.md",
]
expected_return_code = 1
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
@pytest.mark.skip
@pytest.mark.rules
def test_md027_good_alligned_quote_within_list():
"""
    Test to make sure we get the expected behavior after scanning a good file from the
    test/resources/rules/md027 directory that contains a properly aligned block quote
    within a list.
"""
# Arrange
scanner = MarkdownScanner()
supplied_arguments = [
"--stack-trace",
"scan",
"test/resources/rules/md027/good_alligned_quote_within_list.md",
]
expected_return_code = 0
expected_output = ""
expected_error = ""
# Act
execute_results = scanner.invoke_main(arguments=supplied_arguments)
# Assert
execute_results.assert_results(
expected_output, expected_error, expected_return_code
)
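As context for the expectations above: MD027 fires when more than one space follows the block quote character, and the reported column lands on the last excess space. A rough standalone approximation of the check (illustrative only, not PyMarkdown's actual implementation; `md027_violations` is a hypothetical helper):

```python
import re


def md027_violations(text):
    """Return 1-based (line, column) pairs where a block quote
    character is followed by two or more spaces before text."""
    hits = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        match = re.match(r'(\s{0,3})>\s{2,}\S', line)
        if match:
            # Column matching the two-space cases in the expectations above:
            # marker column + 2, i.e. the second space after '>'.
            hits.append((line_no, len(match.group(1)) + 3))
    return hits


assert md027_violations("> simple text\n") == []
assert md027_violations(">  too many spaces\n") == [(1, 3)]
assert md027_violations(" >  indented\n") == [(1, 4)]
```

The `(1, 3)` and `(1, 4)` pairs mirror the `:1:3:` and `:1:4:` positions asserted by the tests above.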


# Source: tests/models/fakes.py from lucaspanayiotou/OasisLMF_SQL (BSD-3-Clause license)
from oasislmf.model_preparation.manager import OasisManager as om

def fake_model(supplier='supplier', model='model', version='version', resources=None):
return om().create_model(supplier, model, version, resources=resources)


# Source: applications/messenger/models/__init__.py from dev-easyshares/mighty (MIT license)
from mighty.applications.messenger.models.missive import Missive
from mighty.applications.messenger.models.notification import Notification
# __all__ entries must be strings; listing the classes themselves breaks
# "from ... import *" with a TypeError.
__all__ = ("Missive", "Notification")


# Source: test/python/helpers.py from ryansun117/marius (Apache-2.0 license)
from pathlib import Path
import random
def dataset_generator(train_file, valid_file, test_file, train_len=1000,
valid_len=100, test_len=100, delim="\t", start_col=0,
num_line_skip=0):
    def _write_split(file_path, num_lines):
        # Write the optional skip lines, then num_lines random
        # src/rel/dst rows, each prefixed by start_col dummy columns.
        with open(str(Path(file_path)), "w") as f:
            for _ in range(num_line_skip):
                f.write("This is a line needs to be skipped.\n")
            for _ in range(num_lines):
                src = random.randint(1, 100)
                dst = random.randint(1, 100)
                rel = random.randint(101, 110)
                for j in range(start_col):
                    f.write("col_" + str(j) + delim)
                f.write(str(src) + delim + str(rel) + delim + str(dst) + "\n")

    _write_split(train_file, train_len)
    _write_split(valid_file, valid_len)
    _write_split(test_file, test_len)
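A quick standalone sketch of the row layout `dataset_generator` emits (`src<delim>rel<delim>dst` per line, after any skip lines and prefix columns) and of reading such a file back. This is a hypothetical round-trip check, not part of the marius test suite:

```python
import random
import tempfile
from pathlib import Path

delim = "\t"
with tempfile.TemporaryDirectory() as tmp_dir:
    path = Path(tmp_dir) / "train.txt"
    with open(path, "w") as f:
        for _ in range(10):
            # Same value ranges the generator above uses.
            src = random.randint(1, 100)
            dst = random.randint(1, 100)
            rel = random.randint(101, 110)
            f.write(str(src) + delim + str(rel) + delim + str(dst) + "\n")
    # Parse the rows back into (src, rel, dst) integer triples.
    triples = [tuple(int(tok) for tok in line.split(delim))
               for line in path.read_text().splitlines()]

assert len(triples) == 10
assert all(101 <= rel <= 110 for _src, rel, _dst in triples)
```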


# Source: formatter/identation.py from natansilva/sql_formatter (MIT license)
import re
def remove_all_tabs(text_to_format):
    return re.sub(r'\t', '', text_to_format)


def ident_in_select_from_clause(text_to_format):
    # Raw strings avoid the invalid "\s" escape warning in the pattern;
    # the replacement keeps real newline/tab characters on purpose.
    return re.sub(r'[\n]?,[\s]?', '\n\t, ', text_to_format)
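A standalone sanity check of the two helpers above (redefined inline so the snippet runs on its own):

```python
import re


def remove_all_tabs(text_to_format):
    return re.sub(r'\t', '', text_to_format)


def ident_in_select_from_clause(text_to_format):
    return re.sub(r'[\n]?,[\s]?', '\n\t, ', text_to_format)


# Tabs are stripped entirely.
assert remove_all_tabs("a\tb\tc") == "abc"
# Each comma (and one optional following space) becomes a
# newline-indented ", " prefix for the next select-list item.
assert ident_in_select_from_clause("SELECT a,b, c FROM t") == \
    "SELECT a\n\t, b\n\t, c FROM t"
```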


# Source: epytope/Data/pssms/tepitopepan/mat/DRB1_0342_9.py from christopher-mohr/epytope (BSD-3-Clause license)
DRB1_0342_9 = {
    0: {'A': -999.0, 'E': -999.0, 'D': -999.0, 'G': -999.0, 'F': -0.98558, 'I': -0.014418, 'H': -999.0, 'K': -999.0, 'M': -0.014418, 'L': -0.014418, 'N': -999.0, 'Q': -999.0, 'P': -999.0, 'S': -999.0, 'R': -999.0, 'T': -999.0, 'W': -0.98558, 'V': -0.014418, 'Y': -0.98558},
    1: {'A': 0.0, 'E': 0.1, 'D': -1.3, 'G': 0.5, 'F': 0.8, 'I': 1.1, 'H': 0.8, 'K': 1.1, 'M': 1.1, 'L': 1.0, 'N': 0.8, 'Q': 1.2, 'P': -0.5, 'S': -0.3, 'R': 2.2, 'T': 0.0, 'W': -0.1, 'V': 2.1, 'Y': 0.9},
    2: {'A': 0.0, 'E': -1.2, 'D': -1.3, 'G': 0.2, 'F': 0.8, 'I': 1.5, 'H': 0.2, 'K': 0.0, 'M': 1.4, 'L': 1.0, 'N': 0.5, 'Q': 0.0, 'P': 0.3, 'S': 0.2, 'R': 0.7, 'T': 0.0, 'W': 0.0, 'V': 0.5, 'Y': 0.8},
    3: {'A': 0.0, 'E': -0.99612, 'D': 2.2845, 'G': 0.48787, 'F': -0.99371, 'I': 0.50043, 'H': 0.0025044, 'K': -0.9985, 'M': 0.0056781, 'L': 0.00471, 'N': 0.20051, 'Q': 0.0014011, 'P': -1.0031, 'S': 0.6953, 'R': -1.0002, 'T': -0.99564, 'W': -0.99618, 'V': -0.0018717, 'Y': -0.99811},
    4: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0},
    5: {'A': 0.0, 'E': -1.44, 'D': -2.3393, 'G': -0.72894, 'F': -1.3838, 'I': 0.66466, 'H': -0.15515, 'K': 1.1414, 'M': -0.90482, 'L': 0.14623, 'N': -0.51693, 'Q': -0.35136, 'P': 0.4769, 'S': -0.052086, 'R': 0.85938, 'T': 0.84258, 'W': -1.38, 'V': 1.1824, 'Y': -1.3979},
    6: {'A': 0.0, 'E': -0.25721, 'D': -0.68382, 'G': -0.31197, 'F': 0.22891, 'I': 0.35102, 'H': -0.51332, 'K': -0.75217, 'M': 1.0091, 'L': 0.40838, 'N': 0.11109, 'Q': -0.12789, 'P': 0.24614, 'S': 0.0029041, 'R': -0.84196, 'T': -0.1061, 'W': -0.64137, 'V': 0.13173, 'Y': -0.2297},
    7: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0},
    8: {'A': 0.0, 'E': -0.54182, 'D': -0.78869, 'G': 0.1478, 'F': 0.55352, 'I': 0.43948, 'H': -0.38613, 'K': -0.2285, 'M': 0.82817, 'L': -0.20101, 'N': -0.73258, 'Q': -0.073797, 'P': -0.48481, 'S': 1.0175, 'R': 0.22077, 'T': -0.6178, 'W': -0.99494, 'V': 0.11956, 'Y': 0.066112},
}


# Source: dgn/invertible_layer_helpers.py from matt-graham/differentiable-generator-networks (MIT license)
# -*- coding: utf-8 -*-
"""Invertible layer helper functions."""
__authors__ = 'Matt Graham'
__license__ = 'MIT'
import numpy as np
import theano.tensor as tt
from dgn.invertible_layers import (
    TriangularAffineLayer, ElementwiseLayer, FwdDiagInvElementwiseLayer,
    DiagPlusRank1AffineLayer)
def alt_lower_upper_tri_layers(n_layer, weights_inits, biases_inits,
nl_fwd=tt.sinh, nl_inv=tt.arcsinh,
weights_prec=0., biases_prec=0.):
layers = []
for l in range(n_layer):
if l % 2 == 0:
layers.append(TriangularAffineLayer(
weights_init=np.tril(weights_inits[l]),
biases_init=biases_inits[l],
lower=True,
weights_prec=weights_prec,
biases_prec=biases_prec))
layers.append(ElementwiseLayer(nl_fwd, nl_inv))
else:
layers.append(TriangularAffineLayer(
weights_init=np.triu(weights_inits[l]),
biases_init=biases_inits[l],
lower=False,
weights_prec=weights_prec,
biases_prec=biases_prec))
layers.append(ElementwiseLayer(nl_fwd, nl_inv))
return layers
def alt_lower_upper_tri_with_fwd_diag_inv_nl_layers(
n_layer, weights_inits, biases_inits, diag_weights_inits,
nl_fwd=tt.sinh, nl_inv=tt.arcsinh, weights_prec=0., biases_prec=0.,
diag_weights_prec=0.):
layers = []
for l in range(n_layer):
if l % 2 == 0:
layers.append(TriangularAffineLayer(
weights_init=np.tril(weights_inits[l]),
                biases_init=biases_inits[2 * l],
lower=True,
weights_prec=weights_prec,
biases_prec=biases_prec))
layers.append(FwdDiagInvElementwiseLayer(
forward_func=nl_fwd,
inverse_func=nl_inv,
diag_weights_init=diag_weights_inits[l],
biases_init=biases_inits[2 * l + 1],
                diag_weights_prec=diag_weights_prec,
biases_prec=biases_prec))
else:
layers.append(TriangularAffineLayer(
weights_init=np.triu(weights_inits[l]),
                biases_init=biases_inits[2 * l],
lower=False,
weights_prec=weights_prec,
biases_prec=biases_prec))
layers.append(FwdDiagInvElementwiseLayer(
forward_func=nl_fwd,
inverse_func=nl_inv,
diag_weights_init=diag_weights_inits[l],
biases_init=biases_inits[2 * l + 1],
diag_weights_prec=diag_weights_prec,
biases_prec=biases_prec))
return layers
def diag_plus_rank_1_with_fwd_diag_inv_nl_layers(
n_layer, diag_weights_inits, u_vect_inits, v_vect_inits, biases_inits,
nl_fwd=tt.sinh, nl_inv=tt.arcsinh, diag_weights_prec=0.,
u_vect_prec=0., v_vect_prec=0., biases_prec=0.):
layers = []
for l in range(n_layer):
layers.append(DiagPlusRank1AffineLayer(
diag_weights_init=diag_weights_inits[2 * l],
u_vect_init=u_vect_inits[l],
v_vect_init=v_vect_inits[l],
biases_init=biases_inits[2 * l],
diag_weights_prec=diag_weights_prec,
u_vect_prec=u_vect_prec,
v_vect_prec=v_vect_prec,
biases_prec=biases_prec))
layers.append(FwdDiagInvElementwiseLayer(
forward_func=nl_fwd,
inverse_func=nl_inv,
diag_weights_init=diag_weights_inits[2 * l + 1],
biases_init=biases_inits[2 * l + 1],
diag_weights_prec=diag_weights_prec,
biases_prec=biases_prec))
return layers
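The helpers above all follow the same pattern: interleave an invertible affine map (triangular, or diagonal-plus-rank-1) with an elementwise nonlinearity, so the whole stack remains invertible. A minimal NumPy sketch of the alternating lower/upper triangular case, independent of Theano and the `dgn` layer classes (all names below are illustrative, not part of the package):

```python
import numpy as np

def alt_tri_forward(x, weights, biases, nl=np.sinh):
    # Alternate lower/upper triangular affine maps, each followed by an
    # elementwise nonlinearity, mirroring alt_lower_upper_tri_layers.
    for l, (W, b) in enumerate(zip(weights, biases)):
        tri = np.tril(W) if l % 2 == 0 else np.triu(W)
        x = nl(tri @ x + b)
    return x

def alt_tri_inverse(y, weights, biases, nl_inv=np.arcsinh):
    # Undo the layers in reverse: invert the nonlinearity, then solve
    # the (triangular, hence cheap to invert) linear system.
    for l in reversed(range(len(weights))):
        tri = np.tril(weights[l]) if l % 2 == 0 else np.triu(weights[l])
        y = np.linalg.solve(tri, nl_inv(y) - biases[l])
    return y

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.5, size=(3, 3)) for _ in range(2)]
for W in weights:
    np.fill_diagonal(W, 1.0)  # unit diagonal keeps each map invertible
biases = [rng.normal(scale=0.1, size=3) for _ in range(2)]

x = np.array([0.2, -0.1, 0.3])
assert np.allclose(
    alt_tri_inverse(alt_tri_forward(x, weights, biases), weights, biases), x)
```

The round trip recovers the input exactly (up to floating point), which is the property the layer stack relies on; triangularity also makes the Jacobian log-determinant a sum over diagonal entries.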
| 38.427083 | 78 | 0.602331 | 449 | 3,689 | 4.547884 | 0.146993 | 0.107738 | 0.109696 | 0.082272 | 0.84427 | 0.833497 | 0.794319 | 0.773751 | 0.699804 | 0.699314 | 0 | 0.010664 | 0.313635 | 3,689 | 95 | 79 | 38.831579 | 0.795814 | 0.015451 | 0 | 0.717647 | 0 | 0 | 0.003861 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035294 | false | 0 | 0.035294 | 0 | 0.105882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
379c0560d6e691c7e425c1d2446591338132df8f | 42 | py | Python | xpd_client/__init__.py | moochannel/python-xpd-client | 46b1f7202b6ca94f202e13386cb1ebf4bf335a80 | [
"MIT"
] | null | null | null | xpd_client/__init__.py | moochannel/python-xpd-client | 46b1f7202b6ca94f202e13386cb1ebf4bf335a80 | [
"MIT"
] | 1 | 2018-01-26T10:32:02.000Z | 2018-01-26T10:32:02.000Z | xpd_client/__init__.py | moochannel/python-xpd-client | 46b1f7202b6ca94f202e13386cb1ebf4bf335a80 | [
"MIT"
] | null | null | null | from .xpd_client import XPdClient # noqa
| 21 | 41 | 0.785714 | 6 | 42 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 42 | 1 | 42 | 42 | 0.914286 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
807dfc6eae951aff831d7e7d0545b58795bcb34a | 161 | py | Python | scripts/build_swift.py | 1byte2bytes/SydChain | ac1fffd9f87c2afa6e2f6a0540d69dad0815ef4f | [
"MIT"
] | null | null | null | scripts/build_swift.py | 1byte2bytes/SydChain | ac1fffd9f87c2afa6e2f6a0540d69dad0815ef4f | [
"MIT"
] | null | null | null | scripts/build_swift.py | 1byte2bytes/SydChain | ac1fffd9f87c2afa6e2f6a0540d69dad0815ef4f | [
"MIT"
] | null | null | null | # Copyright (c) Sydney Erickson 2017
import buildlib
import buildsettings
buildlib.build_cmake("swift-swift-4.0.3-RELEASE.tar.gz", "-DCMAKE_BUILD_TYPE=Release") | 32.2 | 86 | 0.801242 | 24 | 161 | 5.25 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04698 | 0.074534 | 161 | 5 | 86 | 32.2 | 0.798658 | 0.21118 | 0 | 0 | 0 | 0 | 0.460317 | 0.460317 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
80b2b92fc0821776db189806e44859a2f3da27e3 | 119 | py | Python | backend/src/services/speech_translation/__init__.py | didi/MeetDot | a57009d30c1347a9b85950c2e02b77685ce63952 | [
"Apache-2.0"
] | 6 | 2021-09-23T14:53:58.000Z | 2022-02-18T10:14:17.000Z | backend/src/services/speech_translation/__init__.py | didi/MeetDot | a57009d30c1347a9b85950c2e02b77685ce63952 | [
"Apache-2.0"
] | null | null | null | backend/src/services/speech_translation/__init__.py | didi/MeetDot | a57009d30c1347a9b85950c2e02b77685ce63952 | [
"Apache-2.0"
] | 1 | 2021-09-24T02:48:50.000Z | 2021-09-24T02:48:50.000Z | from .interface import SpeechTranslationConfig, SpeechTranslationRequest
from .service import SpeechTranslationService
| 39.666667 | 72 | 0.89916 | 9 | 119 | 11.888889 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07563 | 119 | 2 | 73 | 59.5 | 0.972727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
03fb8a8f787f67eb14396a56f838f39708657f82 | 2,044 | py | Python | spaceship_shooter/constant.py | ChinaAthena/EasierRehabitation | 43b46f48602ca4627ab4e76e0f822dc3e1eaadf4 | [
"MIT"
] | 1 | 2020-03-09T19:47:10.000Z | 2020-03-09T19:47:10.000Z | spaceship_shooter/constant.py | ChinaAthena/EasierRehabitation | 43b46f48602ca4627ab4e76e0f822dc3e1eaadf4 | [
"MIT"
] | null | null | null | spaceship_shooter/constant.py | ChinaAthena/EasierRehabitation | 43b46f48602ca4627ab4e76e0f822dc3e1eaadf4 | [
"MIT"
] | null | null | null | BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
BRIGHT_RED = (255, 0, 0)
BRIGHT_GREEN = (0, 255, 0)
GREEN = (0, 128, 0)
MAROON = (128, 0, 0)
BRIGHT_BLUE = (0, 0, 255)
BLUE = (0, 0, 128)
ASSETS_DIR = "../assets/"
BACKGROUND_IMG_PATH = ASSETS_DIR + "background.png"
SPACESHIP_IMG_PATH = ASSETS_DIR + "spaceship.png"
ASTEROID_IMG_PATH = [ASSETS_DIR + "asteroid0%d.png" % i for i in range(2)]
BULLET_IMG_PATH = ASSETS_DIR + "bullet.png"
EXPLOSION_IMG_PATHS = [ASSETS_DIR+"explosions/regularExplosion0%d.png" % i for i in range(9)]
list_of_difficulty = [0.00188, 0.00288, 0.00388, 0.00488, 0.00588, 0.00688, 0.00788, 0.00888, 0.00988, 0.01088]
scale_of_player_image = [0.1, 0.1667]
scale_of_asteroid_image = [0.1428, 0.1428]
scale_of_bullet_image = [0.0125, 0.02]
lam_of_generating_asteroid = 1000
lam_of_generating_bullet = 400
with open("difficulty.txt", "r") as f:
    nums = [int(line) for line in f.readlines()]
if nums:
scale_of_asteroid_vel = list_of_difficulty[nums[0]-1]
else:
scale_of_asteroid_vel = 0.00588
scale_of_bullet_vel = 0.00667
angle_variance_of_asteroid = 0.01
player_relative_position = [0.5, 0.9] | 31.9375 | 111 | 0.696673 | 350 | 2,044 | 3.791429 | 0.211429 | 0.018086 | 0.078372 | 0.096458 | 0.857573 | 0.857573 | 0.857573 | 0.857573 | 0.857573 | 0.857573 | 0 | 0.147126 | 0.148728 | 2,044 | 64 | 112 | 31.9375 | 0.615517 | 0.41047 | 0 | 0 | 0 | 0 | 0.094915 | 0.028814 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ff00cf667b7862346807f49ae3356366e41d2e1f | 30 | py | Python | src/pyphase/__init__.py | hsharrison/pyphase | adb3de4cb540553851c06b5d137a3a9c18cdf240 | [
"MIT"
] | 1 | 2020-03-22T10:58:47.000Z | 2020-03-22T10:58:47.000Z | src/pyphase/__init__.py | hsharrison/pyphase | adb3de4cb540553851c06b5d137a3a9c18cdf240 | [
"MIT"
] | null | null | null | src/pyphase/__init__.py | hsharrison/pyphase | adb3de4cb540553851c06b5d137a3a9c18cdf240 | [
"MIT"
] | null | null | null | from pyphase.util import wrap
| 15 | 29 | 0.833333 | 5 | 30 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ff17f2c3801c4679b4cd027f4446ac3d72a8c0ad | 25,015 | py | Python | alphausblue/api/grouprootuser_pb2.py | alphauslabs/blue-sdk-python | 24120a60cd153a69080661a687938b417b32f947 | [
"Apache-2.0"
] | null | null | null | alphausblue/api/grouprootuser_pb2.py | alphauslabs/blue-sdk-python | 24120a60cd153a69080661a687938b417b32f947 | [
"Apache-2.0"
] | null | null | null | alphausblue/api/grouprootuser_pb2.py | alphauslabs/blue-sdk-python | 24120a60cd153a69080661a687938b417b32f947 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: api/grouprootuser.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='api/grouprootuser.proto',
package='blueapi.api',
syntax='proto3',
serialized_options=b'\n\031cloud.alphaus.blueapi.apiB\025ApiGroupRootUserProtoZ&github.com/alphauslabs/blue-sdk-go/api',
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n\x17\x61pi/grouprootuser.proto\x12\x0b\x62lueapi.api\"\xb0\x02\n\rGroupRootUser\x12\r\n\x05\x65mail\x18\x01 \x01(\t\x12\x10\n\x08password\x18\x02 \x01(\t\x12\x0f\n\x07groupId\x18\x03 \x01(\t\x12\x11\n\tgroupName\x18\x04 \x01(\t\x12\x11\n\tgroupType\x18\x05 \x01(\t\x12\'\n\x04meta\x18\x06 \x01(\x0b\x32\x19.blueapi.api.FeatureFlags\x12\x1a\n\x12passwordUpdateTime\x18\x07 \x01(\t\x12\x12\n\nupdateTime\x18\x08 \x01(\t\x12\x14\n\x0cuserAccessId\x18\t \x01(\t\x12\x0e\n\x06userId\x18\n \x01(\t\x12\x1c\n\x14waveAvailabilityDays\x18\x0b \x01(\x05\x12\x16\n\x0ewaveRegistered\x18\x0c \x01(\t\x12\x12\n\nwaveStatus\x18\r \x01(\t\"\xbc\x07\n\x0c\x46\x65\x61tureFlags\x12\x17\n\x0f\x64\x61shboard_graph\x18\x01 \x01(\x08\x12\x15\n\rusage_account\x18\x02 \x01(\x08\x12\x1b\n\x13usage_account_graph\x18\x03 \x01(\x08\x12\'\n\x1fusage_account_menu_account_edit\x18\x04 \x01(\x08\x12!\n\x19usage_account_menu_budget\x18\x05 \x01(\x08\x12&\n\x1eusage_account_menu_budget_edit\x18\x06 \x01(\x08\x12#\n\x1busage_account_menu_fees_fee\x18\x07 \x01(\x08\x12&\n\x1eusage_account_menu_fees_credit\x18\x08 \x01(\x08\x12&\n\x1eusage_account_menu_fees_refund\x18\t \x01(\x08\x12*\n\"usage_account_menu_fees_other_fees\x18\n \x01(\x08\x12\x1d\n\x15usage_report_download\x18\x0b \x01(\x08\x12\x13\n\x0busage_group\x18\x0c \x01(\x08\x12\x19\n\x11usage_group_graph\x18\r \x01(\x08\x12\x11\n\tusage_tag\x18\x0e \x01(\x08\x12\x17\n\x0fusage_tag_graph\x18\x0f \x01(\x08\x12\x16\n\x0eusage_crosstag\x18\x10 \x01(\x08\x12\x1c\n\x14usage_crosstag_graph\x18\x11 \x01(\x08\x12\x14\n\x0cri_purchased\x18\x12 \x01(\x08\x12\x16\n\x0eri_utilization\x18\x13 \x01(\x08\x12\x19\n\x11ri_recommendation\x18\x14 \x01(\x08\x12\x14\n\x0csp_purchased\x18\x15 \x01(\x08\x12\x0f\n\x07invoice\x18\x16 \x01(\x08\x12%\n\x1dinvoice_download_csv_discount\x18\x17 \x01(\x08\x12#\n\x1binvoice_download_csv_merged\x18\x18 \x01(\x08\x12\x10\n\x08open_api\x18\x19 \x01(\x08\x12\x18\n\x10users_management\x18\x1a 
\x01(\x08\x12\x19\n\x11\x61q_coverage_ratio\x18\x1b \x01(\x08\x12\x18\n\x10\x61q_ri_management\x18\x1c \x01(\x08\x12\x18\n\x10\x61q_sp_management\x18\x1d \x01(\x08\x12\x1a\n\x12\x61q_ri_sp_instances\x18\x1e \x01(\x08\x12\x17\n\x0f\x61q_right_sizing\x18\x1f \x01(\x08\x12\x15\n\raq_scheduling\x18 \x01(\x08\x12\x16\n\x0ereport_filters\x18! \x01(\x08\x42Z\n\x19\x63loud.alphaus.blueapi.apiB\x15\x41piGroupRootUserProtoZ&github.com/alphauslabs/blue-sdk-go/apib\x06proto3'
)
_GROUPROOTUSER = _descriptor.Descriptor(
name='GroupRootUser',
full_name='blueapi.api.GroupRootUser',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='email', full_name='blueapi.api.GroupRootUser.email', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='password', full_name='blueapi.api.GroupRootUser.password', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='groupId', full_name='blueapi.api.GroupRootUser.groupId', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='groupName', full_name='blueapi.api.GroupRootUser.groupName', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='groupType', full_name='blueapi.api.GroupRootUser.groupType', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='meta', full_name='blueapi.api.GroupRootUser.meta', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='passwordUpdateTime', full_name='blueapi.api.GroupRootUser.passwordUpdateTime', index=6,
number=7, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='updateTime', full_name='blueapi.api.GroupRootUser.updateTime', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='userAccessId', full_name='blueapi.api.GroupRootUser.userAccessId', index=8,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='userId', full_name='blueapi.api.GroupRootUser.userId', index=9,
number=10, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='waveAvailabilityDays', full_name='blueapi.api.GroupRootUser.waveAvailabilityDays', index=10,
number=11, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='waveRegistered', full_name='blueapi.api.GroupRootUser.waveRegistered', index=11,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='waveStatus', full_name='blueapi.api.GroupRootUser.waveStatus', index=12,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=41,
serialized_end=345,
)
_FEATUREFLAGS = _descriptor.Descriptor(
name='FeatureFlags',
full_name='blueapi.api.FeatureFlags',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='dashboard_graph', full_name='blueapi.api.FeatureFlags.dashboard_graph', index=0,
number=1, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account', full_name='blueapi.api.FeatureFlags.usage_account', index=1,
number=2, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_graph', full_name='blueapi.api.FeatureFlags.usage_account_graph', index=2,
number=3, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_menu_account_edit', full_name='blueapi.api.FeatureFlags.usage_account_menu_account_edit', index=3,
number=4, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_menu_budget', full_name='blueapi.api.FeatureFlags.usage_account_menu_budget', index=4,
number=5, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_menu_budget_edit', full_name='blueapi.api.FeatureFlags.usage_account_menu_budget_edit', index=5,
number=6, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_menu_fees_fee', full_name='blueapi.api.FeatureFlags.usage_account_menu_fees_fee', index=6,
number=7, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_menu_fees_credit', full_name='blueapi.api.FeatureFlags.usage_account_menu_fees_credit', index=7,
number=8, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_menu_fees_refund', full_name='blueapi.api.FeatureFlags.usage_account_menu_fees_refund', index=8,
number=9, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_account_menu_fees_other_fees', full_name='blueapi.api.FeatureFlags.usage_account_menu_fees_other_fees', index=9,
number=10, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_report_download', full_name='blueapi.api.FeatureFlags.usage_report_download', index=10,
number=11, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_group', full_name='blueapi.api.FeatureFlags.usage_group', index=11,
number=12, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_group_graph', full_name='blueapi.api.FeatureFlags.usage_group_graph', index=12,
number=13, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_tag', full_name='blueapi.api.FeatureFlags.usage_tag', index=13,
number=14, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_tag_graph', full_name='blueapi.api.FeatureFlags.usage_tag_graph', index=14,
number=15, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_crosstag', full_name='blueapi.api.FeatureFlags.usage_crosstag', index=15,
number=16, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='usage_crosstag_graph', full_name='blueapi.api.FeatureFlags.usage_crosstag_graph', index=16,
number=17, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='ri_purchased', full_name='blueapi.api.FeatureFlags.ri_purchased', index=17,
number=18, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='ri_utilization', full_name='blueapi.api.FeatureFlags.ri_utilization', index=18,
number=19, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='ri_recommendation', full_name='blueapi.api.FeatureFlags.ri_recommendation', index=19,
number=20, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='sp_purchased', full_name='blueapi.api.FeatureFlags.sp_purchased', index=20,
number=21, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='invoice', full_name='blueapi.api.FeatureFlags.invoice', index=21,
number=22, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='invoice_download_csv_discount', full_name='blueapi.api.FeatureFlags.invoice_download_csv_discount', index=22,
number=23, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='invoice_download_csv_merged', full_name='blueapi.api.FeatureFlags.invoice_download_csv_merged', index=23,
number=24, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='open_api', full_name='blueapi.api.FeatureFlags.open_api', index=24,
number=25, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='users_management', full_name='blueapi.api.FeatureFlags.users_management', index=25,
number=26, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='aq_coverage_ratio', full_name='blueapi.api.FeatureFlags.aq_coverage_ratio', index=26,
number=27, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='aq_ri_management', full_name='blueapi.api.FeatureFlags.aq_ri_management', index=27,
number=28, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='aq_sp_management', full_name='blueapi.api.FeatureFlags.aq_sp_management', index=28,
number=29, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='aq_ri_sp_instances', full_name='blueapi.api.FeatureFlags.aq_ri_sp_instances', index=29,
number=30, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='aq_right_sizing', full_name='blueapi.api.FeatureFlags.aq_right_sizing', index=30,
number=31, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='aq_scheduling', full_name='blueapi.api.FeatureFlags.aq_scheduling', index=31,
number=32, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='report_filters', full_name='blueapi.api.FeatureFlags.report_filters', index=32,
number=33, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=348,
serialized_end=1304,
)
_GROUPROOTUSER.fields_by_name['meta'].message_type = _FEATUREFLAGS
DESCRIPTOR.message_types_by_name['GroupRootUser'] = _GROUPROOTUSER
DESCRIPTOR.message_types_by_name['FeatureFlags'] = _FEATUREFLAGS
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
GroupRootUser = _reflection.GeneratedProtocolMessageType('GroupRootUser', (_message.Message,), {
'DESCRIPTOR' : _GROUPROOTUSER,
'__module__' : 'api.grouprootuser_pb2'
# @@protoc_insertion_point(class_scope:blueapi.api.GroupRootUser)
})
_sym_db.RegisterMessage(GroupRootUser)
FeatureFlags = _reflection.GeneratedProtocolMessageType('FeatureFlags', (_message.Message,), {
'DESCRIPTOR' : _FEATUREFLAGS,
'__module__' : 'api.grouprootuser_pb2'
# @@protoc_insertion_point(class_scope:blueapi.api.FeatureFlags)
})
_sym_db.RegisterMessage(FeatureFlags)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
| 59.418052 | 2,443 | 0.761383 | 3,397 | 25,015 | 5.287901 | 0.077127 | 0.06235 | 0.098369 | 0.073651 | 0.804877 | 0.737794 | 0.708401 | 0.678617 | 0.665368 | 0.645438 | 0 | 0.041149 | 0.121767 | 25,015 | 420 | 2,444 | 59.559524 | 0.776503 | 0.013712 | 0 | 0.676768 | 1 | 0.007576 | 0.157015 | 0.128386 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.007576 | 0.010101 | 0 | 0.010101 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
209e17be4fbfb911cbe032d185f7a2121256e4aa | 7,249 | py | Python | myip/test_myip.py | orenhe/myip | bfaa8ba2090fc8bf933dfa031223500331fe6d62 | [
"MIT"
] | 2 | 2015-07-30T16:52:05.000Z | 2018-03-01T12:56:57.000Z | myip/test_myip.py | orenhe/myip | bfaa8ba2090fc8bf933dfa031223500331fe6d62 | [
"MIT"
] | 1 | 2017-09-12T07:50:15.000Z | 2017-09-12T07:50:15.000Z | myip/test_myip.py | orenhe/myip | bfaa8ba2090fc8bf933dfa031223500331fe6d62 | [
"MIT"
] | null | null | null | import unittest
import myip_cmd
import linux
import darwin
from mock import patch, Mock
SAMPLE_OUTPUT_LINUX = """1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:21:cc:b9:cb:d5 brd ff:ff:ff:ff:ff:ff
inet 1.2.3.4/8 brd 1.255.255.255 scope global eth0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 10:0b:a9:81:ac:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global wlan0
inet6 fe80::120b:a9ff:fe81:ac64/64 scope link
valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 6a:5f:ce:b7:85:a7 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0"""
SAMPLE_OUTPUT_LINUX_NO_IP_ASSIGNED = """2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:21:cc:b9:cb:d5 brd ff:ff:ff:ff:ff:ff"""
SAMPLE_OUTPUT_DARWIN = """lo: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
eth0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
inet6 fe80::214:51ff:fe68:77e0%en0 prefixlen 64 scopeid 0x4
inet 1.2.3.4 netmask 0xffffff00 broadcast 192.168.1.255
ether 00:14:51:68:77:e0
media: autoselect (10baseT/UTP <half-duplex>) status: active
supported media: none autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <half-duplex,hw-loopback> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <half-duplex,hw-loopback> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>
wlan0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
inet6 fe80::214:51ff:fe68:77e0%en0 prefixlen 64 scopeid 0x4
inet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255
ether 00:14:51:68:77:e0
media: autoselect (10baseT/UTP <half-duplex>) status: active
supported media: none autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <half-duplex,hw-loopback> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <half-duplex,hw-loopback> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>
en8: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 00:14:51:68:77:e1
media: autoselect (<unknown type>) status: inactive
supported media: none autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <half-duplex,hw-loopback> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <half-duplex,hw-loopback> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>
fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
lladdr 00:14:51:ff:fe:a8:a2:d2
media: autoselect <full-duplex> status: inactive
supported media: autoselect <full-duplex>
virbr0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
inet6 fe80::214:51ff:fe68:77e0%en0 prefixlen 64 scopeid 0x4
inet 192.168.122.1 netmask 0xffffff00 broadcast 192.168.1.255
ether 00:14:51:68:77:e0
media: autoselect (10baseT/UTP <half-duplex>) status: active
supported media: none autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <half-duplex,hw-loopback> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,hw-loopback> 10baseT/UTP <full-duplex,flow-control> 100baseTX <half-duplex> 100baseTX <half-duplex,hw-loopback> 100baseTX <full-duplex> 100baseTX <full-duplex,hw-loopback> 100baseTX <full-duplex,flow-control> 1000baseT <full-duplex> 1000baseT <full-duplex,hw-loopback> 1000baseT <full-duplex,flow-control>"""
SAMPLE_IP_HASH2 = {"wlan0": "192.168.1.100",
"eth0": "1.2.3.4",
"virbr0": "192.168.122.1",
"lo": "127.0.0.1",
}
class IpaddrLinuxParsingTests(unittest.TestCase):
@patch("commands.getstatusoutput")
def test_one_interface(self, mock_getoutput):
mock_getoutput.return_value = (0, SAMPLE_OUTPUT_LINUX)
self.assertEqual(SAMPLE_IP_HASH2, linux.parse_ip_addr_cmd(["wlan0"]))
@patch("commands.getstatusoutput")
def test_multiple_interfaces(self, mock_getoutput):
mock_getoutput.return_value = (0, SAMPLE_OUTPUT_LINUX)
self.assertEqual(SAMPLE_IP_HASH2, linux.parse_ip_addr_cmd([]))
@patch("commands.getstatusoutput")
def test_interface_with_no_ip_assigned(self, mock_getoutput):
mock_getoutput.return_value = (0, SAMPLE_OUTPUT_LINUX_NO_IP_ASSIGNED)
self.assertEqual({}, linux.parse_ip_addr_cmd([]))
class IfconfigDarwinParsingTests(unittest.TestCase):
@patch("commands.getstatusoutput")
def test_one_interface(self, mock_getoutput):
mock_getoutput.return_value = (0, SAMPLE_OUTPUT_DARWIN)
self.assertEqual(SAMPLE_IP_HASH2, darwin.parse_ip_addr_cmd(["wlan0"]))
@patch("commands.getstatusoutput")
def test_multiple_interfaces(self, mock_getoutput):
mock_getoutput.return_value = (0, SAMPLE_OUTPUT_DARWIN)
self.assertEqual(SAMPLE_IP_HASH2, darwin.parse_ip_addr_cmd([]))
class myipHighLevelTests(unittest.TestCase):
def tearDown(self):
reload(myip_cmd)
def test_get_primary_ip(self):
generate_ip_hash = Mock()
generate_ip_hash.return_value = SAMPLE_IP_HASH2
myip_cmd.parse_ip_addr_cmd = generate_ip_hash
config = myip_cmd.parse_args([])
ips = myip_cmd.get_ips(config)
self.assertEqual(["1.2.3.4"], ips)
def test_get_all_ips(self):
generate_ip_hash = Mock()
generate_ip_hash.return_value = SAMPLE_IP_HASH2
myip_cmd.parse_ip_addr_cmd = generate_ip_hash
config = myip_cmd.parse_args(["--all"])
ips = myip_cmd.get_ips(config)
self.assertEqual(["1.2.3.4", "192.168.1.100", "192.168.122.1"], ips)
def test_specific_interface(self):
generate_ip_hash = Mock()
generate_ip_hash.return_value = SAMPLE_IP_HASH2
myip_cmd.parse_ip_addr_cmd = generate_ip_hash
config = myip_cmd.parse_args(["wlan0"])
ips = myip_cmd.get_ips(config)
self.assertEqual(["192.168.1.100"], ips)
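The fixtures above assume `linux.parse_ip_addr_cmd` turns `ip addr` output into an interface-to-IPv4 dict. A minimal standalone sketch of that kind of parsing (the name `parse_ip_addr` and the trimmed sample are ours for illustration, not the project's actual implementation):

```python
import re

def parse_ip_addr(output):
    """Map each interface in `ip addr` output to its first IPv4 address."""
    ips = {}
    iface = None
    for line in output.splitlines():
        head = re.match(r"\d+: (\S+?):", line)
        if head:
            iface = head.group(1)  # e.g. "2: eth0: <...>" -> "eth0"
            continue
        addr = re.search(r"inet (\d+\.\d+\.\d+\.\d+)/", line)
        if addr and iface and iface not in ips:
            ips[iface] = addr.group(1)
    return ips

SAMPLE = """1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    inet 1.2.3.4/8 brd 1.255.255.255 scope global eth0"""

print(parse_ip_addr(SAMPLE))  # {'lo': '127.0.0.1', 'eth0': '1.2.3.4'}
```

Note that `inet6` lines never match the IPv4 pattern, so loopback and link-local IPv6 addresses are skipped for free.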
| 56.193798 | 464 | 0.721617 | 1,079 | 7,249 | 4.714551 | 0.164041 | 0.0747 | 0.062905 | 0.018872 | 0.778258 | 0.758994 | 0.7299 | 0.7299 | 0.713584 | 0.703558 | 0 | 0.113155 | 0.160022 | 7,249 | 128 | 465 | 56.632813 | 0.722286 | 0 | 0 | 0.386792 | 0 | 0.150943 | 0.657009 | 0.218129 | 0 | 0 | 0.007174 | 0 | 0.075472 | 1 | 0.084906 | false | 0 | 0.04717 | 0 | 0.160377 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
20ad899f281bedb9531e1f5ec4841c5b9adb1a16 | 42 | py | Python | src/app/managers/__init__.py | schwetzen/liblr | 408235a4f539a05f54f0376dbf9dbcd83957db03 | [
"Apache-2.0"
] | null | null | null | src/app/managers/__init__.py | schwetzen/liblr | 408235a4f539a05f54f0376dbf9dbcd83957db03 | [
"Apache-2.0"
] | 1 | 2018-12-07T22:15:28.000Z | 2018-12-07T22:15:28.000Z | src/app/managers/__init__.py | schwetzen/liblr | 408235a4f539a05f54f0376dbf9dbcd83957db03 | [
"Apache-2.0"
] | 2 | 2018-12-07T20:59:53.000Z | 2018-12-17T21:02:21.000Z | from app.managers.user import UserManager
| 21 | 41 | 0.857143 | 6 | 42 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20daf7e24a5b8a1e4d766d1305ea6060d6f5566b | 195 | py | Python | examples/flask_example/endpoints.py | exageraldo/connexion-auth-paths-extd | 2a1d161e25a47fe5f391850e1809cab052d95aff | [
"BSD-3-Clause"
] | 4 | 2022-02-07T03:44:24.000Z | 2022-03-11T00:58:10.000Z | examples/flask_example/endpoints.py | exageraldo/connexion-auth-paths-extd | 2a1d161e25a47fe5f391850e1809cab052d95aff | [
"BSD-3-Clause"
] | 2 | 2022-02-08T18:51:08.000Z | 2022-02-11T13:55:24.000Z | examples/flask_example/endpoints.py | exageraldo/connexion-auth-paths-extd | 2a1d161e25a47fe5f391850e1809cab052d95aff | [
"BSD-3-Clause"
] | null | null | null | from http import HTTPStatus
from flask import jsonify
def get_index():
return jsonify({}), HTTPStatus.NO_CONTENT
def get_welcome():
return jsonify({"welcome": "user"}), HTTPStatus.OK
| 17.727273 | 54 | 0.723077 | 25 | 195 | 5.52 | 0.6 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158974 | 195 | 10 | 55 | 19.5 | 0.841463 | 0 | 0 | 0 | 0 | 0 | 0.05641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
20de4d8bbddef0da6feee31c37ca467ba6d0d541 | 2,571 | py | Python | migrations/0011_auto_20180424_1816.py | redditnfl/draft-cards | 63779107a731ad741c8cf02b98a4b3d74cdcc3ac | [
"Apache-2.0",
"0BSD"
] | null | null | null | migrations/0011_auto_20180424_1816.py | redditnfl/draft-cards | 63779107a731ad741c8cf02b98a4b3d74cdcc3ac | [
"Apache-2.0",
"0BSD"
] | 10 | 2020-06-05T20:27:08.000Z | 2022-02-10T10:47:58.000Z | migrations/0011_auto_20180424_1816.py | redditnfl/draft-cards | 63779107a731ad741c8cf02b98a4b3d74cdcc3ac | [
"Apache-2.0",
"0BSD"
] | 1 | 2021-06-06T01:11:32.000Z | 2021-06-06T01:11:32.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-04-24 22:16
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('draftcardposter', '0010_settings_layout'),
]
operations = [
migrations.AddField(
model_name='settings',
name='last_submitted_overall',
field=models.IntegerField(default=0),
),
migrations.AlterField(
model_name='player',
name='position',
field=models.CharField(choices=[('QB', 'Quarterback'), ('WR', 'Wide Receiver'), ('CB', 'Cornerback'), ('K', 'Kicker'), ('P', 'Punter'), ('LS', 'Long Snapper'), ('DE', 'Defensive End'), ('ILB', 'Inside Linebacker'), ('DT', 'Defensive Tackle'), ('RB', 'Runningback'), ('OT', 'Offensive Tackle'), ('OG', 'Offensive Guard'), ('TE', 'Tight End'), ('S', 'Safety'), ('LB', 'Linebacker'), ('C', 'Center'), ('FB', 'Fullback'), ('DB', 'Defensive Back'), ('OLB', 'Outside Linebacker'), ('OL', 'Offensive Lineman'), ('SS', 'Strong Safety'), ('DL', 'Defensive Lineman'), ('NT', 'Nose Tackle'), ('FS', 'Free Safety'), ('BL', 'Bandleader'), ('4-3 DT', '4-3 Defensive Tackle'), ('4-3 DE', '4-3 Defensive End'), ('4-3 MLB', '4-3 Middle Linebacker'), ('4-3 OLB', '4-3 Outside Linebacker'), ('3-4 DT', '3-4 Defensive Tackle'), ('3-4 DE', '3-4 Defensive End'), ('3-4 ILB', '3-4 Inside Linebacker'), ('3-4 OLB', '3-4 Outside Linebacker')], max_length=3),
),
migrations.AlterField(
model_name='priority',
name='position',
field=models.CharField(choices=[('QB', 'Quarterback'), ('WR', 'Wide Receiver'), ('CB', 'Cornerback'), ('K', 'Kicker'), ('P', 'Punter'), ('LS', 'Long Snapper'), ('DE', 'Defensive End'), ('ILB', 'Inside Linebacker'), ('DT', 'Defensive Tackle'), ('RB', 'Runningback'), ('OT', 'Offensive Tackle'), ('OG', 'Offensive Guard'), ('TE', 'Tight End'), ('S', 'Safety'), ('LB', 'Linebacker'), ('C', 'Center'), ('FB', 'Fullback'), ('DB', 'Defensive Back'), ('OLB', 'Outside Linebacker'), ('OL', 'Offensive Lineman'), ('SS', 'Strong Safety'), ('DL', 'Defensive Lineman'), ('NT', 'Nose Tackle'), ('FS', 'Free Safety'), ('BL', 'Bandleader'), ('4-3 DT', '4-3 Defensive Tackle'), ('4-3 DE', '4-3 Defensive End'), ('4-3 MLB', '4-3 Middle Linebacker'), ('4-3 OLB', '4-3 Outside Linebacker'), ('3-4 DT', '3-4 Defensive Tackle'), ('3-4 DE', '3-4 Defensive End'), ('3-4 ILB', '3-4 Inside Linebacker'), ('3-4 OLB', '3-4 Outside Linebacker')], max_length=3),
),
]
| 82.935484 | 945 | 0.572929 | 319 | 2,571 | 4.573668 | 0.322884 | 0.021933 | 0.030158 | 0.039753 | 0.753941 | 0.753941 | 0.753941 | 0.753941 | 0.753941 | 0.753941 | 0 | 0.040903 | 0.172695 | 2,571 | 30 | 946 | 85.7 | 0.64504 | 0.025671 | 0 | 0.391304 | 1 | 0 | 0.482414 | 0.008793 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
20e2564c144b87aced328777abaacbec14f17d53 | 80 | py | Python | code/environment/__init__.py | OnlinePredictorTS/AOLForTimeSeries | ba2cd6aae7f367c6af879d0a4e58870050c00d04 | [
"Apache-2.0"
] | null | null | null | code/environment/__init__.py | OnlinePredictorTS/AOLForTimeSeries | ba2cd6aae7f367c6af879d0a4e58870050c00d04 | [
"Apache-2.0"
] | null | null | null | code/environment/__init__.py | OnlinePredictorTS/AOLForTimeSeries | ba2cd6aae7f367c6af879d0a4e58870050c00d04 | [
"Apache-2.0"
] | null | null | null | # utils init file
import environment.RealCore
import environment.RealExperiment | 20 | 33 | 0.8625 | 9 | 80 | 7.666667 | 0.777778 | 0.492754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 80 | 4 | 33 | 20 | 0.958333 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4589f3e31d0b1336799e6971089882f55b12a181 | 165 | py | Python | fec/fec/context.py | cnlucas/fec-cms | aa67a0d4c19a350420d2f8c4b4e6f93acb808639 | [
"CC0-1.0"
] | 39 | 2018-03-09T21:56:17.000Z | 2022-01-20T02:31:38.000Z | fec/fec/context.py | rbtrsv/fec-cms | 3136d1cf300ce1505d7035de38038e1c045937e6 | [
"CC0-1.0"
] | 3,183 | 2018-03-09T20:30:55.000Z | 2022-03-30T21:27:49.000Z | fec/fec/context.py | rbtrsv/fec-cms | 3136d1cf300ce1505d7035de38038e1c045937e6 | [
"CC0-1.0"
] | 19 | 2018-03-09T20:47:31.000Z | 2022-03-10T02:54:33.000Z | from django.conf import settings
def features(request):
return {'features': settings.FEATURES}
def show_settings(request):
return {'settings': settings}
| 16.5 | 42 | 0.733333 | 19 | 165 | 6.315789 | 0.526316 | 0.216667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157576 | 165 | 9 | 43 | 18.333333 | 0.863309 | 0 | 0 | 0 | 0 | 0 | 0.09697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
45b364fd2d1de876b0ad19cced5f3a960d63326c | 95 | py | Python | angr/engines/vex/__init__.py | Kyle-Kyle/angr | 345b2131a7a67e3a6ffc7d9fd475146a3e12f837 | [
"BSD-2-Clause"
] | 6,132 | 2015-08-06T23:24:47.000Z | 2022-03-31T21:49:34.000Z | angr/engines/vex/__init__.py | Kyle-Kyle/angr | 345b2131a7a67e3a6ffc7d9fd475146a3e12f837 | [
"BSD-2-Clause"
] | 2,272 | 2015-08-10T08:40:07.000Z | 2022-03-31T23:46:44.000Z | angr/engines/vex/__init__.py | Kyle-Kyle/angr | 345b2131a7a67e3a6ffc7d9fd475146a3e12f837 | [
"BSD-2-Clause"
] | 1,155 | 2015-08-06T23:37:39.000Z | 2022-03-31T05:54:11.000Z | from .claripy import *
from .light import *
from .heavy import *
from .lifter import VEXLifter
| 19 | 29 | 0.757895 | 13 | 95 | 5.538462 | 0.538462 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168421 | 95 | 4 | 30 | 23.75 | 0.911392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
45c4d377ea00865edc156c28508f5bacc31a33d0 | 31,929 | py | Python | aau01/kmp.py | Micoael/3b1b-styled-video-code | 036b339573e48f807e215bc7c7be9c6fe32b601d | [
"Apache-2.0"
] | 7 | 2020-03-02T23:56:39.000Z | 2020-06-08T15:05:46.000Z | my3b1b/old/kmp.py | Micoael/3b1b-styled-video-code | 036b339573e48f807e215bc7c7be9c6fe32b601d | [
"Apache-2.0"
] | null | null | null | my3b1b/old/kmp.py | Micoael/3b1b-styled-video-code | 036b339573e48f807e215bc7c7be9c6fe32b601d | [
"Apache-2.0"
] | null | null | null | from manimlib.imports import *
from PrimoCreature import *
class StartingScene(Scene):
def construct(_):
name = TextMobject("<","/",">").shift(2*UP).scale(2)
mane = TextMobject("Micoael ","$\\rho$","rimo")
name[0].shift(8*LEFT).set_color(BLUE)
name[1].shift(8*UP).set_color(LIGHT_BROWN)
name[2].shift(8*RIGHT).set_color(BLUE)
_.play(name[0].shift,(8*RIGHT),
name[1].shift,(8*DOWN),
name[2].shift,(8*LEFT),)
mane[1].shift(0.1*UP)
_.play(FadeInFromDown(mane))
class StrMatcher:
@staticmethod
def gen_next(s2):
# build the KMP "next" table: next_list[j] is the fallback position after a mismatch at j
k = -1
n = len(s2)
j = 0
next_list = [0 for i in range(n)]
next_list[0] = -1
while j < n-1:
if k == -1 or s2[k] == s2[j]:
k += 1
j += 1
next_list[j] = k
else:
k = next_list[k]
return next_list
@staticmethod
def match(s1, s2, next_list):
# return the first index of s2 in s1 using the next table, or -1 if no match
ans = -1
i = 0
j = 0
while i < len(s1):
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
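StrMatcher works without manim, so its behaviour on the strings the scenes animate can be checked directly. Below is a standalone copy of the two helpers (repeated here so the sketch runs on its own; the comparison in `match` is lightly reordered so the `j == -1` case never indexes `s2[-1]`):

```python
def gen_next(s2):
    """KMP next table: next_list[j] is where to resume after a mismatch at j (next_list[0] = -1)."""
    k, j = -1, 0
    next_list = [0] * len(s2)
    next_list[0] = -1
    while j < len(s2) - 1:
        if k == -1 or s2[k] == s2[j]:
            k += 1
            j += 1
            next_list[j] = k
        else:
            k = next_list[k]
    return next_list

def match(s1, s2, next_list):
    """First index of s2 in s1 (KMP), or -1."""
    i = j = 0
    while i < len(s1):
        if j == -1 or s1[i] == s2[j]:
            i += 1
            j += 1
            if j == len(s2):
                return i - len(s2)
        else:
            j = next_list[j]
    return -1

print(gen_next("awsl"))  # [-1, 0, 0, 0]
print(match("aaaaaaafsdiaaawslsdfsaawsawslwsda", "awsl", gen_next("awsl")))  # 13
```

The two values printed are exactly what the Introduction and BasicAlgorithm scenes visualize: "awsl" has no self-overlap, and its first occurrence starts at index 13.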
class Introduction(Scene):
def construct(self):
primo = PrimoCreature(color=BLUE).shift(2*DOWN+4*LEFT)
self.play(FadeIn(primo))
texts = TextMobject("aaaaaaafsdiaaawsss\\\\dfsaaws","awsl","wsdawasa\\\\dwaawwaslwasawl").shift(2*RIGHT+DOWN)
primo.look_at(texts)
self.play(Write(texts))
palabras_ale = TextMobject("awsl ???")
self.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
))
self.wait(1.5)
self.play(texts[1].set_color,YELLOW,
texts[1].scale,2)
self.play(texts[1].scale,0.5)
palabras_ale = TextMobject("让计算机完成字符串查找?")  # "Have the computer do the string search?"
primo.look_at(palabras_ale)
self.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
))
self.wait(5)
palabras_ale = TextMobject("这不是很简单的吗?")  # "Isn't that really easy?"
primo.look_at(palabras_ale)
self.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
))
class BasicAlgorithm(Scene):
def construct(_):
_.str1="aaaaaaafsdiaaawslsdfsaawsawslwsda"
_.str2="awsl"
_.init(_.str1,_.str2)
_.matchord(_.str1,_.str2)
def init(_,str1,str2):
_.a = 0
_.b = 0
_.comp = 0
_.len1 = len(str1)
_.len2 = len(str2)
_.moshi = VGroup()
_.yuanlai = VGroup()
_.next = VGroup()
_.rect = Rectangle(width=0.5,height=1,fill_color=GREEN,fill_opacity=0.3).move_to(np.array([-6,2.75,0]))
for i in range(0,len(str1)):
pos = np.array((-6+0.5*i,3,0.0))
square = TextMobject(str1[i])
square.move_to(pos)
_.moshi.add(square)
for i in range(0,len(str2)):
pos = np.array((-6+0.5*i,2.5,0.0))
square = TextMobject(str2[i])
_.yuanlai.add(square)
square.move_to(pos)
for i in range(0,len(str2)):
pos = np.array((-6+0.5*i,2,0.0))
square = TextMobject(str(StrMatcher.gen_next(str2)[i]))
_.next.add(square)
square.move_to(pos)
_.addTextsToScreen()
_.add(_.rect)
def addTextsToScreen(_):
_.play(Write(_.yuanlai),Write(_.moshi))
def shifts(_,val):
_.play(_.yuanlai.shift,(val*0.5*RIGHT),
_.next.shift,(val*0.5*RIGHT), run_time=0.5)
_.b += val
def shiftto(_,val):
_.play(_.yuanlai.shift,((val-_.b)*0.5*RIGHT),
_.next.shift,((val-_.b)*0.5*RIGHT) ,run_time=0.5)
_.b = val
def shiftgreen(_,val):
_.play(_.rect.shift,(val*0.5*RIGHT),run_time=0.5)
_.comp += val
def shiftgto(_,val):
_.play(_.rect.shift,((val-_.comp)*0.5*RIGHT),run_time=0.5)
_.comp = val
def compare(_,dig):
_.play(Write(_.rect))
def alignw(_,bb,aa):
_.shiftto(bb-aa)
_.shiftgto(bb)
if _.str1[bb]==_.str2[aa]:
_.play(_.rect.set_color,(GREEN),run_time = 0.5)
else:
_.play(_.rect.set_color,(RED),run_time = 0.5)
def matchord(_,t, p):
i, j = 0, 0
n, m = len(t), len(p)
while i < n and j < m:
_.alignw(i,j)
if t[i] == p[j]:
i, j = i+1, j+1
else:
i, j = i-j+1, 0
if j == m:
return i-j
return -1
def match(_,s1, s2, next_list):
ans = -1
i = 0
j = 0
while i < len(s1):
_.alignw(i,j)
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
class ProblemWithCommonAlgorithm(Scene):
def construct(_):
_.str1="aaaaaaafsdiaaawslsdfsaawsawslwsda"
_.str2="aaaf"
_.init(_.str1,_.str2)
_.matchord(_.str1,_.str2)
def init(_,str1,str2):
_.a = 0
_.b = 0
_.comp = 0
_.len1 = len(str1)
_.len2 = len(str2)
_.moshi = VGroup()
_.yuanlai = VGroup()
_.next = VGroup()
_.rect = Rectangle(width=0.5,height=1,fill_color=GREEN,fill_opacity=0.3).move_to(np.array([-6,2.75,0]))
for i in range(0,len(str1)):
pos = np.array((-6+0.5*i,3,0.0))
square = TextMobject(str1[i])
square.move_to(pos)
_.moshi.add(square)
for i in range(0,len(str2)):
pos = np.array((-6+0.5*i,2.5,0.0))
square = TextMobject(str2[i])
_.yuanlai.add(square)
square.move_to(pos)
for i in range(0,len(str2)):
pos = np.array((-6+0.5*i,2,0.0))
square = TextMobject(str(StrMatcher.gen_next(str2)[i]))
_.next.add(square)
square.move_to(pos)
_.addTextsToScreen()
_.add(_.rect)
def addTextsToScreen(_):
_.play(Write(_.yuanlai),Write(_.moshi))
def shifts(_,val):
_.play(_.yuanlai.shift,(val*0.5*RIGHT),
run_time=0.5)
_.b += val
def shiftto(_,val):
_.play(_.yuanlai.shift,((val-_.b)*0.5*RIGHT),
run_time=0.5)
_.b = val
def shiftgreen(_,val):
_.play(_.rect.shift,(val*0.5*RIGHT),run_time=0.5)
_.comp += val
def shiftgto(_,val):
_.play(_.rect.shift,((val-_.comp)*0.5*RIGHT),run_time=0.5)
_.comp = val
def compare(_,dig):
_.play(Write(_.rect))
def alignw(_,bb,aa):
_.shiftto(bb-aa)
_.shiftgto(bb)
if _.str1[bb]==_.str2[aa]:
_.play(_.rect.set_color,(GREEN),run_time = 0.5)
else:
_.play(_.rect.set_color,(RED),run_time = 0.5)
def matchord(_,t, p):
i, j = 0, 0
n, m = len(t), len(p)
while i < n and j < m:
_.alignw(i,j)
if t[i] == p[j]:
i, j = i+1, j+1
else:
i, j = i-j+1, 0
if j == m:
return i-j
return -1
def match(_,s1, s2, next_list):
ans = -1
i = 0
j = 0
while i < len(s1):
_.alignw(i,j)
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
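The redundancy this scene animates can be counted. A standalone sketch using the scene's pattern "aaaf" against its run of leading a's (the comparison-counting wrappers `naive_count` and `kmp_count` are ours, not part of the scene):

```python
def naive_count(t, p):
    """(comparisons, first match index) for the shift-by-one matcher."""
    comps = 0
    for i in range(len(t) - len(p) + 1):
        for j in range(len(p)):
            comps += 1
            if t[i + j] != p[j]:
                break
        else:
            return comps, i
    return comps, -1

def kmp_count(t, p, nxt):
    """(comparisons, first match index) when mismatches fall back through nxt."""
    comps, i, j = 0, 0, 0
    while i < len(t):
        if j == -1:
            i, j = i + 1, 0
            continue
        comps += 1
        if t[i] == p[j]:
            i, j = i + 1, j + 1
            if j == len(p):
                return comps, i - j
        else:
            j = nxt[j]
    return comps, -1

nxt = [-1, 0, 1, 2]  # StrMatcher.gen_next("aaaf")
print(naive_count("aaaaaaaf", "aaaf"))     # (20, 4)
print(kmp_count("aaaaaaaf", "aaaf", nxt))  # (12, 4)
```

Both matchers find the pattern at index 4, but the naive one re-checks the same a's over and over; the gap widens as the run of repeats grows.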
class HowToImprove(Scene):
def construct(_):
primo = PrimoCreature(color=BLUE).shift(2*DOWN+4*LEFT)
_.play(FadeIn(primo))
palabras_ale = TextMobject("减少重复的移动?!")  # "Cut down the redundant shifts?!"
_.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
))
_.wait(1.5)
_.clear()
ori = TextMobject("MicoaelPrim","p")
patt = TextMobject("MicoaelPrim","o")
al = VGroup(ori,patt).arrange(DOWN)
_.play(Write(al))
_.play(ori[1].set_color,RED,
patt[1].set_color,RED)
_.play(FadeOut(al))
ori = TextMobject("MicoaelMico","p")
patt = TextMobject("Mico","ael","Mico","o")
al = VGroup(ori,patt).arrange(DOWN)
_.play(Write(al))
_.play(ori[1].set_color,RED,
patt[3].set_color,RED)
_.play(patt[0].set_color,YELLOW,
patt[2].set_color,YELLOW)
_.play(patt.shift,1.8*RIGHT,)
_.wait(3)
_.clear()
_.play(FadeIn(primo))
palabras_ale = TextMobject("也就是说找到前后长度对称\\\\的最大长度是吧?")  # "So we want the greatest length over which prefix and suffix agree, right?"
_.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
))
_.wait(3)
_.clear()
exam = TextMobject("M","i","c","o","a","M","i","c","o").scale(2)
_.play(Write(exam))
_.play( exam[0].set_color,YELLOW,
exam[5].set_color,YELLOW)
_.wait(0.5)
_.play( exam[1].set_color,YELLOW,
exam[6].set_color,YELLOW)
_.wait(0.5)
_.play( exam[2].set_color,YELLOW,
exam[7].set_color,YELLOW)
_.wait(0.5)
_.play( exam[3].set_color,YELLOW,
exam[8].set_color,YELLOW)
txt = TextMobject("$G=4$").shift(2*UP)
_.play(Transform(exam.copy(),txt))
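The G the scene lands on is the length of the longest proper prefix of the pattern that is also a suffix, i.e. its longest border. A brute-force sketch of that definition (`longest_border` is our name for it, not the video's):

```python
def longest_border(s):
    """Largest g < len(s) with s[:g] == s[len(s)-g:] (proper prefix equal to suffix)."""
    for g in range(len(s) - 1, 0, -1):
        if s[:g] == s[-g:]:
            return g
    return 0

print(longest_border("MicoaMico"))  # 4, the border "Mico"
```

The quadratic scan here is only for clarity; the later scenes build the same quantity incrementally.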
class TheConnectionBetweenPatternAndTheOrigin(Scene):
def construct(_):
pat = "MicoaMico"
_.init("MicoaMickcMicoa",pat)
_.shiftgreen(8)
_.play(_.yuanlai[0].set_color,YELLOW,_.yuanlai[5].set_color,YELLOW)
_.play(_.yuanlai[1].set_color,YELLOW,_.yuanlai[6].set_color,YELLOW)
_.play(_.yuanlai[2].set_color,YELLOW,_.yuanlai[7].set_color,YELLOW)
size = TextMobject("$G=3$")
_.play(FadeInFromDown(size))
_.play(FocusOn(_.yuanlai[3]))
_.shifts(5)
_.wait(3)
mask = TextMobject("----","我们不知道原来的字符串","----").add_background_rectangle().move_to(_.moshi)  # "we do not know the text being searched"
_.play(_.yuanlai[0].set_color,WHITE,_.yuanlai[5].set_color,WHITE,run_time=0.1)
_.play(_.yuanlai[1].set_color,WHITE,_.yuanlai[6].set_color,WHITE,run_time=0.1)
_.play(_.yuanlai[2].set_color,WHITE,_.yuanlai[7].set_color,WHITE,run_time=0.1)
_.play(Write(mask))
gr = VGroup()
for i in range (len(pat)+1):
stri = ""
for j in range(i):
stri=stri+(pat[j])
gr.add(TextMobject(stri))
gr.arrange(DOWN).shift(0.5*DOWN)
_.play(FadeOut(size))
_.play(Transform(_.yuanlai.copy(),gr))
primo = PrimoCreature(color=BLUE).shift(2*DOWN+4*LEFT)
_.play(FadeIn(primo))
palabras_ale = TextMobject("也就是说我们把这一堆东西\\\\的最长公共前后缀算出来就好了吧")  # "So we just compute the longest common prefix/suffix of each of these"
_.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
) )
_.wait(3)
_.clear()
_.add(gr)
for i in range(1,len(gr)):
_.play(ShowCreationThenDestructionAround(gr[i]))
primo = PrimoCreature(color=BLUE).shift(2*DOWN+4*LEFT)
_.play(FadeIn(primo))
palabras_ale = TextMobject("看上去好简单的样子!")  # "That looks really simple!"
_.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
) )
_.wait(3)
def init(_,str1,str2):
_.a = 0
_.b = 0
_.comp = 0
_.len1 = len(str1)
_.len2 = len(str2)
_.moshi = VGroup()
_.yuanlai = VGroup()
_.next = VGroup()
_.rect = Rectangle(width=0.5,height=1,fill_color=GREEN,fill_opacity=0.3).move_to(np.array([-3,2.75,0]))
for i in range(0,len(str1)):
pos = np.array((-3+0.5*i,3,0.0))
square = TextMobject(str1[i])
square.move_to(pos)
_.moshi.add(square)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2.5,0.0))
square = TextMobject(str2[i])
_.yuanlai.add(square)
square.move_to(pos)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2,0.0))
square = TextMobject(str(i))
_.next.add(square)
square.move_to(pos)
_.addTextsToScreen()
_.add(_.rect)
_.play(_.rect.set_color,RED)
def addTextsToScreen(_):
_.play(Write(_.yuanlai),Write(_.moshi),Write(_.next))
def shifts(_,val):
_.play(_.yuanlai.shift,(val*0.5*RIGHT),
_.next.shift,(val*0.5*RIGHT), run_time=0.5)
_.b += val
def shiftto(_,val):
_.play(_.yuanlai.shift,((val-_.b)*0.5*RIGHT),
_.next.shift,((val-_.b)*0.5*RIGHT) ,run_time=0.5)
_.b = val
def shiftgreen(_,val):
_.play(_.rect.shift,(val*0.5*RIGHT),run_time=0.5)
_.comp += val
def shiftgto(_,val):
_.play(_.rect.shift,((val-_.comp)*0.5*RIGHT),run_time=0.5)
_.comp = val
def compare(_):
_.play(Write(_.rect))
def alignw(_,bb,aa):
_.shiftto(bb-aa)
_.shiftgto(bb)
if _.str1[bb]==_.str2[aa]:
_.play(_.rect.set_color,(GREEN),run_time = 0.5)
else:
_.play(_.rect.set_color,(RED),run_time = 0.5)
def matchord(_,t, p):
i, j = 0, 0
n, m = len(t), len(p)
while i < n and j < m:
_.alignw(i,j)
if t[i] == p[j]:
i, j = i+1, j+1
else:
i, j = i-j+1, 0
if j == m:
return i-j
return -1
def match(_,s1, s2, next_list):
ans = -1
i = 0
j = 0
while i < len(s1):
_.alignw(i,j)
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
class HardToFigure(Scene):
def construct(_):
_.str1="AGCAxxx"
_.str2="AGCT"
_.init(_.str1,_.str2)
_.match(_.str1,_.str2,StrMatcher.gen_next(_.str2))
def init(_,str1,str2):
_.a = 0
_.b = 0
_.comp = 0
_.len1 = len(str1)
_.len2 = len(str2)
_.moshi = VGroup()
_.yuanlai = VGroup()
_.next = VGroup()
_.rect = Rectangle(width=0.5,height=1,fill_color=GREEN,fill_opacity=0.3).move_to(np.array([-3,2.75,0]))
for i in range(0,len(str1)):
pos = np.array((-3+0.5*i,3,0.0))
square = TextMobject(str1[i])
square.move_to(pos)
_.moshi.add(square)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2.5,0.0))
square = TextMobject(str2[i])
_.yuanlai.add(square)
square.move_to(pos)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2,0.0))
square = TextMobject(str(StrMatcher.gen_next(str2)[i]))
_.next.add(square)
square.move_to(pos)
_.addTextsToScreen()
_.add(_.rect)
def addTextsToScreen(_):
_.play(Write(_.yuanlai),Write(_.moshi),Write(_.next))
def shifts(_,val):
_.play(_.yuanlai.shift,(val*0.5*RIGHT),
_.next.shift,(val*0.5*RIGHT), run_time=0.5)
_.b += val
def shiftto(_,val):
_.play(_.yuanlai.shift,((val-_.b)*0.5*RIGHT),
_.next.shift,((val-_.b)*0.5*RIGHT) ,run_time=0.5)
_.b = val
def shiftgreen(_,val):
_.play(_.rect.shift,(val*0.5*RIGHT),run_time=0.5)
_.comp += val
def shiftgto(_,val):
_.play(_.rect.shift,((val-_.comp)*0.5*RIGHT),run_time=0.5)
_.comp = val
def compare(_,dig):
_.play(Write(_.rect))
def alignw(_,bb,aa):
_.shiftto(bb-aa)
_.shiftgto(bb)
if _.str1[bb]==_.str2[aa]:
_.play(_.rect.set_color,(GREEN),run_time = 0.5)
else:
_.play(_.rect.set_color,(RED),run_time = 0.5)
def matchord(_,t, p):
i, j = 0, 0
n, m = len(t), len(p)
while i < n and j < m:
_.alignw(i,j)
if t[i] == p[j]:
i, j = i+1, j+1
else:
i, j = i-j+1, 0
if j == m:
return i-j
return -1
def match(_,s1, s2, next_list):
ans = -1
i = 0
j = 0
while i < len(s1):
_.alignw(i,j)
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
class UnderstandRousThought(Scene):
def construct(_):
pat = "MicoaMico"
gr = VGroup()
for i in range (len(pat)+1):
stri = ""
for j in range(i):
stri=stri+(pat[j])
gr.add(TextMobject(stri))
gr.arrange(DOWN).shift(0.5*DOWN)
_.play(Write(gr))
primo = PrimoCreature(color=BLUE).shift(2*DOWN+4*LEFT)
_.play(FadeIn(primo))
palabras_ale = TextMobject("遍历一遍不就好了吗?!")  # "Wouldn't a single pass just work?!"
_.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
))
_.wait(3)
_.clear()
_.add(gr)
primo = PrimoCreature(color=LIGHT_BROWN).shift(2*DOWN+4*RIGHT).flip()
_.play(FadeIn(primo))
palabras_ale = TextMobject("试着递推一下!")  # "Try working it out as a recurrence!"
_.play(PrimoCreatureSays(
primo, palabras_ale,
bubble_kwargs={"height": 5, "width": 6},
target_mode="plain"
))
m = ValueTracker(0)
def upd(obj):
obj.tex_string = "G="+str( int(m.get_value()))
_.G = TextMobject("G=",str(int(m.get_value()))).add_updater(upd).shift(2*DOWN)
_.wait(3)
_.clear()
_.txt0 = TextMobject("A","G","C","T","A","G","C","A","G","C","T","G","C","A")
_.show(0)
_.add(_.G)
_.moveto(0)
_.wait(1)
_.show(1)
_.changeval(0)
_.moveto(1)
_.wait(1)
_.show(2)
_.changeval(0)
_.moveto(2)
_.wait(1)
_.show(3)
_.changeval(0)
_.moveto(3)
_.wait(1)
_.show(4)
_.compare(0,4)
_.changeval(1)
_.moveto(4)
_.cls()
_.cc(0,4)
_.wait(1)
_.show(5)
_.compare(1,5)
_.changeval(2)
_.moveto(5)
_.cc(1,5)
_.wait(1)
_.show(6)
_.compare(2,6)
_.changeval(3)
_.moveto(6)
_.cc(2,6)
_.wait(1)
_.show(7)
_.compare(3,7)
_.changeval("?")
a = TextMobject("每检验到一个不匹配的就要归零吗?").add_background_rectangle()  # "Do we reset to zero on every mismatch?"
_.play(Write(a))
_.wait(3)
_.play(Uncreate(a))
a = TextMobject("有没有更小的区间让他们相同呢?").add_background_rectangle()  # "Is there a smaller interval where they still match?"
_.play(Write(a))
_.wait(3)
_.play(Uncreate(a))
a = TextMobject("如果有,该怎么找到呢?").add_background_rectangle()  # "If there is, how do we find it?"
_.play(Write(a))
_.wait(3)
_.play(Uncreate(a))
_.compare(6,6)
a = TextMobject("下一个公共前后缀有可能存在这里的next").add_background_rectangle().shift(2.5*UP)  # "The next common prefix-suffix may be found via this position's next value"
_.play(Write(a))
_.wait(1)
a = TextMobject("如果发现他两个字符相等或$next$是$0$就不用继续下去了").add_background_rectangle().shift(2*UP)  # "If the two characters are equal, or next is 0, we can stop"
_.play(Write(a))
_.wait(1)
a = TextMobject("(到头也没发现相同的)").add_background_rectangle().shift(1.5*UP)  # "(we reached the start without finding a match)"
_.play(Write(a))
_.wait(1)
_.compare(6,6)
_.compare(3,3)
_.compare(0,0)
_.changeval(1)
_.moveto(7)
_.cls()
_.cc(0,7)
_.show(8)
_.compare(1,8)
_.changeval(2)
_.moveto(8)
_.cc(1,8)
_.show(9)
_.changeval(3)
_.moveto(9)
_.compare(2,9)
_.cc(2,9)
_.wait(1)
_.show(10)
_.changeval(4)
_.compare(3,10)
_.cc(3,10)
_.moveto(10)
_.wait(1)
_.show(11)
_.compare(10,10)
_.compare(4,4)
_.compare(0,0)
_.changeval(0)
_.moveto(11)
_.wait(1)
_.cls()
_.show(12)
_.compare(11,11)
_.compare(0,0)
_.changeval(0)
_.moveto(12)
_.wait(1)
_.show(13)
_.compare(12,12)
_.compare(0,0)
_.changeval(1)
_.moveto(13)
_.wait(1)
_.txt0.shift(0.5*LEFT)
_.wait(3)
def cc(_,a,b):
_.play(_.txt0[a].set_color,BLUE,_.txt0[b].set_color,BLUE)
def cls(_):
for i in range (len(_.txt0)):
_.txt0[i].set_color(WHITE)
def moveto(_,to):
p = _.G[1].copy()
_.play(p.move_to,_.txt0[to],p.shift,0.8*DOWN)
def changeval(_,a):
_.G.become(TextMobject("G=",str(a)).shift(2*DOWN))
def compare(_,a,b):
_.play(ShowCreationThenDestructionAround(_.txt0[a]),ShowCreationThenDestructionAround(_.txt0[b]))
def show(_,m):
_.play(FadeInFromDown(_.txt0[m]))
def init(_,str1,str2):
_.a = 0
_.b = 0
_.comp = 0
_.len1 = len(str1)
_.len2 = len(str2)
_.moshi = VGroup()
_.yuanlai = VGroup()
_.next = VGroup()
_.rect = Rectangle(width=0.5,height=1,fill_color=GREEN,fill_opacity=0.3).move_to(np.array([-3,2.75,0]))
for i in range(0,len(str1)):
pos = np.array((-3+0.5*i,3,0.0))
square = TextMobject(str1[i])
square.move_to(pos)
_.moshi.add(square)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2.5,0.0))
square = TextMobject(str2[i])
_.yuanlai.add(square)
square.move_to(pos)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2,0.0))
square = TextMobject(str(i))
_.next.add(square)
square.move_to(pos)
_.addTextsToScreen()
_.add(_.rect)
_.play(_.rect.set_color,RED)
def addTextsToScreen(_):
_.play(Write(_.yuanlai),Write(_.moshi),Write(_.next))
def shifts(_,val):
_.play(_.yuanlai.shift,(val*0.5*RIGHT),
_.next.shift,(val*0.5*RIGHT), run_time=0.5)
_.b += val
def shiftto(_,val):
_.play(_.yuanlai.shift,((val-_.b)*0.5*RIGHT),
_.next.shift,((val-_.b)*0.5*RIGHT) ,run_time=0.5)
_.b = val
def shiftgreen(_,val):
_.play(_.rect.shift,(val*0.5*RIGHT),run_time=0.5)
_.comp += val
def shiftgto(_,val):
_.play(_.rect.shift,((val-_.comp)*0.5*RIGHT),run_time=0.5)
_.comp = val
def alignw(_,bb,aa):
_.shiftto(bb-aa)
_.shiftgto(bb)
if _.str1[bb]==_.str2[aa]:
_.play(_.rect.set_color,(GREEN),run_time = 0.5)
else:
_.play(_.rect.set_color,(RED),run_time = 0.5)
def matchord(_,t, p):
i, j = 0, 0
n, m = len(t), len(p)
while i < n and j < m:
_.alignw(i,j)
if t[i] == p[j]:
i, j = i+1, j+1
else:
i, j = i-j+1, 0
if j == m:
return i-j
return -1
def match(_,s1, s2, next_list):
ans = -1
i = 0
j = 0
while i < len(s1):
_.alignw(i,j)
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
class AlmostDone(Scene):
def construct(_):
_.str1="1我们1我们11我们1我终于1完成了1next数组1的查找"
_.str2="1我们1我终于1"
_.init(_.str1,_.str2)
_.match(_.str1,_.str2,StrMatcher.gen_next(_.str2))
def init(_,str1,str2):
_.a = 0
_.b = 0
_.comp = 0
_.len1 = len(str1)
_.len2 = len(str2)
_.moshi = VGroup()
_.yuanlai = VGroup()
_.next = VGroup()
_.rect = Rectangle(width=0.5,height=1,fill_color=GREEN,fill_opacity=0.3).move_to(np.array([-3,2.75,0]))
for i in range(0,len(str1)):
pos = np.array((-3+0.5*i,3,0.0))
square = TextMobject(str1[i])
square.move_to(pos)
_.moshi.add(square)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2.5,0.0))
square = TextMobject(str2[i])
_.yuanlai.add(square)
square.move_to(pos)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2,0.0))
square = TextMobject(str(StrMatcher.gen_next(str2)[i]))
_.next.add(square)
square.move_to(pos)
_.addTextsToScreen()
_.add(_.rect)
def addTextsToScreen(_):
_.play(Write(_.yuanlai),Write(_.moshi),Write(_.next))
def shifts(_,val):
_.play(_.yuanlai.shift,(val*0.5*RIGHT),
_.next.shift,(val*0.5*RIGHT), run_time=0.5)
_.b += val
def shiftto(_,val):
_.play(_.yuanlai.shift,((val-_.b)*0.5*RIGHT),
_.next.shift,((val-_.b)*0.5*RIGHT) ,run_time=0.5)
_.b = val
def shiftgreen(_,val):
_.play(_.rect.shift,(val*0.5*RIGHT),run_time=0.5)
_.comp += val
def shiftgto(_,val):
_.play(_.rect.shift,((val-_.comp)*0.5*RIGHT),run_time=0.5)
_.comp = val
def compare(_,dig):
_.play(Write(_.rect))  # fixed: `rect` was an undefined name; the highlight rectangle was presumably meant
def alignw(_,bb,aa):
_.shiftto(bb-aa)
_.shiftgto(bb)
if _.str1[bb]==_.str2[aa]:
_.play(_.rect.set_color,(GREEN),run_time = 0.5)
else:
_.play(_.rect.set_color,(RED),run_time = 0.5)
def matchord(_,t, p):
i, j = 0, 0
n, m = len(t), len(p)
while i < n and j < m:
_.alignw(i,j)
if t[i] == p[j]:
i, j = i+1, j+1
else:
i, j = i-j+1, 0
if j == m:
return i-j
return -1
def match(_,s1, s2, next_list):
ans = -1
i = 0
j = 0
while i < len(s1):
_.alignw(i,j)
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
class Demostrate3(Scene):
def construct(_):
_.str1="ji0de0san0lian0"
_.str2="0san0lian0"
_.init(_.str1,_.str2)
_.match(_.str1,_.str2,StrMatcher.gen_next(_.str2))
def init(_,str1,str2):
_.a = 0
_.b = 0
_.comp = 0
_.len1 = len(str1)
_.len2 = len(str2)
_.moshi = VGroup()
_.yuanlai = VGroup()
_.next = VGroup()
_.rect = Rectangle(width=0.5,height=1,fill_color=GREEN,fill_opacity=0.3).move_to(np.array([-3,2.75,0]))
for i in range(0,len(str1)):
pos = np.array((-3+0.5*i,3,0.0))
square = TextMobject(str1[i])
square.move_to(pos)
_.moshi.add(square)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2.5,0.0))
square = TextMobject(str2[i])
_.yuanlai.add(square)
square.move_to(pos)
for i in range(0,len(str2)):
pos = np.array((-3+0.5*i,2,0.0))
square = TextMobject(str(StrMatcher.gen_next(str2)[i]))
_.next.add(square)
square.move_to(pos)
_.addTextsToScreen()
_.add(_.rect)
def addTextsToScreen(_):
_.play(Write(_.yuanlai),Write(_.moshi),Write(_.next))
def shifts(_,val):
_.play(_.yuanlai.shift,(val*0.5*RIGHT),
_.next.shift,(val*0.5*RIGHT), run_time=0.5)
_.b += val
def shiftto(_,val):
_.play(_.yuanlai.shift,((val-_.b)*0.5*RIGHT),
_.next.shift,((val-_.b)*0.5*RIGHT) ,run_time=0.5)
_.b = val
def shiftgreen(_,val):
_.play(_.rect.shift,(val*0.5*RIGHT),run_time=0.5)
_.comp += val
def shiftgto(_,val):
_.play(_.rect.shift,((val-_.comp)*0.5*RIGHT),run_time=0.5)
_.comp = val
def compare(_,dig):
_.play(Write(_.rect))  # fixed: `rect` was an undefined name; the highlight rectangle was presumably meant
def alignw(_,bb,aa):
_.shiftto(bb-aa)
_.shiftgto(bb)
if _.str1[bb]==_.str2[aa]:
_.play(_.rect.set_color,(GREEN),run_time = 0.5)
else:
_.play(_.rect.set_color,(RED),run_time = 0.5)
def matchord(_,t, p):
i, j = 0, 0
n, m = len(t), len(p)
while i < n and j < m:
_.alignw(i,j)
if t[i] == p[j]:
i, j = i+1, j+1
else:
i, j = i-j+1, 0
if j == m:
return i-j
return -1
def match(_,s1, s2, next_list):
ans = -1
i = 0
j = 0
while i < len(s1):
_.alignw(i,j)
if s1[i] == s2[j] or j == -1:
i += 1
j += 1
else:
j = next_list[j]
if j == len(s2):
ans = i - len(s2)
break
return ans
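# Stripped of the alignw() animation calls, matchord and match above reduce to
# plain string searches. The sketch below is a sanity check that the naive and
# KMP versions agree; the hardcoded next array for "AGCT" is an assumption
# following the next[0] = -1 convention implied by the `j == -1` guard in match().

```python
def naive_match(t, p):
    # matchord without the animation call: naive O(n*m) scan.
    i, j = 0, 0
    n, m = len(t), len(p)
    while i < n and j < m:
        if t[i] == p[j]:
            i, j = i + 1, j + 1
        else:
            i, j = i - j + 1, 0
        if j == m:
            return i - j
    return -1

def kmp_match(t, p, next_list):
    # match without the animation call: KMP scan over a given next array.
    i = j = 0
    while i < len(t):
        if j == -1 or t[i] == p[j]:
            i, j = i + 1, j + 1
        else:
            j = next_list[j]
        if j == len(p):
            return i - len(p)
    return -1

# next array for "AGCT" under the next[0] = -1 convention (assumed).
AGCT_NEXT = [-1, 0, 0, 0]
```

# Both searches should report the same first occurrence, e.g. index 4 for
# "AGCT" inside "TAGCAGCTGCA".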
| 30.20719 | 118 | 0.449215 | 3,850 | 31,929 | 3.497403 | 0.056104 | 0.017378 | 0.026736 | 0.028073 | 0.774229 | 0.754475 | 0.730561 | 0.72915 | 0.716673 | 0.708652 | 0 | 0.05152 | 0.396348 | 31,929 | 1,056 | 119 | 30.235795 | 0.647089 | 0 | 0 | 0.800654 | 0 | 0 | 0.023872 | 0.007644 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095861 | false | 0 | 0.002179 | 0 | 0.135076 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
45ccd4c8ee3c7889ccf5c6cefb898f24cf932ccd | 150 | py | Python | jira/jira_integration/doctype/jira_settings/test_jira_settings.py | hrwX/jira | f2d5f09584e246074199670d562591c933d07bb6 | [
"MIT"
] | null | null | null | jira/jira_integration/doctype/jira_settings/test_jira_settings.py | hrwX/jira | f2d5f09584e246074199670d562591c933d07bb6 | [
"MIT"
] | null | null | null | jira/jira_integration/doctype/jira_settings/test_jira_settings.py | hrwX/jira | f2d5f09584e246074199670d562591c933d07bb6 | [
"MIT"
] | null | null | null | # Copyright (c) 2021, Alyf GmbH and Contributors
# See license.txt
# import frappe
import unittest
class TestJiraSettings(unittest.TestCase):
pass
| 16.666667 | 48 | 0.78 | 19 | 150 | 6.157895 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.146667 | 150 | 8 | 49 | 18.75 | 0.882813 | 0.506667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b3139945a4e6ad11a8fd2459af9fb13d708aef7a | 5,393 | py | Python | client/tests/output_adapter_tests/test_slack_output_adapter.py | TheGuardianWolf/tellmefacts | 79968e3d4284e307cc5a12d5147006aa3ba2a2ca | [
"MIT"
] | null | null | null | client/tests/output_adapter_tests/test_slack_output_adapter.py | TheGuardianWolf/tellmefacts | 79968e3d4284e307cc5a12d5147006aa3ba2a2ca | [
"MIT"
] | null | null | null | client/tests/output_adapter_tests/test_slack_output_adapter.py | TheGuardianWolf/tellmefacts | 79968e3d4284e307cc5a12d5147006aa3ba2a2ca | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import pytest
from client.output import Slack
from chatterbot.conversation import Statement
from slackclient import SlackClient
@pytest.fixture()
def slack_adapter(mocker):
"""
Create and patch an output Slack adapter.
"""
# Patch methods in the slackclient library so that no real requests to
# Slack are made
mock_api_call = {'ok': True}
mocker.patch(
'slackclient.SlackClient.api_call', return_value=mock_api_call)
mocker.patch('slackclient.SlackClient.rtm_read')
mocker.patch('slackclient.SlackClient.rtm_send_message', autospec=True)
sc = SlackClient('xoxp-1234123412341234-12341234-1234')
s = Slack(slack_client=sc, bot_name='tellmefacts')
return s
class TestSlackOutputAdapter(object):
def test_slack(self, slack_adapter):
"""
Test object attributes.
"""
assert slack_adapter.default_channel == '#general'
def test_send_message_api(self, slack_adapter, monkeypatch):
"""
Test sending a message through the Slack Web API when RTM is not
connected, with the channel passed in directly.
"""
slack_adapter.send_message(Statement('hi'), 'abcd')
# Check whether the call had the correct side effects
assert slack_adapter.events.get('send').is_set()
assert slack_adapter.slack_client.api_call.called
# Check call args
args, kwargs = slack_adapter.slack_client.api_call.call_args
assert kwargs['text'] == 'hi'
assert kwargs['channel'] == 'abcd'
assert not kwargs['as_user']
# Clear send event as this method is a consumer of the event
slack_adapter.events.get('send').clear()
def test_send_message_rtm(self, slack_adapter, monkeypatch):
"""
Test sending a message through Slack RTM.
"""
# Pretend that websockets is connected
monkeypatch.setattr(slack_adapter.slack_client.server, 'websocket',
True)
slack_adapter.send_message(Statement('hi'), 'abcd')
# Check whether the call had the correct side effects
assert slack_adapter.events.get('send').is_set()
assert slack_adapter.slack_client.rtm_send_message.called
# Check call args
args, kwargs = slack_adapter.slack_client.rtm_send_message.call_args
assert kwargs['message'] == 'hi'
assert kwargs['channel'] == 'abcd'
# Clear send event as this method is a consumer of the event
slack_adapter.events.get('send').clear()
def test_process_response_api(self, slack_adapter, mocker, monkeypatch):
"""
Test sending a full response through Slack Web API after retrieving
channel data from the last input statement.
"""
# Create and set the chatbot object for this adapter to contain one
# last input statement with a known channel.
mock_sessions = mocker.Mock(conversation_sessions=mocker.Mock(
get=mocker.Mock(return_value=mocker.Mock(conversation=mocker.Mock(
get_last_input_statement=mocker.Mock(return_value=Statement(
'input', extra_data={'channel': 'abcd'})))))))
monkeypatch.setattr(slack_adapter, 'chatbot', mock_sessions)
# Test adapter echo
assert str(slack_adapter.process_response(Statement('test'))) == 'test'
# Check that the call produced the right side effects
assert slack_adapter.events.get('send').is_set()
assert slack_adapter.slack_client.api_call.called
# Check call args
args, kwargs = slack_adapter.slack_client.api_call.call_args
assert kwargs['text'] == 'test'
assert kwargs['channel'] == 'abcd'
assert not kwargs['as_user']
# Clear send event as this method is a consumer of the event
slack_adapter.events.get('send').clear()
def test_process_response_rtm(self, slack_adapter, mocker, monkeypatch):
"""
Test sending a full response through Slack RTM after retrieving channel
data from the last input statement.
"""
# Create and set the chatbot object for this adapter to contain one
# last input statement with a known channel.
mock_sessions = mocker.Mock(conversation_sessions=mocker.Mock(
get=mocker.Mock(return_value=mocker.Mock(conversation=mocker.Mock(
get_last_input_statement=mocker.Mock(return_value=Statement(
'input', extra_data={'channel': 'abcd'})))))))
monkeypatch.setattr(slack_adapter, 'chatbot', mock_sessions)
# Pretend websockets is connected
monkeypatch.setattr(slack_adapter.slack_client.server, 'websocket',
True)
# Test adapter echo
assert str(slack_adapter.process_response(Statement('test'))) == 'test'
# Check that the call produced the right side effects
assert slack_adapter.events.get('send').is_set()
assert slack_adapter.slack_client.rtm_send_message.called
# Check call args
args, kwargs = slack_adapter.slack_client.rtm_send_message.call_args
assert kwargs['message'] == 'test'
assert kwargs['channel'] == 'abcd'
# Clear send event as this method is a consumer of the event
slack_adapter.events.get('send').clear() | 40.548872 | 79 | 0.66679 | 661 | 5,393 | 5.273828 | 0.183056 | 0.110155 | 0.048766 | 0.065978 | 0.80895 | 0.784854 | 0.784854 | 0.769076 | 0.769076 | 0.769076 | 0 | 0.007087 | 0.241239 | 5,393 | 133 | 80 | 40.548872 | 0.844819 | 0.255702 | 0 | 0.615385 | 0 | 0 | 0.097968 | 0.036217 | 0 | 0 | 0 | 0 | 0.323077 | 1 | 0.092308 | false | 0 | 0.061538 | 0 | 0.184615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b35579c295741a9a8f007749dc002e8f7a7d8717 | 119 | py | Python | autoencoda/__init__.py | j-abc/autoencoda | c892afe52a18c9f7fca61116459190ae59ea76a0 | [
"MIT"
] | null | null | null | autoencoda/__init__.py | j-abc/autoencoda | c892afe52a18c9f7fca61116459190ae59ea76a0 | [
"MIT"
] | 8 | 2019-06-16T20:19:21.000Z | 2022-02-10T00:22:38.000Z | autoencoda/__init__.py | j-abc/autoencoda | c892afe52a18c9f7fca61116459190ae59ea76a0 | [
"MIT"
] | 1 | 2019-09-17T22:07:32.000Z | 2019-09-17T22:07:32.000Z | from . import billboard_query
from . import ingest
from . import models
from . import predict
from . import preprocess
| 19.833333 | 29 | 0.789916 | 16 | 119 | 5.8125 | 0.5 | 0.537634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168067 | 119 | 5 | 30 | 23.8 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2fa3e9fe5bef8d298b339926ed1c589e5b1ccc4a | 76 | py | Python | test_demo.py | kansasvirtual/Pytest201 | 55542f969f1b42ca02c9ba7de4881d8fb8941e95 | [
"MIT"
] | null | null | null | test_demo.py | kansasvirtual/Pytest201 | 55542f969f1b42ca02c9ba7de4881d8fb8941e95 | [
"MIT"
] | null | null | null | test_demo.py | kansasvirtual/Pytest201 | 55542f969f1b42ca02c9ba7de4881d8fb8941e95 | [
"MIT"
] | null | null | null | def test_add():
assert demo.add(1, 2) == 3
def test_error():
pass
| 10.857143 | 30 | 0.578947 | 13 | 76 | 3.230769 | 0.769231 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053571 | 0.263158 | 76 | 6 | 31 | 12.666667 | 0.696429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.5 | true | 0.25 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
2fadd8a52ba6474897618ab03e7868ae5cba8343 | 35 | py | Python | tests/import/import3a.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 13,648 | 2015-01-01T01:34:51.000Z | 2022-03-31T16:19:53.000Z | tests/import/import3a.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 7,092 | 2015-01-01T07:59:11.000Z | 2022-03-31T23:52:18.000Z | tests/import/import3a.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 4,942 | 2015-01-02T11:48:50.000Z | 2022-03-31T19:57:10.000Z | from import1b import *
print(var)
| 8.75 | 22 | 0.742857 | 5 | 35 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.171429 | 35 | 3 | 23 | 11.666667 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
6405098ef44818e0023454fa214d77df06257295 | 76 | py | Python | scraper/main.py | PatchyVideo/PatchyVideo | cafbdfa34591d7292090d5e67bb633b974447b64 | [
"MIT"
] | 13 | 2020-06-04T00:25:24.000Z | 2022-03-31T13:12:17.000Z | scraper/main.py | PatchyVideo/PatchyVideo | cafbdfa34591d7292090d5e67bb633b974447b64 | [
"MIT"
] | 1 | 2021-01-03T04:17:45.000Z | 2021-02-07T14:19:04.000Z | scraper/main.py | PatchyVideo/PatchyVideo | cafbdfa34591d7292090d5e67bb633b974447b64 | [
"MIT"
] | null | null | null |
from .init import app
from . import postVideo
from . import postPlaylist
| 10.857143 | 26 | 0.763158 | 10 | 76 | 5.8 | 0.6 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197368 | 76 | 6 | 27 | 12.666667 | 0.95082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
642ecbee000a91c3ce840f3048535c7ba2b37fd7 | 31 | py | Python | ogb/nodeproppred/__init__.py | mufeili/ogb | 0190bb642e44fec976a9e0686663d1dc939fedd2 | [
"MIT"
] | 9 | 2019-07-21T18:00:27.000Z | 2020-08-21T08:26:30.000Z | ogb/nodeproppred/__init__.py | mufeili/ogb | 0190bb642e44fec976a9e0686663d1dc939fedd2 | [
"MIT"
] | 2 | 2019-10-30T09:05:56.000Z | 2020-09-18T10:41:34.000Z | ogb/nodeproppred/__init__.py | mufeili/ogb | 0190bb642e44fec976a9e0686663d1dc939fedd2 | [
"MIT"
] | 3 | 2019-07-22T15:04:11.000Z | 2021-06-21T09:38:56.000Z | from .evaluate import Evaluator | 31 | 31 | 0.870968 | 4 | 31 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6439beb248d4de7ff320993f92206801b605030b | 10,885 | py | Python | tests/testflows/rbac/tests/privileges/system/drop_cache.py | mcspring/ClickHouse | 08f713f177f950c2f675c2c75d1261c91066888c | [
"Apache-2.0"
] | 18 | 2021-05-29T01:12:33.000Z | 2021-11-18T12:34:48.000Z | tests/testflows/rbac/tests/privileges/system/drop_cache.py | mcspring/ClickHouse | 08f713f177f950c2f675c2c75d1261c91066888c | [
"Apache-2.0"
] | null | null | null | tests/testflows/rbac/tests/privileges/system/drop_cache.py | mcspring/ClickHouse | 08f713f177f950c2f675c2c75d1261c91066888c | [
"Apache-2.0"
] | 2 | 2021-07-13T06:42:45.000Z | 2021-07-21T13:47:22.000Z | from testflows.core import *
from testflows.asserts import error
from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors
@TestSuite
def dns_cache_privileges_granted_directly(self, node=None):
"""Check that a user is able to execute `SYSTEM DROP DNS CACHE` if and only if
they have `SYSTEM DROP DNS CACHE` privilege granted directly.
"""
user_name = f"user_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"):
Suite(run=dns_cache, flags=TE,
examples=Examples("privilege grant_target_name user_name", [
tuple(list(row)+[user_name,user_name]) for row in dns_cache.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestSuite
def dns_cache_privileges_granted_via_role(self, node=None):
"""Check that a user is able to execute `SYSTEM DROP DNS CACHE` if and only if
they have `SYSTEM DROP DNS CACHE` privilege granted via role.
"""
user_name = f"user_{getuid()}"
role_name = f"role_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"), role(node, f"{role_name}"):
with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")
Suite(run=dns_cache, flags=TE,
examples=Examples("privilege grant_target_name user_name", [
tuple(list(row)+[role_name,user_name]) for row in dns_cache.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestOutline(Suite)
@Requirements(
RQ_SRS_006_RBAC_Privileges_System_DropCache_DNS("1.0"),
)
@Examples("privilege",[
("SYSTEM",),
("SYSTEM DROP CACHE",),
("SYSTEM DROP DNS CACHE",),
("DROP CACHE",),
("DROP DNS CACHE",),
("SYSTEM DROP DNS",),
("DROP DNS",),
])
def dns_cache(self, privilege, grant_target_name, user_name, node=None):
"""Run checks for `SYSTEM DROP DNS CACHE` privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
with Scenario("SYSTEM DROP DNS CACHE without privilege"):
with When("I check the user is unable to execute SYSTEM DROP DNS CACHE"):
node.query("SYSTEM DROP DNS CACHE", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
with Scenario("SYSTEM DROP DNS CACHE with privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON *.* TO {grant_target_name}")
with Then("I check the user is able to execute SYSTEM DROP DNS CACHE"):
node.query("SYSTEM DROP DNS CACHE", settings = [("user", f"{user_name}")])
with Scenario("SYSTEM DROP DNS CACHE with revoked privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON *.* TO {grant_target_name}")
with And(f"I revoke {privilege} on the table"):
node.query(f"REVOKE {privilege} ON *.* FROM {grant_target_name}")
with Then("I check the user is unable to execute SYSTEM DROP DNS CACHE"):
node.query("SYSTEM DROP DNS CACHE", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
@TestSuite
def mark_cache_privileges_granted_directly(self, node=None):
"""Check that a user is able to execute `SYSTEM DROP MARK CACHE` if and only if
they have `SYSTEM DROP MARK CACHE` privilege granted directly.
"""
user_name = f"user_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"):
Suite(run=mark_cache, flags=TE,
examples=Examples("privilege grant_target_name user_name", [
tuple(list(row)+[user_name,user_name]) for row in mark_cache.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestSuite
def mark_cache_privileges_granted_via_role(self, node=None):
"""Check that a user is able to execute `SYSTEM DROP MARK CACHE` if and only if
they have `SYSTEM DROP MARK CACHE` privilege granted via role.
"""
user_name = f"user_{getuid()}"
role_name = f"role_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"), role(node, f"{role_name}"):
with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")
Suite(run=mark_cache, flags=TE,
examples=Examples("privilege grant_target_name user_name", [
tuple(list(row)+[role_name,user_name]) for row in mark_cache.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestOutline(Suite)
@Requirements(
RQ_SRS_006_RBAC_Privileges_System_DropCache_Mark("1.0"),
)
@Examples("privilege",[
("SYSTEM",),
("SYSTEM DROP CACHE",),
("SYSTEM DROP MARK CACHE",),
("DROP CACHE",),
("DROP MARK CACHE",),
("SYSTEM DROP MARK",),
("DROP MARKS",),
])
def mark_cache(self, privilege, grant_target_name, user_name, node=None):
"""Run checks for `SYSTEM DROP MARK CACHE` privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
with Scenario("SYSTEM DROP MARK CACHE without privilege"):
with When("I check the user is unable to execute SYSTEM DROP MARK CACHE"):
node.query("SYSTEM DROP MARK CACHE", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
with Scenario("SYSTEM DROP MARK CACHE with privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON *.* TO {grant_target_name}")
with Then("I check the user is able to execute SYSTEM DROP MARK CACHE"):
node.query("SYSTEM DROP MARK CACHE", settings = [("user", f"{user_name}")])
with Scenario("SYSTEM DROP MARK CACHE with revoked privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON *.* TO {grant_target_name}")
with And(f"I revoke {privilege} on the table"):
node.query(f"REVOKE {privilege} ON *.* FROM {grant_target_name}")
with Then("I check the user is unable to execute SYSTEM DROP MARK CACHE"):
node.query("SYSTEM DROP MARK CACHE", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
@TestSuite
def uncompressed_cache_privileges_granted_directly(self, node=None):
"""Check that a user is able to execute `SYSTEM DROP UNCOMPRESSED CACHE` if and only if
they have `SYSTEM DROP UNCOMPRESSED CACHE` privilege granted directly.
"""
user_name = f"user_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"):
Suite(run=uncompressed_cache, flags=TE,
examples=Examples("privilege grant_target_name user_name", [
tuple(list(row)+[user_name,user_name]) for row in uncompressed_cache.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestSuite
def uncompressed_cache_privileges_granted_via_role(self, node=None):
"""Check that a user is able to execute `SYSTEM DROP UNCOMPRESSED CACHE` if and only if
they have `SYSTEM DROP UNCOMPRESSED CACHE` privilege granted via role.
"""
user_name = f"user_{getuid()}"
role_name = f"role_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"), role(node, f"{role_name}"):
with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")
Suite(run=uncompressed_cache, flags=TE,
examples=Examples("privilege grant_target_name user_name", [
tuple(list(row)+[role_name,user_name]) for row in uncompressed_cache.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestOutline(Suite)
@Requirements(
RQ_SRS_006_RBAC_Privileges_System_DropCache_Uncompressed("1.0"),
)
@Examples("privilege",[
("SYSTEM",),
("SYSTEM DROP CACHE",),
("SYSTEM DROP UNCOMPRESSED CACHE",),
("DROP CACHE",),
("DROP UNCOMPRESSED CACHE",),
("SYSTEM DROP UNCOMPRESSED",),
("DROP UNCOMPRESSED",),
])
def uncompressed_cache(self, privilege, grant_target_name, user_name, node=None):
"""Run checks for `SYSTEM DROP UNCOMPRESSED CACHE` privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
with Scenario("SYSTEM DROP UNCOMPRESSED CACHE without privilege"):
with When("I check the user is unable to execute SYSTEM DROP UNCOMPRESSED CACHE"):
node.query("SYSTEM DROP UNCOMPRESSED CACHE", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
with Scenario("SYSTEM DROP UNCOMPRESSED CACHE with privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON *.* TO {grant_target_name}")
with Then("I check the user is able to execute SYSTEM DROP UNCOMPRESSED CACHE"):
node.query("SYSTEM DROP UNCOMPRESSED CACHE", settings = [("user", f"{user_name}")])
with Scenario("SYSTEM DROP UNCOMPRESSED CACHE with revoked privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON *.* TO {grant_target_name}")
with And(f"I revoke {privilege} on the table"):
node.query(f"REVOKE {privilege} ON *.* FROM {grant_target_name}")
with Then("I check the user is unable to execute SYSTEM DROP UNCOMPRESSED CACHE"):
node.query("SYSTEM DROP UNCOMPRESSED CACHE", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
@TestFeature
@Name("system drop cache")
@Requirements(
RQ_SRS_006_RBAC_Privileges_System_DropCache("1.0"),
)
def feature(self, node="clickhouse1"):
"""Check the RBAC functionality of SYSTEM DROP CACHE.
"""
self.context.node = self.context.cluster.node(node)
Suite(run=dns_cache_privileges_granted_directly, setup=instrument_clickhouse_server_log)
Suite(run=dns_cache_privileges_granted_via_role, setup=instrument_clickhouse_server_log)
Suite(run=mark_cache_privileges_granted_directly, setup=instrument_clickhouse_server_log)
Suite(run=mark_cache_privileges_granted_via_role, setup=instrument_clickhouse_server_log)
Suite(run=uncompressed_cache_privileges_granted_directly, setup=instrument_clickhouse_server_log)
Suite(run=uncompressed_cache_privileges_granted_via_role, setup=instrument_clickhouse_server_log)
# cloudnetpy/categorize/__init__.py (saveriogzz/cloudnetpy, MIT)
from .datasource import DataSource
from .categorize import generate_categorize
from .radar import Radar
# pyccx/bc/__init__.py (drlukeparry/pyccx, BSD-2-Clause)
from .boundarycondition import BoundaryCondition, BoundaryConditionType, Acceleration, Film, Fixed, Force, HeatFlux, Pressure, Radiation
# src/pynumerals/__init__.py (numeralbank/pynumerals, Apache-2.0)
__version__ = "1.0.0.dev0"
from pynumerals.errorcheck import * # noqa: F401, F403
from pynumerals.mappings import * # noqa: F401, F403
from pynumerals.numerals_html import * # noqa: F401, F403
from pynumerals.numerals_utils import * # noqa: F401, F403
from pynumerals.process_html import * # noqa: F401, F403
from pynumerals.value_parser import * # noqa: F401, F403
# algoneer/result/__init__.py (algoneer/algoneer-py, MIT)
from .algorithm_result import AlgorithmResult
from .model_result import ModelResult
from .datapoint_model_result import DatapointModelResult
from .dataset_result import DatasetResult
from .dataset_model_result import DatasetModelResult
from .result import Result
from .result_collection import ResultCollection
# discordbot.py (Kuraplayz04/kuradayobot_heroku, MIT)
# Load discord.py
import discord
import os

# Needed to connect to Discord
client = discord.Client(activity=discord.Game(name='青鬼基幹システム v1.7'))

# Bot access token; read it from the environment instead of hard-coding a live token
TOKEN = os.environ["DISCORD_BOT_TOKEN"]
@client.event
async def on_member_join(member):
    channel = client.get_channel(825992683701010450)
    await channel.send(f'{member} joined on {member.joined_at}')
# Runs when the bot starts up
@client.event
async def on_ready():
    print('ログインしました')
# Every command below does the same thing: delete the triggering message, wait
# for the author's follow-up message, relay it to a tier channel, and confirm
# with an embed. The original code repeated that block nine times; the same
# behaviour is kept here in a command table instead.
MENTION = '<@&825992683465080845>'
EVENT_INFO = ')\nサーバー: EventServer\nID:KuraPlayz04\n\nver1.16.2'
TIER_CHANNELS = {1: 825992683701010451, 2: 825992683701010450, 3: 825992683701010449}
TIER_COLOURS = {1: discord.Colour.purple(), 2: discord.Colour.red(), 3: discord.Colour.dark_blue()}

# command prefix -> (announcement head, announcement tail, embed description, tier)
COMMANDS = {}
for tier in (1, 2, 3):
    COMMANDS[f"/f{tier}"] = (MENTION + '\n青鬼ごっこやります。(', EVENT_INFO,
                             f"Tier{tier}チャット\nメンションアナウンス", tier)
    COMMANDS[f"/n{tier}"] = ('\n次どぞ(', ')', f"Tier{tier}チャット\nネクストアナウンス", tier)
    COMMANDS[f"/l{tier}"] = ('\nラストどうぞ(', ')', f"Tier{tier}チャット\nラストアナウンス", tier)
# the original /l3 handler omitted the leading newline; preserved as-is
COMMANDS["/l3"] = ('ラストどうぞ(', ')', "Tier3チャット\nラストアナウンス", 3)

# Runs whenever a message is received
@client.event
async def on_message(message):
    # Ignore messages from bots (including this one)
    if message.author.bot:
        return

    # Accept only the next message from the same author
    def check(msg):
        return msg.author == message.author

    for prefix, (head, tail, description, tier) in COMMANDS.items():
        if message.content.startswith(prefix):
            await message.delete()
            # Wait for the author's follow-up message, then relay it
            wait_message = await client.wait_for("message", check=check)
            channel = client.get_channel(TIER_CHANNELS[tier])
            await channel.send(head + wait_message.content + tail)
            embed = discord.Embed(title="アナウンス完了!", description=description,
                                  color=TIER_COLOURS[tier])
            await message.channel.send(embed=embed)
            break
# Run the bot
client.run(TOKEN)
# notebook_image_tabs/__init__.py (oscar6echo/notebook-image-tabs, MIT)
from .viewer import ImageTabs
# testing/test_mappo.py (rallen10/ergo_particle_gym, FSFULLR/FSFUL)
#!/usr/bin/env python
# suite of unit, integration, system, and/or acceptance tests for the MAPPO
# implementation in rl_algorithms/mappo.py (plus helpers imported from train.py).
# To run the tests, simply call:
#
# in a shell with conda environment ergo_particle_gym activated:
#   nosetests test_mappo.py
#
# in ipython:
#   run test_mappo.py
import sys
import os.path
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
import unittest
import numpy as np
import tensorflow as tf
from gym import spaces
from numpy.random import rand
from train import OrderingException, DeepMLP
from collections import namedtuple
from rl_algorithms.mappo import PPOAgentComputer, PPOGroupTrainer, UpdateException, redistributed_softmax, central_critic_network
import rl_algorithms.maddpg.maddpg.common.tf_util as U
from rl_algorithms.baselines.baselines.common import explained_variance
# from particle_environments.mager.world import MortalAgent
_DEBUG = False
if _DEBUG:
    import matplotlib.pyplot as plt
class TestPPOAgentComputer1(unittest.TestCase):
    '''test PPOAgentComputer class from mappo.py
    '''

    def setUp(self):
        pass

    def test_process_individual_agent_episode_returns_and_advantages_1(self):
        '''one-step return and advantage calculation with float rewards'''
        Model = namedtuple('Model', ['value'])
        Args = namedtuple('Args', ['max_episode_len', 'gamma'])
        value_func = lambda obs, M: sum(obs)
        model = Model(value_func)
        gamma = 1.0
        args = Args(1, gamma)
        ppo_agent = PPOAgentComputer(name="ppo_agent_0", model=model,
                                     obs_shape_n=None, act_space_n=None, agent_index=0,
                                     args=args, local_q_func=None, lam=1.0)
        ppo_agent.mbi_observations = [np.array([0.52141883, -0.66102998]),
                                      np.array([-0.39118867, -0.08772333])]
        ppo_agent.mbi_rewards = [0.0]
        ppo_agent.mbi_obs_values = [-0.13961115000000002]
        ppo_agent.mbi_dones = [False, True]
        ppo_agent.mbi_actions = [np.random.uniform(-1, 1, 2)]
        ppo_agent.mbi_neglogp_actions = [np.random.uniform(0, 1)]
        ppo_agent.mbi_healths = [1.0]
        ppo_agent.process_individual_agent_episode_returns_and_advantages(
            factual_values=None, counterfactual_values=None)

        # check return and advantage
        self.assertAlmostEqual(ppo_agent.mbi_returns[0], 0.0)
        self.assertAlmostEqual(ppo_agent.mbi_factual_advantages[0], 0.13961115000000002)
    def test_process_individual_agent_episode_returns_and_advantages_3(self):
        '''mappo: two-step return and advantage calculation'''
        Model = namedtuple('Model', ['value'])
        Args = namedtuple('Args', ['max_episode_len', 'gamma'])
        value_func = lambda obs, M: np.mean(obs)
        model = Model(value_func)
        gamma = 0.9627477525841408
        lam = 0.9447698026141256
        args = Args(2, gamma)
        ppo_agent = PPOAgentComputer(name="ppo_agent_0", model=model,
                                     obs_shape_n=None, act_space_n=None, agent_index=0,
                                     args=args, local_q_func=None, lam=lam)
        ppo_agent.mbi_observations = [np.array([0.4660721, -3.39177499]),
                                      np.array([-4.13104788, -4.52925146]),
                                      np.array([3.16713255, -2.30391816])]
        ppo_agent.mbi_rewards = [-0.71486004, -1.92588795]
        ppo_agent.mbi_obs_values = [value_func(ppo_agent.mbi_observations[0], M=None),
                                    value_func(ppo_agent.mbi_observations[1], M=None)]
        ppo_agent.mbi_dones = [False, False, True]
        ppo_agent.mbi_actions = [np.random.uniform(-1, 1, 2), np.random.uniform(-1, 1, 2)]
        ppo_agent.mbi_neglogp_actions = [np.random.uniform(0, 1), np.random.uniform(0, 1)]
        ppo_agent.mbi_healths = [1.0, 1.0]

        # calculate expected values
        delta_1 = -1.92588795 - np.mean([-4.13104788, -4.52925146])
        exp_returns_1 = -1.92588795
        exp_advantages_1 = delta_1
        delta_0 = -0.71486004 + gamma * np.mean([-4.13104788, -4.52925146]) - np.mean([0.4660721, -3.39177499])
        exp_advantages_0 = delta_0 + gamma * lam * delta_1
        exp_returns_0 = exp_advantages_0 + np.mean([0.4660721, -3.39177499])

        # check return and advantage
        ppo_agent.process_individual_agent_episode_returns_and_advantages(
            factual_values=None, counterfactual_values=None)
        self.assertAlmostEqual(ppo_agent.mbi_returns[1], exp_returns_1, places=5)
        self.assertAlmostEqual(ppo_agent.mbi_factual_advantages[1], exp_advantages_1, places=5)
        self.assertAlmostEqual(ppo_agent.mbi_returns[0], exp_returns_0, places=5)
        self.assertAlmostEqual(ppo_agent.mbi_factual_advantages[0], exp_advantages_0, places=5)
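The expected values computed in the test above follow the standard GAE(lambda) recursion, `delta_t = r_t + gamma*V(s_{t+1}) - V(s_t)` and `A_t = delta_t + gamma*lam*A_{t+1}`, with the terminal bootstrap value taken as zero. A minimal, framework-free sketch of that recursion (independent of the `PPOAgentComputer` implementation; the function name is illustrative):

```python
import numpy as np

def gae_advantages_and_returns(rewards, values, last_value, gamma, lam):
    """GAE(lambda) advantages and returns for one finished episode.

    rewards: r_0..r_{T-1}; values: V(s_0)..V(s_{T-1}); last_value: bootstrap
    value for s_T (0.0 for a terminal state).
    """
    T = len(rewards)
    vals = np.append(np.asarray(values, dtype=float), last_value)
    advantages = np.zeros(T)
    last_adv = 0.0
    for t in reversed(range(T)):
        # TD residual: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * vals[t + 1] - vals[t]
        # GAE recursion: A_t = delta_t + gamma * lam * A_{t+1}
        last_adv = delta + gamma * lam * last_adv
        advantages[t] = last_adv
    # returns used as the value-function regression target
    returns = advantages + vals[:-1]
    return advantages, returns
```

With gamma = lam = 1 the recursion telescopes and the returns reduce to undiscounted reward-to-go sums, which is the property the extended-sequence test later in this file relies on.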
    def test_process_individual_agent_episode_returns_and_advantages_4(self):
        '''mappo: error handling for multi-step batch with inconsistent dones'''
        Model = namedtuple('Model', ['value'])
        Args = namedtuple('Args', ['max_episode_len', 'gamma'])
        value_func = lambda obs, M: np.mean(obs)
        model = Model(value_func)
        gamma = 0.9627477525841408
        lam = 0.9447698026141256
        args = Args(2, gamma)
        ppo_agent = PPOAgentComputer(name="ppo_agent_0", model=model,
                                     obs_shape_n=None, act_space_n=None, agent_index=0,
                                     args=args, local_q_func=None, lam=lam)
        ppo_agent.mbi_observations = [np.array([0.4660721, -3.39177499]),
                                      np.array([-4.13104788, -4.52925146]),
                                      np.array([3.16713255, -2.30391816])]
        ppo_agent.mbi_rewards = [-0.71486004, -1.92588795]
        ppo_agent.mbi_obs_values = [value_func(ppo_agent.mbi_observations[0], M=None),
                                    value_func(ppo_agent.mbi_observations[1], M=None)]
        ppo_agent.mbi_dones = [False, True, False]
        ppo_agent.mbi_actions = [np.random.uniform(-1, 1, 2), np.random.uniform(-1, 1, 2)]
        ppo_agent.mbi_neglogp_actions = [np.random.uniform(0, 1), np.random.uniform(0, 1)]
        ppo_agent.mbi_healths = [1.0, 1.0]

        # check error is raised
        with self.assertRaises(UpdateException):
            ppo_agent.process_individual_agent_episode_returns_and_advantages(
                factual_values=None, counterfactual_values=None)
    def test_process_individual_agent_episode_returns_and_advantages_5(self):
        '''mappo: extended sequence returns don't depend on value func'''
        Model = namedtuple('Model', ['value'])
        Args = namedtuple('Args', ['max_episode_len', 'gamma'])
        value_func = lambda obs, M: np.mean(obs)
        reward_func = lambda obs: np.sum(obs)
        model = Model(value_func)
        gamma = 1.0
        lam = 1.0
        args = Args(10, gamma)
        ppo_agent = PPOAgentComputer(name="ppo_agent_0", model=model,
                                     obs_shape_n=None, act_space_n=None, agent_index=0,
                                     args=args, local_q_func=None, lam=lam)
        ppo_agent.mbi_observations = [np.array([-0.61322181, 0.60141474]),
                                      np.array([-0.68131643, -0.46429067]),
                                      np.array([-0.32310118, -0.21411603]),
                                      np.array([0.59954657, -0.09719427]),
                                      np.array([0.20816313, 0.15251241]),
                                      np.array([0.14608069, 0.69522925]),
                                      np.array([-0.03096035, 0.10213929]),
                                      np.array([0.66119021, -0.69454451]),
                                      np.array([-0.69480874, 0.09734647]),
                                      np.array([0.74504277, 0.20447294]),
                                      np.array([0.16639411, 0.67739031])]
        ppo_agent.mbi_dones = 11 * [False]
        ppo_agent.mbi_dones[-1] = True
        ppo_agent.mbi_actions = [np.random.uniform(-1, 1, 2) for _ in range(10)]
        ppo_agent.mbi_neglogp_actions = list(np.random.uniform(0, 1, 10))
        ppo_agent.mbi_healths = list(np.ones(10))
        for obs in ppo_agent.mbi_observations[:-1]:
            ppo_agent.mbi_rewards.append(reward_func(obs))
            ppo_agent.mbi_obs_values.append(value_func(obs, M=None))

        # check returns
        ppo_agent.process_individual_agent_episode_returns_and_advantages(
            factual_values=None, counterfactual_values=None)
        for i, ret in enumerate(ppo_agent.mbi_returns):
            self.assertAlmostEqual(ret, np.sum([reward_func(obs) for obs in ppo_agent.mbi_observations[i:-1]]), places=5)
class TestPPOGroupTrainer1(unittest.TestCase):
    '''test PPOGroupTrainer class from mappo.py
    '''

    def setUp(self):
        '''the with tf.Graph().as_default()... command allows for multiple calls to setUp
        without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
        '''
        with tf.Graph().as_default(), tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default():
            # create trainer that would live in a simple 1D environment
            # with 1D continuous observations and actions
            # and single-step episodes
            self.group_trainer = PPOGroupTrainer(
                n_agents=2,
                obs_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
                act_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
                n_steps_per_episode=1, ent_coef=0.0, local_actor_learning_rate=3e-4, vf_coef=0.5,
                num_layers=2, num_units=64, activation='tanh', cliprange=0.2, shared_reward=False,
                critic_type='distributed_local_observations', central_critic_model=None,
                central_critic_learning_rate=None, central_critic_num_units=None,
                joint_state_space_len=3 * 4, max_grad_norm=0.5, n_opt_epochs=4,
                n_episodes_per_batch=1, n_minibatches=1)

            # overwrite model value estimator with simple pass-through function
            # to simplify testing
            self.group_trainer.local_actor_critic_model.value = lambda obs, M: obs

            # Populate the group with stripped-out versions of agents
            Args = namedtuple('Args', ['max_episode_len', 'gamma'])
            args = Args(1, 0.99)
            self.agent_0 = PPOAgentComputer(
                name="agent_0",
                model=self.group_trainer.local_actor_critic_model,
                obs_shape_n=None, act_space_n=None,
                agent_index=0, args=args, local_q_func=None)
            self.agent_1 = PPOAgentComputer(
                name="agent_1",
                model=self.group_trainer.local_actor_critic_model,
                obs_shape_n=None, act_space_n=None,
                agent_index=1, args=args, local_q_func=None)
            self.group_trainer.update_agent_trainer_group([self.agent_0, self.agent_1])

            # give agents artificially, randomly generated experience
            self.agent_0.mbi_observations = [np.array([-0.78438007]), np.array([-0.62432])]
            self.agent_0.mbi_rewards = [-0.78438007]
            self.agent_0.mbi_obs_values = [-0.78438007]  # value func just passes through input (ie observations)
            self.agent_0.mbi_actions = [np.array([-0.90892982])]
            self.agent_0.mbi_dones = [False, True]
            self.agent_0.mbi_neglogp_actions = [0.0]
            self.agent_0.mbi_healths = [0.0]
            self.agent_1.mbi_observations = [np.array([0.03254343]), np.array([0.24190804])]
            self.agent_1.mbi_rewards = [0.03254343]
            self.agent_1.mbi_obs_values = [0.03254343]  # value func just passes through input (ie observations)
            self.agent_1.mbi_actions = [np.array([-0.61390828])]
            self.agent_1.mbi_dones = [False, True]
            self.agent_1.mbi_neglogp_actions = [0.0]
            self.agent_1.mbi_healths = [0.0]

    def tearDown(self):
        '''Don't actually tearDown the tf graph
        Note: it may seem tempting to use tf.reset_default_graph(), but this
        causes an error in subsequent setUp calls with something to do with
        op: NoOp ... is not an element of this graph
        Instead use the with tf.Graph().as_default()... in setUp
        '''
        pass

    def test_process_individual_agent_episode_returns_and_advantages_1(self):
        '''mappo: one-step with zero advantage'''
        self.agent_0.process_individual_agent_episode_returns_and_advantages(factual_values=None, counterfactual_values=None)
        self.assertAlmostEqual(self.agent_0.mbi_returns[0], -0.78438007, places=5)
        self.assertAlmostEqual(self.agent_0.mbi_factual_advantages[0], 0.0, places=5)
        self.agent_1.process_individual_agent_episode_returns_and_advantages(factual_values=None, counterfactual_values=None)
        self.assertAlmostEqual(self.agent_1.mbi_returns[0], 0.03254343, places=5)
        self.assertAlmostEqual(self.agent_0.mbi_factual_advantages[0], 0.0, places=5)

    def test_update_group_policy_1(self):
        '''mappo: smoke test - update_group_policy without throwing an error'''
        self.assertEqual(len(self.group_trainer.agent_trainer_group[0].mbi_rewards), 1)
        self.assertEqual(len(self.group_trainer.agent_trainer_group[1].mbi_rewards), 1)
        self.group_trainer.update_group_policy(terminal=1)
        self.assertEqual(len(self.group_trainer.agent_trainer_group[0].mbi_rewards), 0)
        self.assertEqual(len(self.group_trainer.agent_trainer_group[1].mbi_rewards), 0)
class TestPPOGroupTrainer2(unittest.TestCase):
    '''test PPOGroupTrainer class from mappo.py
    '''

    def setUp(self):
        '''the with tf.Graph().as_default()... command allows for multiple calls to setUp
        without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
        '''
        with tf.Graph().as_default(), tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default():
            # create trainer that would live in a simple 1D environment
            # with 1D continuous observations and actions
            # and multi-step episodes
            self.episode_len = 5
            self.group_trainer = PPOGroupTrainer(
                n_agents=3,
                obs_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
                act_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
                n_steps_per_episode=self.episode_len, ent_coef=0.0, local_actor_learning_rate=3e-4, vf_coef=0.5,
                num_layers=2, num_units=64, activation='tanh', cliprange=0.2,
                n_episodes_per_batch=10, shared_reward=False,
                critic_type='distributed_local_observations', central_critic_model=None,
                central_critic_learning_rate=None, central_critic_num_units=None,
                joint_state_space_len=3 * 4, max_grad_norm=0.5, n_opt_epochs=4, n_minibatches=4)

            # overwrite model value estimator with simple pass-through function
            # to simplify testing
            self.group_trainer.local_actor_critic_model.value = lambda obs, M: obs

            # Populate the group with stripped-out versions of agents
            Args = namedtuple('Args', ['max_episode_len', 'gamma'])
            args = Args(self.episode_len, 0.99)
            self.agent_0 = PPOAgentComputer(
                name="agent_0",
                model=self.group_trainer.local_actor_critic_model,
                obs_shape_n=None, act_space_n=None,
                agent_index=0, args=args, local_q_func=None, lam=1.0)
            self.agent_1 = PPOAgentComputer(
                name="agent_1",
                model=self.group_trainer.local_actor_critic_model,
                obs_shape_n=None, act_space_n=None,
                agent_index=1, args=args, local_q_func=None, lam=1.0)
            self.agent_2 = PPOAgentComputer(
                name="agent_2",
                model=self.group_trainer.local_actor_critic_model,
                obs_shape_n=None, act_space_n=None,
                agent_index=2, args=args, local_q_func=None, lam=1.0)
            self.group_trainer.update_agent_trainer_group([self.agent_0, self.agent_1, self.agent_2])

    def tearDown(self):
        '''Don't actually tearDown the tf graph
        Note: it may seem tempting to use tf.reset_default_graph(), but this
        causes an error in subsequent setUp calls with something to do with
        op: NoOp ... is not an element of this graph
        Instead use the with tf.Graph().as_default()... in setUp
        '''
        pass

    def test_iterative_update_group_policy_1(self):
        '''mappo: run several iterations of update_group_policy calls and check minibatch sizes'''
        for ep in range(10):
            # for each episode, the group batch data should grow by number of agents
            self.assertEqual(len(self.group_trainer.batch_observations),
                             self.group_trainer.n_agents * self.group_trainer.n_steps_per_episode * ep)
            self.assertEqual(len(self.group_trainer.batch_observations),
                             len(self.group_trainer.batch_factual_values))
            self.assertEqual(len(self.group_trainer.batch_observations),
                             len(self.group_trainer.batch_counterfactual_values))
            self.assertEqual(len(self.group_trainer.batch_observations),
                             len(self.group_trainer.batch_returns))
            self.assertEqual(len(self.group_trainer.batch_observations),
                             len(self.group_trainer.batch_actions))
            self.assertEqual(len(self.group_trainer.batch_observations),
                             len(self.group_trainer.batch_neglogp_actions))
            self.assertEqual(len(self.group_trainer.batch_observations),
                             len(self.group_trainer.batch_dones))

            for ag in self.group_trainer.agent_trainer_group:
                for step in range(5):
                    ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
                    ag.mbi_actions.append(np.random.uniform(-1., +1., 1))
                    ag.mbi_rewards.append(np.random.uniform(0, +1.))
                    ag.mbi_obs_values.append(np.random.uniform(0, +1.))
                    ag.mbi_dones.append(False)
                    ag.mbi_neglogp_actions.append(-np.log(np.random.uniform(0, 1)))
                    ag.mbi_healths.append(1.0)
                ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
                ag.mbi_dones.append(True)

            self.group_trainer.update_group_policy(terminal=1)
            self.assertEqual(len(self.group_trainer.agent_trainer_group[0].mbi_rewards), 0)
            self.assertEqual(len(self.group_trainer.agent_trainer_group[1].mbi_rewards), 0)
            self.assertEqual(len(self.group_trainer.agent_trainer_group[2].mbi_rewards), 0)

        # after 10 episodes, a policy update should have occurred and cleared the group
        # minibatch
        self.assertEqual(len(self.group_trainer.batch_observations), 0)
        self.assertEqual(len(self.group_trainer.batch_factual_values), 0)
        self.assertEqual(len(self.group_trainer.batch_counterfactual_values), 0)
        self.assertEqual(len(self.group_trainer.batch_actions), 0)
        self.assertEqual(len(self.group_trainer.batch_returns), 0)
        self.assertEqual(len(self.group_trainer.batch_dones), 0)
        self.assertEqual(len(self.group_trainer.batch_neglogp_actions), 0)
        self.assertEqual(len(self.group_trainer.batch_effective_returns), 0)
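The accounting the test above verifies (batch growth by `n_agents * n_steps_per_episode` per episode, then a clear once `n_episodes_per_batch` episodes accumulate) can be sketched with a toy, hypothetical stand-in that assumes nothing about the real `PPOGroupTrainer`:

```python
class BatchAccumulator:
    """Toy stand-in (hypothetical, for illustration only) for the group
    trainer's batch bookkeeping: the batch grows by n_agents * n_steps
    entries per episode and is cleared once n_episodes_per_batch episodes
    have been collected, at which point a full batch is handed back."""

    def __init__(self, n_agents, n_steps, n_episodes_per_batch):
        self.n_agents = n_agents
        self.n_steps = n_steps
        self.n_episodes_per_batch = n_episodes_per_batch
        self.observations = []
        self.episodes = 0

    def end_episode(self, episode_obs):
        # one episode contributes n_agents * n_steps transitions
        assert len(episode_obs) == self.n_agents * self.n_steps
        self.observations.extend(episode_obs)
        self.episodes += 1
        if self.episodes == self.n_episodes_per_batch:
            # a full batch triggers an update and a reset
            batch = self.observations
            self.observations, self.episodes = [], 0
            return batch
        return None
```

With `n_agents=3`, `n_steps=5`, `n_episodes_per_batch=10` this mirrors the sizes asserted in the loop above: nine episodes grow the batch to 135 entries, and the tenth returns a 150-entry batch and leaves the accumulator empty.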
    def test_multi_agent_returns_1(self):
        '''mappo: equal returns when shared rewards and lambda=1, regardless of individual value estimates'''
        n_episodes = 10
        for ep in range(n_episodes):
            # Generate true global state of system
            state = np.zeros(len(self.group_trainer.agent_trainer_group))
            for step in range(self.episode_len):
                for ag_ind, ag in enumerate(self.group_trainer.agent_trainer_group):
                    ag.mbi_observations.append(np.random.normal(state[ag_ind], 0.1, 1))
                    ag.mbi_actions.append(np.random.normal(1.0, 0.1, 1))
                    ag.mbi_obs_values.append(np.random.normal(ag.mbi_observations[-1][0], 10.0))
                    ag.mbi_dones.append(False)
                    ag.mbi_neglogp_actions.append(-np.log(np.random.uniform(0, 1)))
                    ag.mbi_healths.append(1.0)
                    # update state
                    state[ag_ind] += ag.mbi_actions[-1][0]
                # calculate reward:
                reward = np.mean(state)
                for ag_ind, ag in enumerate(self.group_trainer.agent_trainer_group):
                    ag.mbi_rewards.append(reward)
                if step == self.episode_len - 1:
                    for ag_ind, ag in enumerate(self.group_trainer.agent_trainer_group):
                        ag.mbi_observations.append(np.random.normal(state[ag_ind], 0.1, 1))
                        ag.mbi_dones.append(True)

            # test that returns are same for all agents
            self.agent_0.process_individual_agent_episode_returns_and_advantages(factual_values=None, counterfactual_values=None)
            self.agent_1.process_individual_agent_episode_returns_and_advantages(factual_values=None, counterfactual_values=None)
            self.agent_2.process_individual_agent_episode_returns_and_advantages(factual_values=None, counterfactual_values=None)
            for step in range(self.episode_len):
                # rewards
                self.assertAlmostEqual(self.agent_0.mbi_rewards[step], self.agent_1.mbi_rewards[step], places=5)
                self.assertAlmostEqual(self.agent_0.mbi_rewards[step], self.agent_2.mbi_rewards[step], places=5)
                # returns
                self.assertAlmostEqual(self.agent_0.mbi_returns[step], self.agent_1.mbi_returns[step], places=5)
                self.assertAlmostEqual(self.agent_0.mbi_returns[step], self.agent_2.mbi_returns[step], places=5)

            # reset for next episode (not actually calling training)
            self.agent_0.clear_individual_agent_episode_data()
            self.agent_1.clear_individual_agent_episode_data()
            self.agent_2.clear_individual_agent_episode_data()
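Under a shared reward with lam = 1, the GAE recursion telescopes so each agent's return is the discounted reward-to-go of the common reward stream, and the private value estimates drop out entirely; that is why the test above can perturb each agent's values with sigma = 10 noise and still expect identical returns. A small framework-free sketch of the reward-to-go sum (function name illustrative):

```python
def discounted_returns(rewards, gamma):
    """Reward-to-go: G_t = r_t + gamma * G_{t+1}, with G_T = 0."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

# With gamma = 1 the reward-to-go is a plain suffix sum; since every agent
# sees the same shared reward stream, every agent gets the same returns.
assert discounted_returns([1.0, 2.0, 3.0], 1.0) == [6.0, 5.0, 3.0]
```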
def test_multi_agent_returns_2(self):
'''mappo: equal returns, rewards, and advantages when values centralized'''
n_episodes = 10
for ep in range(n_episodes):
# Generate true global state of system
state = np.zeros(len(self.group_trainer.agent_trainer_group))
central_values = np.zeros(self.episode_len+1)
for step in range(self.episode_len):
for ag_ind, ag in enumerate(self.group_trainer.agent_trainer_group):
ag.mbi_observations.append(np.random.normal(state[ag_ind], 0.1, 1))
ag.mbi_actions.append(np.random.normal(1.0, 0.1, 1))
ag.mbi_obs_values.append(np.random.normal(ag.mbi_observations[-1][0], 10.0))
ag.mbi_dones.append(False)
ag.mbi_neglogp_actions.append(-np.log(np.random.uniform(0,1)))
ag.mbi_healths.append(1.0)
if step ==self.episode_len-1:
ag.mbi_observations.append(np.random.normal(state[ag_ind], 0.1, 1))
ag.mbi_dones.append(True)
# update state
state[ag_ind] += ag.mbi_actions[-1][0]
# calculate reward:
reward = np.mean(state)
for ag_ind, ag in enumerate(self.group_trainer.agent_trainer_group):
ag.mbi_rewards.append(reward)
# calculate centralized values
central_values[step] = np.mean([ag.mbi_obs_values[step] for ag in self.group_trainer.agent_trainer_group])
# if step == self.episode_len-1:
# for ag_ind, ag in enumerate(self.group_trainer.agent_trainer_group):
# ag.mbi_observations.append(np.random.normal(state[ag_ind], 0.1, 1))
# ag.mbi_dones.append(True)
# central_values[step+1] = np.mean([np.random.normal(ag.mbi_observations[-1][0], 10.0) for ag in self.group_trainer.agent_trainer_group])
# test that returns, advantages are same for all agents with centralized values
self.agent_0.process_individual_agent_episode_returns_and_advantages(factual_values=central_values, counterfactual_values=None)
self.agent_1.process_individual_agent_episode_returns_and_advantages(factual_values=central_values, counterfactual_values=None)
self.agent_2.process_individual_agent_episode_returns_and_advantages(factual_values=central_values, counterfactual_values=None)
for step in range(self.episode_len):
# rewards
self.assertAlmostEqual(self.agent_0.mbi_rewards[step], self.agent_1.mbi_rewards[step], places=5)
self.assertAlmostEqual(self.agent_0.mbi_rewards[step], self.agent_2.mbi_rewards[step], places=5)
# returns
self.assertAlmostEqual(self.agent_0.mbi_returns[step], self.agent_1.mbi_returns[step], places=5)
self.assertAlmostEqual(self.agent_0.mbi_returns[step], self.agent_2.mbi_returns[step], places=5)
# advantages
self.assertAlmostEqual(self.agent_0.mbi_factual_advantages[step], self.agent_1.mbi_factual_advantages[step], places=5)
self.assertAlmostEqual(self.agent_0.mbi_factual_advantages[step], self.agent_2.mbi_factual_advantages[step], places=5)
# values
self.assertAlmostEqual(self.agent_0.mbi_factual_values[step], self.agent_1.mbi_factual_values[step], places=5)
self.assertAlmostEqual(self.agent_0.mbi_factual_values[step], self.agent_2.mbi_factual_values[step], places=5)
# reset for next episode (not actually calling training)
self.agent_0.clear_individual_agent_episode_data()
self.agent_1.clear_individual_agent_episode_data()
self.agent_2.clear_individual_agent_episode_data()
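test_multi_agent_returns_2 relies on discounted returns being a deterministic function of the shared reward stream, so agents that log identical rewards must produce identical returns. A minimal numpy sketch of the backward recursion (`discounted_returns` is a hypothetical helper, not part of mappo.py):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    # backward recursion: G_t = r_t + gamma * G_{t+1}
    G = 0.0
    out = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        out[t] = G
    return out

shared = [1.0, 0.5, 2.0]
# two agents logging the same reward stream get identical returns
assert np.allclose(discounted_returns(shared), discounted_returns(list(shared)))
```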
def test_multi_agent_heuristic_credit_assignment_1(self):
'''mappo: heuristic credits: all agents receive equal credit if return equals return mean and all actions same probability'''
# change shared_reward to true for this test
self.group_trainer.shared_reward = True
self.group_trainer.crediting_algorithm = 'batch_mean_deviation_heuristic'
for trial in range(10):
# generate random reward history that each agent will have for every episode
common_reward_history = np.random.normal(0,10, self.group_trainer.n_steps_per_episode)
# generate random action probability that all agent use for given step
common_neglogp_actions = -np.log(np.random.uniform(0,1,
(self.group_trainer.n_episodes_per_batch, self.group_trainer.n_steps_per_episode)))
for ep in range(self.group_trainer.n_episodes_per_batch):
# check size of batch is growing appropriately
self.assertEqual(len(self.group_trainer.batch_observations),
self.group_trainer.n_agents*self.group_trainer.n_steps_per_episode*ep)
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_factual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_counterfactual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_returns))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_neglogp_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_dones))
for ag in self.group_trainer.agent_trainer_group:
for step in range(self.group_trainer.n_steps_per_episode):
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_actions.append(np.random.uniform(-1., +1., 1))
ag.mbi_rewards.append(common_reward_history[step])
ag.mbi_obs_values.append(np.random.uniform(0, +1.))
ag.mbi_dones.append(False)
ag.mbi_neglogp_actions.append(common_neglogp_actions[ep][step])
ag.mbi_healths.append(1.0)
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_dones.append(True)
if ep < self.group_trainer.n_episodes_per_batch - 1:
self.group_trainer.update_group_policy(terminal=1)
self.assertEqual(len(self.group_trainer.agent_trainer_group[0].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[1].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[2].mbi_rewards), 0)
else:
# don't actually run final update call, call batch_credit_assignment instead
break
# format batch and run credit assignment
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
self.group_trainer.process_episode_returns_and_store_group_training_batch(episode_factual_values, episode_counterfactual_values)
self.group_trainer.process_episode_clear_data()
crediting_info = self.group_trainer.batch_credit_assignment()
return_stds = crediting_info[1]
credit_scale = crediting_info[2]
# check that every agent is receiving the same credit
self.assertEqual(len(self.group_trainer.batch_effective_returns), self.group_trainer.n_data_per_batch)
for ep in range(self.group_trainer.n_episodes_per_batch):
for step in range(self.group_trainer.n_steps_per_episode):
self.assertAlmostEqual(return_stds[step], 0.0, places=5)
self.assertAlmostEqual(credit_scale[ep][step], 0.0, places=5)
for ag in range(self.group_trainer.n_agents):
batch_index = (ep*self.group_trainer.n_agents + ag) * self.group_trainer.n_steps_per_episode + step
self.assertAlmostEqual(self.group_trainer.batch_neglogp_actions[batch_index],
common_neglogp_actions[ep][step], places=5)
self.assertAlmostEqual(self.group_trainer.batch_effective_returns[batch_index],
self.group_trainer.batch_returns[batch_index]/float(self.group_trainer.n_agents),
places=5)
# execute training to refresh batch data
self.group_trainer.execute_group_training()
def test_multi_agent_heuristic_credit_assignment_2(self):
'''mappo: heuristic credits: one agent receives all the credit when its action probability is much larger and returns equal the mean'''
# change shared_reward to true for this test
self.group_trainer.shared_reward = True
self.group_trainer.crediting_algorithm = 'batch_mean_deviation_heuristic'
for trial in range(10):
# generate random reward history that each agent will have for every episode
common_reward_history = np.random.normal(0,10, self.group_trainer.n_steps_per_episode)
# generate random action probabilities with one agent receiving high prob and others low
high_neglogp_actions = -np.log(np.random.uniform(0.999,1,
(self.group_trainer.n_episodes_per_batch, self.group_trainer.n_steps_per_episode)))
low_neglogp_actions = -np.log(np.random.uniform(0,0.001,
(self.group_trainer.n_episodes_per_batch, self.group_trainer.n_steps_per_episode)))
# pick a random agent to receive high-probability actions
lucky_agent = np.random.randint(self.group_trainer.n_agents,
size=(self.group_trainer.n_episodes_per_batch,))
for ep in range(self.group_trainer.n_episodes_per_batch):
# check size of batch is growing appropriately
self.assertEqual(len(self.group_trainer.batch_observations),
self.group_trainer.n_agents*self.group_trainer.n_steps_per_episode*ep)
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_factual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_counterfactual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_returns))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_neglogp_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_dones))
for ag_ind, ag in enumerate(self.group_trainer.agent_trainer_group):
for step in range(self.group_trainer.n_steps_per_episode):
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_actions.append(np.random.uniform(-1., +1., 1))
ag.mbi_rewards.append(common_reward_history[step])
ag.mbi_obs_values.append(np.random.uniform(0, +1.))
ag.mbi_dones.append(False)
ag.mbi_healths.append(1.0)
if ag_ind == lucky_agent[ep]:
ag.mbi_neglogp_actions.append(high_neglogp_actions[ep][step])
else:
ag.mbi_neglogp_actions.append(low_neglogp_actions[ep][step])
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_dones.append(True)
if ep < self.group_trainer.n_episodes_per_batch - 1:
self.group_trainer.update_group_policy(terminal=1)
self.assertEqual(len(self.group_trainer.agent_trainer_group[0].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[1].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[2].mbi_rewards), 0)
else:
# don't actually run final update call, call batch_credit_assignment instead
break
# format batch and run credit assignment
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
self.group_trainer.process_episode_returns_and_store_group_training_batch(episode_factual_values, episode_counterfactual_values)
self.group_trainer.process_episode_clear_data()
crediting_info = self.group_trainer.batch_credit_assignment()
return_stds = crediting_info[1]
credit_scale = crediting_info[2]
# check that one agent receives almost all the credit
self.assertEqual(len(self.group_trainer.batch_effective_returns), self.group_trainer.n_data_per_batch)
for ep in range(self.group_trainer.n_episodes_per_batch):
for step in range(self.group_trainer.n_steps_per_episode):
self.assertAlmostEqual(return_stds[step], 0.0, places=5)
self.assertAlmostEqual(credit_scale[ep][step], 0.0, places=5)
for ag in range(self.group_trainer.n_agents):
batch_index = (ep*self.group_trainer.n_agents + ag) * self.group_trainer.n_steps_per_episode + step
tol = abs(self.group_trainer.batch_returns[batch_index])/10.0
if ag == lucky_agent[ep]:
self.assertAlmostEqual(self.group_trainer.batch_effective_returns[batch_index],
self.group_trainer.batch_returns[batch_index], delta=tol)
else:
self.assertAlmostEqual(self.group_trainer.batch_effective_returns[batch_index], 0.0, delta=tol)
# execute training to refresh batch data
self.group_trainer.execute_group_training()
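The pair of crediting tests above pins down two behaviours: an equal split of the shared return when all action probabilities match, and near-total credit to the one high-probability agent otherwise. Both are consistent with a probability-weighted split of the shared return. The sketch below is an assumption inferred from the tests, not the actual batch_mean_deviation_heuristic implementation; `probability_weighted_credit` is a hypothetical helper:

```python
import numpy as np

def probability_weighted_credit(shared_return, neglogp_actions):
    # split a shared return across agents in proportion to each
    # agent's action probability, recovered as exp(-neglogp)
    probs = np.exp(-np.asarray(neglogp_actions))
    return shared_return * probs / probs.sum()

# equal probabilities -> equal split (return / n_agents)
assert np.allclose(probability_weighted_credit(9.0, [0.5, 0.5, 0.5]), 3.0)
```

With one neglogp near zero (probability near 1) and the rest very large (probability near 0), the first agent's share approaches the full return, matching the delta-tolerance checks in test 2.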
def test_multi_agent_heuristic_credit_assignment_3(self):
'''mappo: No crediting: check that returns equal credits when no crediting applied'''
# disable credit assignment for this test
self.group_trainer.crediting_algorithm = None
for trial in range(10):
# generate random reward history that each agent will have for every episode
common_reward_history = np.random.normal(0,10, self.group_trainer.n_steps_per_episode)
for ep in range(10):
# for each episode, the group batch data should grow by number of agents
self.assertEqual(len(self.group_trainer.batch_observations),
self.group_trainer.n_agents*self.group_trainer.n_steps_per_episode*ep)
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_factual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_counterfactual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_returns))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_neglogp_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_dones))
for ag in self.group_trainer.agent_trainer_group:
for step in range(5):
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_actions.append(np.random.uniform(-1., +1., 1))
ag.mbi_rewards.append(common_reward_history[step])
ag.mbi_obs_values.append(np.random.uniform(0, +1.))
ag.mbi_dones.append(False)
ag.mbi_neglogp_actions.append(-np.log(np.random.uniform(0,1)))
ag.mbi_healths.append(1.0)
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_dones.append(True)
if ep < self.group_trainer.n_episodes_per_batch - 1:
self.group_trainer.update_group_policy(terminal=1)
self.assertEqual(len(self.group_trainer.agent_trainer_group[0].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[1].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[2].mbi_rewards), 0)
else:
# don't actually run final update call, call batch_credit_assignment instead
break
# format batch and run credit assignment
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
self.group_trainer.process_episode_returns_and_store_group_training_batch(episode_factual_values, episode_counterfactual_values)
self.group_trainer.process_episode_clear_data()
self.group_trainer.batch_credit_assignment()
# credit_scale = crediting_info[2]
# check that effective returns equal the raw returns when no crediting is applied
self.assertEqual(len(self.group_trainer.batch_effective_returns), self.group_trainer.n_data_per_batch)
for ep in range(self.group_trainer.n_episodes_per_batch):
for step in range(self.group_trainer.n_steps_per_episode):
# self.assertAlmostEqual(credit_scale[ep][step], 0.0, places=5)
expected_credit = self.group_trainer.batch_effective_returns[ep*self.group_trainer.n_agents*self.group_trainer.n_steps_per_episode+step]
for ag in range(self.group_trainer.n_agents):
batch_index = (ep*self.group_trainer.n_agents + ag) * self.group_trainer.n_steps_per_episode + step
# tol = abs(self.group_trainer.batch_returns[batch_index])/10.0
self.assertAlmostEqual(
self.group_trainer.batch_effective_returns[batch_index],
self.group_trainer.batch_returns[batch_index], places=4)
self.assertAlmostEqual(
self.group_trainer.batch_effective_returns[batch_index],
expected_credit, places=4)
# execute training to refresh batch data
self.group_trainer.execute_group_training()
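The crediting tests above all index the flattened group batch via `(ep*n_agents + ag)*n_steps_per_episode + step`. A small sketch confirming the episode-major, then agent, then step layout that formula assumes (the layout is an inference from these tests, not from mappo.py itself):

```python
# enumerate a batch in the order the tests assume it is flattened
n_episodes, n_agents, n_steps = 2, 3, 5
flat = [(ep, ag, step)
        for ep in range(n_episodes)
        for ag in range(n_agents)
        for step in range(n_steps)]

def batch_index(ep, ag, step):
    # same formula as used in the assertions above
    return (ep * n_agents + ag) * n_steps + step

assert flat[batch_index(1, 2, 3)] == (1, 2, 3)
```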
class TestCentralCriticNetwork1(unittest.TestCase):
''' test central_critic_network class from mappo.py
'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default() as self.setup_graph, tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default() as self.setup_sess:
self.test_n_training_iterations = 1000
self.test_n_data_per_batch = 100
self.test_num_layers = 2
self.test_num_units = 8
self.test_activation = 'tanh'
self.test_learning_rate = 1e-2
self.test_input_size = 1
self.test_cliprange = 0.2
joint_state_stamped_ph = [U.BatchInput((self.test_input_size, ), name="joint_state").get()]
deep_mlp = DeepMLP(num_layers=self.test_num_layers, activation=self.test_activation)
self.central_vf_value, self.central_vf_train, self.central_vf_debug = central_critic_network(
inputs_placeholder_n=joint_state_stamped_ph,
v_func=deep_mlp.deep_mlp_model,
optimizer=tf.train.AdamOptimizer(learning_rate=self.test_learning_rate),
scope = "joint_state_critic",
num_units=self.test_num_units,
grad_norm_clipping=0.5
)
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def test_central_critic_network_constant_target(self):
'''mappo: central critic learning constant target value'''
# randomly generated but fixed constant target, regardless of input
const_target = 8.245529015329097
# in order to make calls to the central value function, we need to operate within the tf session
# and initialize variables
with self.setup_sess:
self.setup_sess.run(tf.global_variables_initializer())
for train_iter in range(self.test_n_training_iterations):
# create individual training batch of random input but fixed target
training_feed = [[], [], [], []]
for i in range(self.test_n_data_per_batch):
rand_input = np.random.uniform(-1., +1., self.test_input_size)
training_feed[0].append(rand_input)
training_feed[1].append(const_target)
training_feed[2].append(self.central_vf_value(np.expand_dims(rand_input, axis=0))[0])
training_feed[3] = self.test_cliprange
# call train and update target network
central_vf_loss = self.central_vf_train(*training_feed)
# check that value estimate has converged to const_target
test_vals = []
for test_iter in range(1000):
test_vals.append(self.central_vf_value(np.expand_dims(np.random.uniform(-1., +1., self.test_input_size),axis=0)))
# print("test mean = {} | test std = {}".format(np.mean(test_vals), np.std(test_vals)))
self.assertAlmostEqual(np.mean(test_vals), const_target, places=3)
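The training feed ordering above (observations, targets, old value predictions, cliprange) suggests central_vf_train implements a PPO-style clipped value loss. A sketch of the standard clipped formulation, offered as an assumption about the op's internals (`clipped_value_loss` is a hypothetical stand-in, not the network's actual loss tensor):

```python
import numpy as np

def clipped_value_loss(v_pred, v_old, returns, cliprange=0.2):
    # penalize moving the value estimate too far from the old prediction:
    # take the worse of the clipped and unclipped squared errors
    v_clipped = v_old + np.clip(v_pred - v_old, -cliprange, cliprange)
    loss_unclipped = (v_pred - returns) ** 2
    loss_clipped = (v_clipped - returns) ** 2
    return 0.5 * np.mean(np.maximum(loss_unclipped, loss_clipped))

# a perfect, unchanged prediction incurs zero loss
assert clipped_value_loss(np.array([1.0]), np.array([1.0]), np.array([1.0])) == 0.0
```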
class TestCentralCriticNetwork2(unittest.TestCase):
''' test central_critic_network class from mappo.py
'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default() as self.setup_graph, tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default() as self.setup_sess:
self.test_n_training_iterations = 1000
self.test_n_data_per_batch = 128
self.test_num_layers = 4
self.test_num_units = 64
self.test_activation = 'elu'
self.test_learning_rate = 1e-3
self.test_input_size = 1
self.test_test_size = 10000
self.test_cliprange = 0.2
joint_state_stamped_ph = [U.BatchInput((self.test_input_size, ), name="joint_state").get()]
deep_mlp = DeepMLP(num_layers=self.test_num_layers, activation=self.test_activation)
self.central_vf_value, self.central_vf_train, self.central_vf_debug = central_critic_network(
inputs_placeholder_n=joint_state_stamped_ph,
v_func=deep_mlp.deep_mlp_model,
optimizer=tf.train.AdamOptimizer(learning_rate=self.test_learning_rate),
scope = "joint_state_critic",
num_units=self.test_num_units,
grad_norm_clipping=0.5
)
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def test_central_critic_network_periodic_target(self):
'''mappo: central critic learning periodic function (this may take a while)'''
# sinusoidal target function
periodic_target = lambda x: np.sin(x)
# in order to make calls to the central value function, we need to operate within the tf session
# and initialize variables
with self.setup_sess:
self.setup_sess.run(tf.global_variables_initializer())
central_vf_loss = []
central_vf_expvar = []
for train_iter in range(self.test_n_training_iterations):
# create individual training batch of random input but fixed target
training_feed = [[], [], [], []]
for i in range(self.test_n_data_per_batch):
rand_input = np.random.uniform(-10., +10., self.test_input_size)
training_feed[0].append(rand_input)
training_feed[1].append(periodic_target(rand_input)[0])
training_feed[2].append(self.central_vf_value(np.expand_dims(rand_input, axis=0))[0])
training_feed[3] = self.test_cliprange
# call train and update target network
central_vf_loss.append(self.central_vf_train(*training_feed))
central_vf_expvar.append(explained_variance(self.central_vf_value(training_feed[0]), np.asarray(training_feed[1])))
if _DEBUG:
rand_in = np.random.uniform(-10., +10., self.test_input_size)
val_est = self.central_vf_value(np.expand_dims(rand_in,axis=0))
val_tar = periodic_target(rand_in[0])
example_diff = val_est - val_tar
print("iter {} | in={:5.2f} | tar={:5.2f} | est={:7.3f} | diff={:7.3f} | loss={:7.3E} | expln var={:7.3E}".format(
train_iter,
rand_in[0],
val_tar,
val_est[0],
example_diff[0],
central_vf_loss[-1],
central_vf_expvar[-1]
))
if _DEBUG:
ti = np.arange(self.test_n_training_iterations)
plt.plot(ti, central_vf_loss, ti, central_vf_expvar)
plt.xlabel('training iteration')
plt.ylabel('value loss & explained variance')
plt.legend(['value loss', 'explained variance'])
plt.show()
# check value loss has converged to expected level (based on empirical testing)
self.assertLessEqual(np.mean(central_vf_loss[-int(self.test_n_training_iterations*.005):]), 5e-3)
self.assertGreaterEqual(np.mean(central_vf_expvar[-int(self.test_n_training_iterations*.005):]), 0.975)
# check that value estimate has converged
test_vals = [[],[],[],[]]
for test_iter in range(self.test_test_size):
test_vals[0].append(np.random.uniform(-10., +10., self.test_input_size))
test_vals[1].append(self.central_vf_value(np.expand_dims(test_vals[0][-1],axis=0)))
test_vals[2].append(periodic_target(test_vals[0][-1]))
test_vals[3].append(test_vals[1][-1] - test_vals[2][-1])
# print("test mean = {} | test std = {}".format(np.mean(test_vals), np.std(test_vals)))
self.assertAlmostEqual(np.mean(test_vals[3]), 0.0, places=1)
self.assertLessEqual(np.std(test_vals[3]), 0.1)
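The explained_variance used above conventionally reports 1 - Var[y - y_pred]/Var[y], so 1.0 is a perfect fit and 0.0 means the predictor does no better than a constant. A minimal numpy sketch of that metric (hypothetical helper name, mirroring the baselines utility):

```python
import numpy as np

def explained_variance_sketch(y_pred, y_true):
    # 1 - Var[residual] / Var[target]; 1.0 is a perfect fit
    var_y = np.var(y_true)
    return np.nan if var_y == 0 else 1.0 - np.var(y_true - y_pred) / var_y

y = np.sin(np.linspace(-10., 10., 100))
assert np.isclose(explained_variance_sketch(y, y), 1.0)
```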
class TestCentralCriticNetwork3(unittest.TestCase):
''' test central_critic_network class from mappo.py
'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default() as self.setup_graph, tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default() as self.setup_sess:
self.test_n_training_iterations = 1000
self.test_n_data_per_batch = 128
self.test_num_layers = 4
self.test_num_units = 64
self.test_activation = 'elu'
self.test_learning_rate = 1e-3
self.test_agent_state_len = 5
self.test_n_agents = 4
self.test_input_size = 1 + self.test_agent_state_len*self.test_n_agents
self.test_test_size = 10000
self.test_cliprange = 0.2
joint_state_stamped_ph = [U.BatchInput((self.test_input_size, ), name="joint_state").get()]
deep_mlp = DeepMLP(num_layers=self.test_num_layers, activation=self.test_activation)
self.central_vf_value, self.central_vf_train, self.central_vf_debug = central_critic_network(
inputs_placeholder_n=joint_state_stamped_ph,
v_func=deep_mlp.deep_mlp_model,
optimizer=tf.train.AdamOptimizer(learning_rate=self.test_learning_rate),
scope = "joint_state_critic",
num_units=self.test_num_units,
grad_norm_clipping=0.5
)
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def test_central_critic_network_terminated_target(self):
'''mappo: central critic learning nonlinear terminated target similar to XOR (this may take a while)'''
# target depends nonlinearly on how many agents are terminated (XOR-like)
def terminated_target(s):
# reward if only one agent is terminated
n_term = sum(s[self.test_agent_state_len::self.test_agent_state_len])
if np.isclose(n_term, 1.0):
return s[0]
else:
return 0.0
def gen_rand_input():
rand_input = [np.random.randint(50)+1]
for agsi in range(1, self.test_input_size, self.test_agent_state_len):
rand_input.extend(np.random.uniform(-10., +10., self.test_agent_state_len-1))
rand_input.extend([np.random.randint(2)])
return rand_input
# in order to make calls to the central value function, we need to operate within the tf session
# and initialize variables
with self.setup_sess:
self.setup_sess.run(tf.global_variables_initializer())
central_vf_loss = []
central_vf_expvar = []
for train_iter in range(self.test_n_training_iterations):
# create individual training batch of random input but fixed target
training_feed = [[], [], [], []]
for i in range(self.test_n_data_per_batch):
rand_input = gen_rand_input()
training_feed[0].append(rand_input)
training_feed[1].append(terminated_target(rand_input))
training_feed[2].append(self.central_vf_value(np.expand_dims(rand_input, axis=0))[0])
training_feed[3] = self.test_cliprange
# call train and update target network
central_vf_loss.append(self.central_vf_train(*training_feed))
central_vf_expvar.append(explained_variance(self.central_vf_value(training_feed[0]), np.asarray(training_feed[1])))
if _DEBUG:
rand_in = gen_rand_input()
val_est = self.central_vf_value(np.expand_dims(rand_in,axis=0))
val_tar = terminated_target(rand_in)
example_diff = val_est - val_tar
print("iter {} | in={:5.2f} | tar={:5.2f} | est={:7.3f} | diff={:7.3f} | loss={:7.3E} | expln var={:7.3E}".format(
train_iter,
rand_in[0],
val_tar,
val_est[0],
example_diff[0],
central_vf_loss[-1],
central_vf_expvar[-1]
))
if _DEBUG:
ti = np.arange(self.test_n_training_iterations)
plt.plot(ti, central_vf_loss, ti, central_vf_expvar)
plt.xlabel('training iteration')
plt.ylabel('value loss & explained variance')
plt.legend(['value loss', 'explained variance'])
plt.show()
# check value loss and explained variance have converged to expected level (based on empirical testing)
self.assertLessEqual(np.mean(central_vf_loss[-int(self.test_n_training_iterations*.005):]), 2.0)
self.assertGreaterEqual(np.mean(central_vf_expvar[-int(self.test_n_training_iterations*.005):]), 0.975)
class TestPPOGroupTrainer3(unittest.TestCase):
''' test PPOGroupTrainer class from mappo.py
'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default(), tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default():
self.test_n_training_iterations = 1000
self.test_episode_len = 5
self.test_n_episodes_per_batch = 10
self.test_num_layers = 2
self.test_activation = 'tanh'
self.test_n_opt_epochs = 4
self.test_n_minibatches = 4
self.test_gamma = 0.99
self.test_joint_state_space_len = 1
deep_mlp = DeepMLP(num_layers=self.test_num_layers, activation=self.test_activation)
# create trainer that would live in a simple 1D environment
# with 1D continuous observations and actions
# and single step episodes
self.group_trainer = PPOGroupTrainer(
n_agents=3,
obs_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
act_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
n_steps_per_episode=self.test_episode_len, ent_coef=0.0, local_actor_learning_rate=3e-4, vf_coef=0.5,
num_layers=2, num_units=4, activation=self.test_activation, cliprange=0.2,
n_episodes_per_batch=self.test_n_episodes_per_batch, shared_reward=True,
critic_type='central_joint_state', central_critic_model=deep_mlp.deep_mlp_model,
central_critic_learning_rate=3e-4, central_critic_num_units=4, joint_state_space_len=self.test_joint_state_space_len,
max_grad_norm = 0.5, n_opt_epochs=self.test_n_opt_epochs, n_minibatches=self.test_n_minibatches)
# Populate the group with stripped out versions of agents
Args = namedtuple('Args', ['max_episode_len', 'gamma'])
args = Args(self.test_episode_len, self.test_gamma)
self.agent_0 = PPOAgentComputer(
name="agent_0",
model=self.group_trainer.local_actor_critic_model,
obs_shape_n=None, act_space_n=None,
agent_index=0, args=args, local_q_func=None, lam=1.0)
self.agent_1 = PPOAgentComputer(
name="agent_1",
model=self.group_trainer.local_actor_critic_model,
obs_shape_n=None, act_space_n=None,
agent_index=1, args=args, local_q_func=None, lam=1.0)
self.agent_2 = PPOAgentComputer(
name="agent_2",
model=self.group_trainer.local_actor_critic_model,
obs_shape_n=None, act_space_n=None,
agent_index=2, args=args, local_q_func=None, lam=1.0)
self.group_trainer.update_agent_trainer_group([self.agent_0, self.agent_1, self.agent_2])
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def nontest_execute_group_training_central_joint_state_critic_1(self):
'''mappo: (this test currently deprecated but not removed yet because using some of the code as a guide)
integration test of many functions to ensure central joint state critic converges given constant input
'''
self.assertTrue(False) # this test currently deprecated but not removed yet because using some of the code as a guide
const_reward = 8.245529015329097
# in order to make calls to the central value function, we need to operate within the tf session
# and initialize variables
# with self.group_trainer.sess:
# tf.global_variables_initializer()
with self.group_trainer.sess as sess:
sess.run(tf.global_variables_initializer())
training_loss_stats = []
for train_iter in range(self.test_n_training_iterations):
for ep in range(self.test_n_episodes_per_batch):
# for each episode, the group batch data should grow by number of agents*time steps
self.assertEqual(len(self.group_trainer.batch_observations),
self.group_trainer.n_agents*self.group_trainer.n_steps_per_episode*ep)
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_factual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_counterfactual_values))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_returns))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_neglogp_actions))
self.assertEqual(len(self.group_trainer.batch_observations),
len(self.group_trainer.batch_dones))
# for each episode, joint data should grow by number of time steps
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped),
(self.group_trainer.n_steps_per_episode+1)*ep)
# populate episode with random data, except rewards, those are constant
for ag in self.group_trainer.agent_trainer_group:
for step in range(self.test_episode_len):
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_actions.append(np.random.uniform(-1., +1., 1))
ag.mbi_rewards.append(const_reward) # only element that is constant, not randomly varying
ag.mbi_obs_values.append(np.random.uniform(0, +1.))
ag.mbi_dones.append(False)
ag.mbi_neglogp_actions.append(-np.log(np.random.uniform(0,1)))
ag.mbi_healths.append(1.0)
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_dones.append(True)
for step in range(self.test_episode_len+1):
# self.group_trainer.record_joint_state(np.array([
# np.random.uniform(-1., +1., 4), np.random.uniform(-1., +1., 4), np.random.uniform(-1., +1., 4)]))
self.group_trainer.record_joint_state(np.array([np.random.uniform(-1., +1., self.test_joint_state_space_len)]))
# self.group_trainer.update_group_policy(terminal=1)
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
self.group_trainer.process_episode_returns_and_store_group_training_batch(episode_factual_values, episode_counterfactual_values)
self.group_trainer.process_episode_clear_data()
# check that returns are always the same sequence, given the constant reward
cur_return = const_reward
for ep_step in range(self.test_episode_len):
self.assertAlmostEqual(self.group_trainer.batch_joint_state_stamped[ep_step][0], self.test_episode_len-ep_step)
self.assertAlmostEqual(self.group_trainer.batch_joint_returns[-ep_step-1], cur_return, places=5)
cur_return = const_reward + self.test_gamma*cur_return
# check that individuals' memories are properly cleared out
self.assertEqual(len(self.group_trainer.agent_trainer_group[0].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[1].mbi_rewards), 0)
self.assertEqual(len(self.group_trainer.agent_trainer_group[2].mbi_rewards), 0)
# after episodes per batch, update policy
self.group_trainer.batch_credit_assignment()
batch_loss_stats = self.group_trainer.execute_group_training()
training_loss_stats += [[self.test_episode_len*self.test_n_episodes_per_batch*(train_iter+1)] + L for L in batch_loss_stats]
self.assertEqual(len(self.group_trainer.batch_observations), 0)
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped), 0)
self.assertEqual(len(self.group_trainer.batch_joint_returns), 0)
self.assertEqual(len(self.group_trainer.batch_factual_values), 0)
self.assertEqual(len(self.group_trainer.batch_counterfactual_values), 0)
self.assertEqual(len(self.group_trainer.batch_actions), 0)
self.assertEqual(len(self.group_trainer.batch_returns), 0)
self.assertEqual(len(self.group_trainer.batch_dones), 0)
self.assertEqual(len(self.group_trainer.batch_neglogp_actions), 0)
self.assertEqual(len(self.group_trainer.batch_effective_returns), 0)
print("training iter {}: value at t = {}: {} | value at t = {}: {}".format(
train_iter, self.test_episode_len, self.group_trainer.central_vf_value(np.expand_dims(np.concatenate(([0], np.random.uniform(-1., +1., self.test_joint_state_space_len))),axis=0)),
0, self.group_trainer.central_vf_value(np.expand_dims(np.concatenate(([self.test_episode_len], np.random.uniform(-1., +1., self.test_joint_state_space_len))),axis=0))))
print(self.group_trainer.central_vf_value(np.expand_dims(np.concatenate(([self.test_episode_len], np.random.uniform(-1., +1., self.test_joint_state_space_len))),axis=0)))
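The constant-reward return check above relies on the backward recursion `cur_return = const_reward + gamma * cur_return`, which has a geometric-series closed form. A minimal standalone numpy sketch of that bookkeeping (helper name is illustrative, not part of the trainer API):

```python
import numpy as np

def constant_reward_returns(reward, gamma, n_steps):
    """Backward-accumulated discounted returns for a constant per-step reward."""
    returns = np.empty(n_steps)
    acc = 0.0
    for t in reversed(range(n_steps)):
        acc = reward + gamma * acc  # same recursion the test walks forward from the end
        returns[t] = acc
    return returns

rets = constant_reward_returns(1.0, 0.99, 5)
# Closed form of the same sequence: G_t = r * (1 - gamma^(n - t)) / (1 - gamma)
expected = np.array([(1 - 0.99 ** (5 - t)) / (1 - 0.99) for t in range(5)])
assert np.allclose(rets, expected)
```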
class TestPPOGroupTrainer_LocalCritic_NoCrediting_1(unittest.TestCase):
'''Unit tests for individual subroutines in PPOGroupTrainer'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default(), tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default():
self.group_trainer = PPOGroupTrainer(
n_agents=3,
obs_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
act_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
n_steps_per_episode=50, ent_coef=0.0, local_actor_learning_rate=3e-4, vf_coef=0.5,
num_layers=2, num_units=4, activation='tanh', cliprange=0.2,
n_episodes_per_batch=16, shared_reward=True,
critic_type='distributed_local_observations', central_critic_model=None,
central_critic_learning_rate=None, central_critic_num_units=None, joint_state_space_len=None,
max_grad_norm = 0.5, n_opt_epochs=4, n_minibatches=4)
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def test_process_episode_value_centralization_and_credit_assignment_1(self):
'''mappo:process_episode_value_centralization_and_credit_assignment: local critic, no crediting'''
# create trainer that would live in a simple 1D environment
# with 1D continuous observations and actions
# and single step episodes
# Populate the group with generic objects
self.group_trainer.update_agent_trainer_group([object, object, object])
# call the centralization and crediting function
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
# check outputs
self.assertTrue(episode_factual_values is None)
self.assertTrue(episode_counterfactual_values is None)
self.assertEqual(len(self.group_trainer.batch_joint_observations_stamped), 51)
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped), 51)
for i,_ in enumerate(self.group_trainer.batch_joint_observations_stamped):
self.assertTrue(self.group_trainer.batch_joint_observations_stamped[i] is None)
self.assertTrue(self.group_trainer.batch_joint_state_stamped[i] is None)
class TestPPOGroupTrainer_JointObserveCritic_NoCrediting_1(unittest.TestCase):
'''Unit tests for individual subroutines in PPOGroupTrainer'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default(), tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default():
# create trainer that would live in a simple 1D environment
# with 1D continuous observations and actions
# with parameters randomized when they are not important for this test
self.group_trainer = PPOGroupTrainer(
n_agents=np.random.randint(9)+2,
obs_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
act_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
n_steps_per_episode=50, ent_coef=np.random.rand(), local_actor_learning_rate=np.random.rand(), vf_coef=np.random.rand(),
num_layers=np.random.randint(15)+2, num_units=np.random.randint(255)+2, activation='tanh', cliprange=np.random.rand(),
n_episodes_per_batch=np.random.randint(1024)+1, shared_reward=True,
critic_type='central_joint_observations', central_critic_model=DeepMLP(num_layers=np.random.randint(16)+1, activation='tanh').deep_mlp_model,
central_critic_learning_rate=np.random.rand(), joint_state_space_len=np.random.randint(256)+1,
central_critic_num_units=np.random.randint(255)+2,
max_grad_norm = np.random.rand(), n_opt_epochs=np.random.randint(16)+1, n_minibatches=np.random.randint(16)+1)
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def test_process_episode_value_centralization_and_credit_assignment_1(self):
'''mappo:process_episode_value_centralization_and_credit_assignment: joint observations critic, no crediting'''
n_steps = self.group_trainer.n_steps_per_episode
n_agents = self.group_trainer.n_agents
# Overwrite central value function with simple, dummy value function
self.group_trainer.central_vf_value = lambda jnt_obs: [sum(sum(jnt_obs))]
# Populate the group with stripped out versions of agents with random observation
class DummyAgent(object):
def __init__(self, nsteps):
# self.mbi_observations = list(np.random.uniform(-1,1,group_trainer.n_steps_per_episode+1))
self.mbi_observations = [[np.random.uniform(-1,1)] for i in range(nsteps+1)]
agent_group = []
for i in range(self.group_trainer.n_agents):
agent_group.append(DummyAgent(n_steps))
self.group_trainer.update_agent_trainer_group(agent_group)
# call the centralization and crediting function
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
# check outputs
self.assertEqual(n_agents, self.group_trainer.n_agents)
self.assertEqual(len(episode_factual_values), n_steps+1)
self.assertEqual(len(episode_counterfactual_values), n_agents)
self.assertEqual(len(self.group_trainer.batch_joint_observations_stamped), n_steps+1)
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped), n_steps+1)
for i in range(n_steps+1):
self.assertEqual(len(self.group_trainer.batch_joint_observations_stamped[i]), n_agents+1)
self.assertAlmostEqual(self.group_trainer.batch_joint_observations_stamped[i][0], n_steps+1-i) # check time stamp
expect_value = n_steps+1 - i + sum([ag.mbi_observations[i][0] for ag in agent_group])
if i == n_steps: expect_value = 0.0
self.assertAlmostEqual(episode_factual_values[i], expect_value) # all equal without crediting
for agi in range(n_agents):
self.assertTrue(episode_counterfactual_values[agi][i] is None) # No crediting
class TestPPOGroupTrainer_JointStateCritic_NoCrediting_1(unittest.TestCase):
'''Unit tests for individual subroutines in PPOGroupTrainer'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default(), tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default():
# create trainer that would live in a simple 1D environment
# with 1D continuous observations and actions
# with parameters randomized when they are not important for this test
n_agents=np.random.randint(9)+2
self.group_trainer = PPOGroupTrainer(
n_agents=n_agents,
obs_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
act_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
n_steps_per_episode=50, ent_coef=np.random.rand(), local_actor_learning_rate=np.random.rand(), vf_coef=np.random.rand(),
num_layers=np.random.randint(8)+1, num_units=np.random.randint(63)+2, activation='tanh', cliprange=np.random.rand(),
n_episodes_per_batch=np.random.randint(63)+2, shared_reward=True,
critic_type='central_joint_state', central_critic_model=DeepMLP(num_layers=np.random.randint(8)+1, activation='tanh').deep_mlp_model,
central_critic_learning_rate=np.random.rand(), central_critic_num_units=np.random.randint(63)+2, joint_state_space_len=2*n_agents,
max_grad_norm = np.random.rand(), n_opt_epochs=np.random.randint(16)+1, n_minibatches=np.random.randint(16)+1, joint_state_entity_len=2)
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def test_process_episode_value_centralization_and_credit_assignment_1(self):
'''mappo:process_episode_value_centralization_and_credit_assignment: joint state critic, no crediting'''
n_steps = self.group_trainer.n_steps_per_episode
n_agents = self.group_trainer.n_agents
# Overwrite central value function with simple, dummy value function
self.group_trainer.central_vf_value = lambda jnt_obs: [sum(sum(jnt_obs))]
# Populate the group with stripped out versions of agents
class DummyAgent(object):
def __init__(self):
pass
agent_group = []
for i in range(self.group_trainer.n_agents):
agent_group.append(DummyAgent())
self.group_trainer.update_agent_trainer_group(agent_group)
# create randomized central state generator
self.group_trainer.episode_joint_state = [np.random.uniform(-1,1,n_agents) for i in range(n_steps+1)]
# call the centralization and crediting function
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
# check outputs
self.assertEqual(n_agents, self.group_trainer.n_agents)
self.assertEqual(len(episode_factual_values), n_steps+1)
self.assertEqual(len(episode_counterfactual_values), n_agents)
self.assertEqual(len(self.group_trainer.batch_joint_observations_stamped), n_steps+1)
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped), n_steps+1)
for i in range(n_steps+1):
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped[i]), n_agents+1)
self.assertAlmostEqual(self.group_trainer.batch_joint_state_stamped[i][0], n_steps+1-i) # check time stamp
expect_value = n_steps+1 - i + sum(self.group_trainer.episode_joint_state[i])
if i == n_steps: expect_value = 0.0
self.assertAlmostEqual(episode_factual_values[i], expect_value) # all equal without crediting
for agi in range(n_agents):
self.assertTrue(episode_counterfactual_values[agi][i] is None) # No crediting
def test_process_episode_subroutines_1(self):
'''mappo:process_episode_[subroutine]: joint state critic, no crediting'''
n_steps = self.group_trainer.n_steps_per_episode
n_agents = self.group_trainer.n_agents
n_episodes = self.group_trainer.n_episodes_per_batch
n_trials = 10
gamma = 0.99
lam = 1.0
# Overwrite central value function with simple, dummy value function
self.group_trainer.central_vf_value = lambda jnt_obs: [sum(sum(jnt_obs))]
# Establish args for stripped out versions of agents
Args = namedtuple('Args', ['max_episode_len', 'gamma'])
args = Args(n_steps, gamma)
for trial in range(n_trials):
# generate random reward history that each agent will have for every episode
common_reward_history = np.random.normal(0,10, n_steps)
# generate new group of agents
agent_group = []
for agi in range(n_agents):
# agent_group.append(DummyAgent(n_steps, gamma, lam))
agent_group.append(PPOAgentComputer(
name="agent_{}".format(agi),
model=self.group_trainer.local_actor_critic_model,
obs_shape_n=None, act_space_n=None,
agent_index=agi, args=args, local_q_func=None, lam=lam))
self.group_trainer.update_agent_trainer_group(agent_group)
for ep in range(n_episodes):
# for each episode, the group batch data should grow by number of agents
expect_len = n_agents*n_steps*ep
self.assertEqual(len(self.group_trainer.batch_observations), expect_len)
self.assertEqual(len(self.group_trainer.batch_factual_values), expect_len)
self.assertEqual(len(self.group_trainer.batch_counterfactual_values), expect_len)
self.assertEqual(len(self.group_trainer.batch_actions), expect_len)
self.assertEqual(len(self.group_trainer.batch_neglogp_actions), expect_len)
self.assertEqual(len(self.group_trainer.batch_dones), expect_len)
self.assertEqual(len(self.group_trainer.batch_returns), expect_len)
self.assertEqual(len(self.group_trainer.batch_joint_observations_stamped), ep*(n_steps+1))
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped), ep*(n_steps+1))
# fill agent history with random input, except rewards the same
for ag in self.group_trainer.agent_trainer_group:
for step in range(n_steps):
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_actions.append(np.random.uniform(-1., +1., 1))
ag.mbi_rewards.append(common_reward_history[step])
ag.mbi_obs_values.append(np.random.uniform(0, +1.))
ag.mbi_dones.append(False)
ag.mbi_neglogp_actions.append(-np.log(np.random.uniform(0,1)))
ag.mbi_healths.append(1.0)
ag.mbi_observations.append(np.random.uniform(-1., +1., 1))
ag.mbi_dones.append(True)
# create randomized central state generator
self.group_trainer.episode_joint_state = [np.random.uniform(-1,1,n_agents) for i in range(n_steps+1)]
# get episode baseline values
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
# check baseline values
self.assertEqual(len(episode_factual_values), self.group_trainer.n_steps_per_episode+1)
self.assertEqual(len(episode_counterfactual_values), self.group_trainer.n_agents)
for i in range(n_steps+1):
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped[i]), n_agents+1)
self.assertAlmostEqual(self.group_trainer.batch_joint_state_stamped[i][0], n_steps+1-i) # check time stamp
expect_value = n_steps+1 - i + sum(self.group_trainer.episode_joint_state[i])
if i == n_steps: expect_value = 0.0
self.assertAlmostEqual(episode_factual_values[i], expect_value) # all equal without crediting
for agi in range(n_agents):
self.assertTrue(episode_counterfactual_values[agi][i] is None) # No crediting
# calculate returns, advantages and store in batch
self.group_trainer.process_episode_returns_and_store_group_training_batch(episode_factual_values, episode_counterfactual_values)
# check episode and batch data
for agi, ag in enumerate(self.group_trainer.agent_trainer_group):
# with no crediting, returns and values should match batch_joint values
self.assertTrue(np.allclose(self.group_trainer.batch_joint_returns[-n_steps-1:], ag.mbi_returns))
s1 = -n_steps*(n_agents-agi)
s2 = -n_steps*(n_agents-agi-1) if -n_steps*(n_agents-agi-1) < 0 else None
self.assertTrue(np.allclose(self.group_trainer.batch_factual_values[s1:s2], ag.mbi_factual_values[:-1]))
self.assertTrue(np.allclose(self.group_trainer.batch_actions[s1:s2], ag.mbi_actions))
self.assertTrue(np.allclose(self.group_trainer.batch_returns[s1:s2], ag.mbi_returns[:-1]))
self.assertTrue(np.allclose(self.group_trainer.batch_neglogp_actions[s1:s2], ag.mbi_neglogp_actions))
self.assertTrue(np.allclose(self.group_trainer.batch_healths[s1:s2], ag.mbi_healths))
# clear episode data
self.group_trainer.process_episode_clear_data()
# check episode data cleared out
self.assertEqual(len(self.group_trainer.episode_joint_state), 0) # episode state cleared out
for ag in self.group_trainer.agent_trainer_group:
# each agent's episode data cleared
self.assertEqual(len(ag.mbi_observations), 0)
self.assertEqual(len(ag.mbi_actions), 0)
self.assertEqual(len(ag.mbi_rewards), 0)
self.assertEqual(len(ag.mbi_obs_values), 0)
self.assertEqual(len(ag.mbi_dones), 0)
self.assertEqual(len(ag.mbi_neglogp_actions), 0)
self.assertEqual(len(ag.mbi_healths), 0)
if ep == n_episodes - 1:
# clear out batch data (don't actually run any of the training functions,
# trying to keep this test more trimmed down)
# Clear out group batch
self.group_trainer.batch_observations = []
self.group_trainer.batch_joint_observations_stamped = []
self.group_trainer.batch_joint_state_stamped = []
self.group_trainer.batch_returns = []
self.group_trainer.batch_joint_returns = []
self.group_trainer.batch_effective_returns = []
self.group_trainer.batch_dones = []
self.group_trainer.batch_actions = []
self.group_trainer.batch_factual_values = []
self.group_trainer.batch_counterfactual_values = []
self.group_trainer.batch_effective_values = []
self.group_trainer.batch_neglogp_actions = []
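The `s1`/`s2` negative-index arithmetic used in the batch checks above slices one agent's most recent episode out of the group batch, where each episode appends the agents' step data in agent order. A standalone illustration of the same bookkeeping (values chosen for the example):

```python
n_agents, n_steps = 3, 4
# Batch tail laid out as [agent0 steps, agent1 steps, agent2 steps] for the last episode.
batch = [(agi, t) for agi in range(n_agents) for t in range(n_steps)]
for agi in range(n_agents):
    s1 = -n_steps * (n_agents - agi)
    # A stop of 0 would slice nothing, so it is mapped to None (slice to the end).
    s2 = -n_steps * (n_agents - agi - 1) or None
    assert batch[s1:s2] == [(agi, t) for t in range(n_steps)]
```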
class TestPPOGroupTrainer_JointStateCritic_TerminatedBaselineCrediting_1(unittest.TestCase):
'''Tests for individual subroutines in PPOGroupTrainer with joint-state critic and terminated baseline crediting'''
def setUp(self):
''' the with tf.Graph.as_default()... command allows for multiple calls to setUp
without causing variable scopes to "clash". See baselines/common/tests/util.py for examples
'''
with tf.Graph().as_default(), tf.Session(config=tf.ConfigProto(allow_soft_placement=True)).as_default():
# create trainer that would live in a simple 1D environment
# with 1D continuous observations and actions
# with parameters randomized when they are not important for this test
n_agents = np.random.randint(9)+2
self.entity_state_len = 5
self.group_trainer = PPOGroupTrainer(
n_agents=n_agents,
obs_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
act_space=spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
n_steps_per_episode=50, ent_coef=np.random.rand(), local_actor_learning_rate=np.random.rand(), vf_coef=np.random.rand(),
num_layers=np.random.randint(8)+1, num_units=np.random.randint(63)+2, activation='tanh', cliprange=np.random.rand(),
n_episodes_per_batch=np.random.randint(63)+2, shared_reward=True,
critic_type='central_joint_state', central_critic_model=DeepMLP(num_layers=np.random.randint(8)+1, activation='tanh').deep_mlp_model,
central_critic_learning_rate=np.random.rand(), joint_state_space_len=self.entity_state_len*n_agents,
central_critic_num_units=np.random.randint(63)+2,
max_grad_norm = np.random.rand(), n_opt_epochs=np.random.randint(16)+1, n_minibatches=np.random.randint(16)+1,
crediting_algorithm = 'terminated_baseline')
def tearDown(self):
'''Don't actually tearDown the tf graph
Note: it may seem tempting to use tf.reset_default_graph(), but this
causes an error in subsequent setUp calls with something to do with
op: NoOp ... is not an element of this graph
Instead use the with tf.Graph.as_default()... in setUp
'''
pass
def test_process_episode_value_centralization_and_credit_assignment_1(self):
'''mappo:process_episode_value_centralization_and_credit_assignment: joint state critic, terminated baseline crediting'''
n_steps = self.group_trainer.n_steps_per_episode
n_agents = self.group_trainer.n_agents
entity_state_len = self.entity_state_len
# Overwrite central value function with simple function that sums non-terminated states
# self.group_trainer.central_vf_value = lambda s: [sum([s1*(1-s2) for s1,s2 in zip(s[1::entity_state_len], s[5::entity_state_len])])]
def value_func(jss):
jss = jss[0] # strip off additional layer that is added in mappo
return [sum([s1*(1-s2) for s1,s2 in zip(jss[1::entity_state_len], jss[5::entity_state_len])])]
self.group_trainer.central_vf_value = value_func
# Populate the group with stripped out versions of agents
class DummyAgent(object):
def __init__(self):
pass
agent_group = []
jsl = []
for agi in range(self.group_trainer.n_agents):
agent_group.append(DummyAgent())
jsl.append("agent_{}".format(agi))
self.group_trainer.update_agent_trainer_group(agent_group)
self.group_trainer.joint_state_labels = jsl
# create randomized central state generator with all agents at full health
# self.group_trainer.episode_joint_state = [[None]*n_agents]*(n_steps+1)
self.group_trainer.episode_joint_state = [None]*(n_steps+1)
for i in range(n_steps+1):
cur_state = []
for agi in range(n_agents):
cur_state.extend(np.append(np.random.uniform(-1,1,entity_state_len-1), 0.0))
self.group_trainer.episode_joint_state[i] = cur_state
# call the centralization and crediting function
episode_factual_values, episode_counterfactual_values = self.group_trainer.process_episode_value_centralization_and_credit_assignment()
# check outputs
self.assertEqual(n_agents, self.group_trainer.n_agents)
self.assertEqual(len(episode_factual_values), n_steps+1)
self.assertEqual(len(episode_counterfactual_values), n_agents)
self.assertEqual(len(self.group_trainer.batch_joint_observations_stamped), n_steps+1)
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped), n_steps+1)
for i in range(n_steps+1):
self.assertEqual(len(self.group_trainer.batch_joint_state_stamped[i]), entity_state_len*n_agents+1)
self.assertAlmostEqual(self.group_trainer.batch_joint_state_stamped[i][0], n_steps+1-i) # check time stamp
actual_expect_value = sum(self.group_trainer.batch_joint_state_stamped[i][1::entity_state_len]) # expected true value of state is sum over non-terminated states, ignoring stamp, with no agents terminated
self.assertAlmostEqual(self.group_trainer.central_vf_value(np.expand_dims(self.group_trainer.batch_joint_state_stamped[i],axis=0))[0], actual_expect_value)
if i == n_steps:
self.assertAlmostEqual(episode_factual_values[i], 0.0)
else:
self.assertAlmostEqual(episode_factual_values[i], actual_expect_value)
for agi in range(n_agents):
self.assertAlmostEqual(self.group_trainer.batch_joint_state_stamped[i][1+(agi+1)*entity_state_len-1], 0) # actual termination values are false
counterfactual_expect_value = actual_expect_value - self.group_trainer.batch_joint_state_stamped[i][1+agi*entity_state_len]
self.assertAlmostEqual(episode_counterfactual_values[agi][i], counterfactual_expect_value) # counterfactual drops agent agi's own contribution
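With the linear dummy critic used above, the terminated-baseline credit reduces to simple arithmetic: the factual value sums every live agent's state contribution, and agent i's counterfactual value is the same sum with agent i treated as terminated, i.e. its own contribution subtracted. A minimal numpy sketch of that relationship (numbers are illustrative):

```python
import numpy as np

contributions = np.array([0.3, -0.5, 0.9])   # per-agent contributions to the dummy value
factual = contributions.sum()                # value with all agents alive
counterfactual = factual - contributions     # value with each agent terminated in turn

assert np.isclose(factual, 0.7)
assert np.allclose(counterfactual, [0.4, 1.2, -0.2])
```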
class TestRedistributedSoftmax(unittest.TestCase):
'''Unit tests for the redistributed_softmax utility'''
def setUp(self):
pass
def test_redistributed_softmax_single_value(self):
'''redistributed_softmax: random single-value'''
for _ in range(100):
p_arr = [np.random.normal(0.0, 10.0)]
scale = np.random.uniform(0.0, 1.0)
p_scaled = redistributed_softmax(p_arr, scale)
self.assertAlmostEqual(p_scaled[0], 1.0)
def test_redistributed_softmax_two_values(self):
'''redistributed_softmax: random two-values'''
for _ in range(100):
p_arr = np.random.normal(0.0, 10.0, 2)
scale = np.random.uniform(0.0, 1.0)
p_scaled = redistributed_softmax(p_arr, scale)
self.assertAlmostEqual(sum(p_scaled), 1.0)
if scale > 0.5:
self.assertGreaterEqual(p_scaled[p_arr.argmin()], p_scaled[p_arr.argmax()])
else:
self.assertLessEqual(p_scaled[p_arr.argmin()], p_scaled[p_arr.argmax()])
def test_redistributed_softmax_multi_values(self):
'''redistributed_softmax: random multi-values'''
for _ in range(100):
n = np.random.randint(1,20)
p_arr = np.random.normal(0.0, 10.0, n)
scale = np.random.uniform(0.0, 1.0)
p_scaled = redistributed_softmax(p_arr, scale)
self.assertAlmostEqual(sum(p_scaled), 1.0)
if n > 1 and scale > 1.0 - 1.0/float(n):
self.assertFalse(p_scaled.argmax() == p_arr.argmax())
if __name__ == '__main__':
unittest.main()
# --- tests/test_maltracx.py ---
import os
import sys
rootdir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path = [rootdir] + sys.path
# --- gym_anybullet/envs/anymal_steerable_envs.py ---
import gym
from gym import spaces
from gym.utils import seeding
import pybullet as p
import numpy as np
from common.paths import MODELS_PATH
import time
def gaussian(x, mu, sig):
return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))
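The `gaussian` helper above is an unnormalized Gaussian: it omits the 1/(sigma*sqrt(2*pi)) factor, so its peak value is exactly 1 at `mu`. A quick self-contained check of that convention:

```python
import numpy as np

def gaussian(x, mu, sig):
    # Unnormalized bell curve: value 1.0 at the mean, exp(-0.5) one sigma away.
    return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))

assert np.isclose(gaussian(0.0, 0.0, 1.0), 1.0)
assert np.isclose(gaussian(1.0, 0.0, 1.0), np.exp(-0.5))
```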
class ANYmalHistoryNC(gym.Env):
"""
ANYmal quadruped locomotion environment whose observation includes a 4-step
joint-angle history ('NC' presumably short for 'no constraints')
"""
def __init__(self, render=False):
self._observation = []
self.observation_space = spaces.Box(-1 * np.array([np.inf] * 66), np.array([np.inf] * 66), dtype=np.float32)
self.quadruped_joint_angles = [0.03, 0.4, -0.8, -0.03, 0.4, -0.8, 0.03, -0.4, 0.8, -0.03, -0.4, 0.8]
# actions_low = np.asarray([-0.09, -0.2, -1.4, -0.15, -0.2, -1.4, -0.09, -1.0, 0.2, -0.15, -1.0, 0.2])
# actions_high = np.asarray([0.15, 1.0, -0.2, 0.09, 1.0, -0.2, 0.15, 0.2, 1.4, 0.09, 0.2, 1.4])
actions_low = np.asarray([-0.12, -0.873, -0.873, -0.20, -0.873, -0.873, -0.12, -0.873, -0.873, -0.20, -0.873, -0.873])
actions_high = np.asarray([0.20, 0.873, 0.873, 0.12, 0.873, 0.873, 0.20, 0.873, 0.873, 0.12, 0.873, 0.873])
self.action_space = spaces.Box(actions_low, actions_high, dtype=np.float32)
self.timestep = 0.01
self.render_mode = render
if render:
self.physics_client = p.connect(p.GUI)
else:
self.physics_client = p.connect(p.DIRECT)
p.resetSimulation()
p.setGravity(0, 0, -9.81)
p.setTimeStep(self.timestep)
self.plane_id = p.loadURDF(MODELS_PATH + 'plane/plane.urdf')
self.quadruped_start_pos = [0, 0, 0.5]
self.quadruped_start_orientation = p.getQuaternionFromEuler([0, 0, 0])
self.prev_joint_states = np.zeros((4, 12))
self.prev_joint_states[:-1, :] = self.prev_joint_states[1:, :]
self.prev_joint_states[-1, :] = self.quadruped_joint_angles
self.prev_action = self.quadruped_joint_angles
self.quadruped_id = p.loadURDF(MODELS_PATH + 'anymal_boxy/anymal_boxy.urdf',
self.quadruped_start_pos, self.quadruped_start_orientation)
p.setPhysicsEngineParameter(numSolverIterations=100)
self.quadruped_joint_ids = []
active_joint = 0
for j in range(p.getNumJoints(self.quadruped_id)):
p.changeDynamics(self.quadruped_id, j, linearDamping=0, angularDamping=0)
info = p.getJointInfo(self.quadruped_id, j)
joint_type = info[2]
if joint_type == p.JOINT_PRISMATIC or joint_type == p.JOINT_REVOLUTE:
self.quadruped_joint_ids.append(j)
p.resetJointState(self.quadruped_id, j, self.quadruped_joint_angles[active_joint])
active_joint += 1
self.feet_ids = {'LF':5, 'RF':10, 'LH':15, 'RH':20}
self.env_step_counter = 0
self.quadruped_pos = self.quadruped_start_pos
self.quadruped_orientation = self.quadruped_start_orientation
joint_torques = []
for j in self.quadruped_joint_ids:
joint_torques.append(p.getJointState(self.quadruped_id, j)[3])
self.prev_torques = np.asarray(joint_torques)
def step(self, action):
self.quadruped_pos, self.quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
action = np.clip(np.asarray(action[:]), self.action_space.low, self.action_space.high)
self._perform_action(action)
p.stepSimulation()
self.prev_action = action
self._observation = self._compute_observation()
reward = self._compute_reward()
done = self._compute_done()
self.env_step_counter += 1
return np.array(self._observation), reward, done, {}
def reset(self):
self.env_step_counter = 0
p.resetSimulation()
p.setGravity(0, 0, -9.81)
p.setTimeStep(self.timestep)
p.loadURDF(MODELS_PATH + 'plane/plane.urdf')
quadruped_start_pos = [0, 0, 0.5]
quadruped_start_orientation = p.getQuaternionFromEuler([0, 0, 0])
self.quadruped_id = p.loadURDF(MODELS_PATH + 'anymal_boxy/anymal_boxy.urdf',
quadruped_start_pos, quadruped_start_orientation)
active_joint = 0
for j in self.quadruped_joint_ids:
p.resetJointState(self.quadruped_id, j, self.quadruped_joint_angles[active_joint])
active_joint += 1
self._observation = self._compute_observation()
return np.array(self._observation)
def _perform_action(self, action):
i = 0
for j in self.quadruped_joint_ids:
p.setJointMotorControl2(self.quadruped_id, j, p.POSITION_CONTROL, action[i], force=40)
i += 1
def _compute_observation(self):
quadruped_pos, quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
quadruped_orientation = p.getEulerFromQuaternion(quadruped_orientation)
joint_states = []
for j in self.quadruped_joint_ids:
joint_states.append(p.getJointState(self.quadruped_id, j)[0])
self.prev_joint_states[:-1, :] = self.prev_joint_states[1:, :]
self.prev_joint_states[-1, :] = joint_states
observations = np.concatenate([quadruped_pos, quadruped_orientation,
self.prev_joint_states.flatten(), self.prev_action])
return observations
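The observation assembled above concatenates base position (3), base Euler orientation (3), a rolling 4-step joint-angle history (4 x 12 = 48), and the previous action (12), which is where the 66-dimensional `observation_space` in `__init__` comes from. A standalone sketch of the same layout and the shift-and-append history update (shapes only, values illustrative):

```python
import numpy as np

n_joints, history_len = 12, 4

pos = np.zeros(3)                            # base position
orn = np.zeros(3)                            # base Euler orientation
history = np.zeros((history_len, n_joints))  # rolling joint-angle history
prev_action = np.zeros(n_joints)             # last commanded joint targets

# Rolling update: drop the oldest row, append the newest joint state.
new_joint_state = np.ones(n_joints)
history[:-1, :] = history[1:, :]
history[-1, :] = new_joint_state

obs = np.concatenate([pos, orn, history.flatten(), prev_action])
assert obs.shape == (66,)  # 3 + 3 + 48 + 12
```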
def _compute_reward(self):
quadruped_pos, quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
quadruped_linear_vel, quadruped_angular_vel = p.getBaseVelocity(self.quadruped_id)
joint_torques = []
for j in self.quadruped_joint_ids:
joint_torques.append(p.getJointState(self.quadruped_id, j)[3])
vel_x = quadruped_linear_vel[0]
vel_y = quadruped_linear_vel[1]
vel_yaw = quadruped_angular_vel[2]
quadruped_orientation = p.getEulerFromQuaternion(quadruped_orientation)
if vel_x < 0.7:
rew_vel_x = vel_x
else:
rew_vel_x = 1.4 - vel_x
reward = 1 * rew_vel_x - 0.01 * np.abs(vel_y) \
- 0.01 * np.abs(vel_yaw) \
- 0.01 * np.abs(quadruped_orientation[0]) - 0.01 * np.abs(quadruped_orientation[1]) \
- 0.005 * np.abs(0.5 - quadruped_pos[2]) \
- 0.00001 * np.linalg.norm(np.asarray(joint_torques)) \
- 0.0001 * np.linalg.norm(self.prev_torques - joint_torques)
self.prev_torques = np.asarray(joint_torques)
return reward
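The reward above is a weighted sum of shaping terms: a triangular forward-velocity target that peaks at 0.7 m/s (and declines symmetrically up to 1.4 m/s), plus penalties on lateral and yaw velocity, roll and pitch, height error from 0.5 m, torque magnitude, and torque rate. A minimal standalone reproduction with the weights copied from the method (function name and argument order are illustrative):

```python
import numpy as np

def anymal_reward(vel_x, vel_y, vel_yaw, roll, pitch, height,
                  torques, prev_torques):
    # Triangular velocity shaping: rises to 0.7 at vel_x = 0.7, falls off beyond.
    rew_vel_x = vel_x if vel_x < 0.7 else 1.4 - vel_x
    return (1.0 * rew_vel_x
            - 0.01 * abs(vel_y)
            - 0.01 * abs(vel_yaw)
            - 0.01 * abs(roll) - 0.01 * abs(pitch)
            - 0.005 * abs(0.5 - height)
            - 0.00001 * np.linalg.norm(torques)
            - 0.0001 * np.linalg.norm(prev_torques - torques))

# At the target speed with all penalty terms zero, the reward equals the peak 0.7.
z = np.zeros(12)
assert np.isclose(anymal_reward(0.7, 0, 0, 0, 0, 0.5, z, z), 0.7)
```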
    def _get_velocity(self):
        quadruped_linear_vel, quadruped_angular_vel = p.getBaseVelocity(self.quadruped_id)
        vel_x = quadruped_linear_vel[0]
        vel_y = quadruped_linear_vel[1]
        return np.sqrt(vel_x ** 2 + vel_y ** 2)

    def _get_foot_contacts(self):
        LF = 0 if p.getContactPoints(self.quadruped_id, self.plane_id, linkIndexA=self.feet_ids['LF']) == () else 1
        RF = 0 if p.getContactPoints(self.quadruped_id, self.plane_id, linkIndexA=self.feet_ids['RF']) == () else 1
        LH = 0 if p.getContactPoints(self.quadruped_id, self.plane_id, linkIndexA=self.feet_ids['LH']) == () else 1
        RH = 0 if p.getContactPoints(self.quadruped_id, self.plane_id, linkIndexA=self.feet_ids['RH']) == () else 1
        return LF, RF, LH, RH
    def _compute_done(self):
        quadruped_pos, quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
        quadruped_linear_vel, _ = p.getBaseVelocity(self.quadruped_id)
        vel_x = quadruped_linear_vel[0]
        quadruped_orientation = p.getEulerFromQuaternion(quadruped_orientation)
        done = bool(quadruped_pos[2] < 0.3)
        done = bool(done or np.abs(quadruped_orientation[0]) >= np.pi / 4)
        done = bool(done or np.abs(quadruped_orientation[1]) >= np.pi / 4)
        done = bool(done or np.abs(quadruped_orientation[2]) >= np.pi / 4)
        done = bool(done or vel_x > 1)
        done = bool(done or self.env_step_counter >= 4096)
        return done
    def render(self, mode='human', close=False):
        if not self.render_mode:
            # One-time switch from DIRECT to GUI: rebuild the simulation.
            p.disconnect()
            self.render_mode = True
            self.physics_client = p.connect(p.GUI)
            p.configureDebugVisualizer(p.COV_ENABLE_GUI, 0)
            p.resetSimulation()
            p.setGravity(0, 0, -9.81)
            p.setTimeStep(0.01)
            p.loadURDF(MODELS_PATH + 'plane/plane.urdf')
            self.quadruped_id = p.loadURDF(MODELS_PATH + 'anymal_boxy/anymal_boxy.urdf',
                                           self.quadruped_start_pos, self.quadruped_start_orientation)
            p.setPhysicsEngineParameter(numSolverIterations=100)
            self.quadruped_joint_ids = []
            for j in range(p.getNumJoints(self.quadruped_id)):
                p.changeDynamics(self.quadruped_id, j, linearDamping=0, angularDamping=0)
                info = p.getJointInfo(self.quadruped_id, j)
                joint_type = info[2]
                if joint_type == p.JOINT_PRISMATIC or joint_type == p.JOINT_REVOLUTE:
                    self.quadruped_joint_ids.append(j)
            p.setRealTimeSimulation(1)
        time.sleep(0.01)
class ANYmalHistory3(ANYmalHistoryNC):
    def __init__(self, *args, **kwargs):
        super(ANYmalHistory3, self).__init__(*args, **kwargs)
        self.prev_joint_states = np.zeros((3, 12))
        self.observation_space = spaces.Box(-1 * np.array([np.inf] * 42), np.array([np.inf] * 42), dtype=np.float32)

    def _perform_action(self, action):
        for i, j in enumerate(self.quadruped_joint_ids):
            p.setJointMotorControl2(self.quadruped_id, j, p.POSITION_CONTROL,
                                    action[i], force=40)

    def _compute_observation(self):
        quadruped_pos, quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
        quadruped_orientation = p.getEulerFromQuaternion(quadruped_orientation)
        joint_states = []
        for j in self.quadruped_joint_ids:
            joint_states.append(p.getJointState(self.quadruped_id, j)[0])
        self.prev_joint_states[:-1, :] = self.prev_joint_states[1:, :]
        self.prev_joint_states[-1, :] = joint_states
        observations = np.concatenate([quadruped_pos, quadruped_orientation,
                                       self.prev_joint_states.flatten()])
        return observations
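`prev_joint_states` acts as a fixed-length history: every observation shifts the rows up by one and writes the newest joint readings into the last row, so the policy always sees the three most recent timesteps. A small sketch of that rolling update (the buffer shape and shift idiom match the code above; the fill values are made up):

```python
import numpy as np

# 3 past timesteps, 12 joint positions each, as in ANYmalHistory3.
history = np.zeros((3, 12))

for t in range(1, 5):
    joint_states = np.full(12, float(t))  # stand-in for p.getJointState readings
    history[:-1, :] = history[1:, :]      # drop the oldest row
    history[-1, :] = joint_states         # append the newest readings
```

After four updates the buffer holds the readings from steps 2, 3, and 4; step 1 has already rolled off the front.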
    def _compute_reward(self):
        quadruped_pos, quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
        quadruped_linear_vel, quadruped_angular_vel = p.getBaseVelocity(self.quadruped_id)
        joint_torques = []
        for j in self.quadruped_joint_ids:
            joint_torques.append(p.getJointState(self.quadruped_id, j)[3])
        vel_x = quadruped_linear_vel[0]
        vel_y = quadruped_linear_vel[1]
        vel_yaw = quadruped_angular_vel[2]
        quadruped_orientation = p.getEulerFromQuaternion(quadruped_orientation)
        if vel_x < 0.7:
            rew_vel_x = vel_x
        else:
            rew_vel_x = 1.4 - vel_x
        reward = 1 * rew_vel_x - 0.01 * np.abs(vel_y) \
            - 0.01 * np.abs(vel_yaw) \
            - 0.01 * np.abs(quadruped_orientation[0]) - 0.01 * np.abs(quadruped_orientation[1]) \
            - 0.0001 * np.linalg.norm(self.prev_torques - joint_torques)
        self.prev_torques = np.asarray(joint_torques)
        return reward
class ANYmalHistory3Steer(ANYmalHistory3):
    def __init__(self, *args, **kwargs):
        # Fixed: super() previously named ANYmalHistory3, which skipped
        # ANYmalHistory3.__init__ in the MRO; it must name this class.
        super(ANYmalHistory3Steer, self).__init__(*args, **kwargs)
        self.goal_velocity_low = 0.2
        self.goal_velocity_high = 0.9
        self.target_velocity = np.random.uniform(self.goal_velocity_low, self.goal_velocity_high)
        self.observation_space = spaces.Box(-1 * np.array([np.inf] * 55), np.array([np.inf] * 55), dtype=np.float32)
        self.prev_joint_states = np.zeros((3, 18))

    def _compute_observation(self):
        quadruped_pos, quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
        quadruped_orientation = p.getEulerFromQuaternion(quadruped_orientation)
        joint_states = []
        for j in self.quadruped_joint_ids:
            joint_states.append(p.getJointState(self.quadruped_id, j)[0])
        # Each history row stores 12 joint positions plus base pose (3 + 3).
        self.prev_joint_states[:-1, :] = self.prev_joint_states[1:, :]
        self.prev_joint_states[-1, 0:12] = joint_states
        self.prev_joint_states[-1, 12:15] = quadruped_pos
        self.prev_joint_states[-1, 15:18] = quadruped_orientation
        observations = np.concatenate([self.prev_joint_states.flatten(), np.array(self.target_velocity).reshape(1)])
        return observations

    def reset(self):
        self.env_step_counter = 0
        # Sample a new goal velocity for every episode.
        self.target_velocity = np.random.uniform(self.goal_velocity_low, self.goal_velocity_high)
        p.resetSimulation()
        p.setGravity(0, 0, -9.81)
        p.setTimeStep(self.timestep)
        p.loadURDF(MODELS_PATH + 'plane/plane.urdf')
        quadruped_start_pos = [0, 0, 0.5]
        quadruped_start_orientation = p.getQuaternionFromEuler([0, 0, 0])
        self.quadruped_id = p.loadURDF(MODELS_PATH + 'anymal_boxy/anymal_boxy.urdf',
                                       quadruped_start_pos, quadruped_start_orientation)
        for active_joint, j in enumerate(self.quadruped_joint_ids):
            p.resetJointState(self.quadruped_id, j, self.quadruped_joint_angles[active_joint])
        self._observation = self._compute_observation()
        return np.array(self._observation)

    def _compute_reward(self):
        quadruped_pos, quadruped_orientation = p.getBasePositionAndOrientation(self.quadruped_id)
        quadruped_linear_vel, quadruped_angular_vel = p.getBaseVelocity(self.quadruped_id)
        joint_torques = []
        for j in self.quadruped_joint_ids:
            joint_torques.append(p.getJointState(self.quadruped_id, j)[3])
        vel_x = quadruped_linear_vel[0]
        vel_y = quadruped_linear_vel[1]
        vel_yaw = quadruped_angular_vel[2]
        quadruped_orientation = p.getEulerFromQuaternion(quadruped_orientation)
        # Tent reward centered on the episode's sampled target velocity.
        if vel_x < self.target_velocity:
            rew_vel_x = vel_x
        else:
            rew_vel_x = (2 * self.target_velocity) - vel_x
        reward = 1 * rew_vel_x - 0.01 * np.abs(vel_y) \
            - 0.01 * np.abs(vel_yaw) \
            - 0.01 * np.abs(quadruped_orientation[0]) - 0.01 * np.abs(quadruped_orientation[1]) \
            - 0.0001 * np.linalg.norm(self.prev_torques - joint_torques)
        self.prev_torques = np.asarray(joint_torques)
return reward | 39.796748 | 126 | 0.639768 | 1,919 | 14,685 | 4.643043 | 0.097968 | 0.116723 | 0.065657 | 0.042649 | 0.83771 | 0.79899 | 0.776319 | 0.749383 | 0.734343 | 0.716835 | 0 | 0.041407 | 0.246782 | 14,685 | 369 | 127 | 39.796748 | 0.764126 | 0.015186 | 0 | 0.683206 | 0 | 0 | 0.013641 | 0.007755 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072519 | false | 0 | 0.026718 | 0.003817 | 0.160305 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
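`ANYmalHistory3Steer` generalizes the fixed 0.7 m/s peak: the velocity reward now peaks at a target resampled from [0.2, 0.9] on every reset. A standalone sketch of just that velocity term (the bounds and the `2 * target - v` mirror come from the class above; the function name is ours):

```python
import random

GOAL_VELOCITY_LOW, GOAL_VELOCITY_HIGH = 0.2, 0.9  # as in ANYmalHistory3Steer

def steer_velocity_reward(vel_x, target_velocity):
    """Tent-shaped reward that is maximal at the sampled target velocity."""
    if vel_x < target_velocity:
        return vel_x
    return (2 * target_velocity) - vel_x

# Resampled once per episode in reset().
target = random.uniform(GOAL_VELOCITY_LOW, GOAL_VELOCITY_HIGH)
```

Because the target is also appended to the observation, the policy can condition its gait on the commanded speed rather than converging to a single pace.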
a6d4d8dddadf06b12919360793b22f0271e0c9ba | 148 | py | Python | user/models.py | pspyasasvi/webapp | e56c0186271a23c69433ca5e8bc418d8d3069919 | [
"MIT"
] | 6 | 2021-02-20T00:56:11.000Z | 2022-02-09T00:29:41.000Z | user/models.py | pspyasasvi/webapp | e56c0186271a23c69433ca5e8bc418d8d3069919 | [
"MIT"
] | null | null | null | user/models.py | pspyasasvi/webapp | e56c0186271a23c69433ca5e8bc418d8d3069919 | [
"MIT"
] | 1 | 2021-02-28T15:10:55.000Z | 2021-02-28T15:10:55.000Z | from django.db import models
from django.contrib.auth.models import AbstractUser
# Create your models here.
class UserModel(AbstractUser):
pass | 24.666667 | 51 | 0.804054 | 20 | 148 | 5.95 | 0.7 | 0.168067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 148 | 6 | 52 | 24.666667 | 0.929688 | 0.162162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a6ef222a65605219c06efb1c3a29e746fb34fe42 | 124 | py | Python | python/testData/refactoring/move/baseClass/before/src/a.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2018-12-29T09:53:39.000Z | 2018-12-29T09:53:42.000Z | python/testData/refactoring/move/baseClass/before/src/a.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/refactoring/move/baseClass/before/src/a.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | class B(object):
def __init__(self):
pass
class C(B):
def __init__(self):
super(C, self).__init__() | 17.714286 | 33 | 0.572581 | 17 | 124 | 3.470588 | 0.529412 | 0.237288 | 0.372881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.282258 | 124 | 7 | 33 | 17.714286 | 0.662921 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
47275d252249cc5c7d45106aa6c7c61cc56cfe3a | 266 | py | Python | project-5/RL/tasks/__init__.py | linuxbender/Deep_Learning | 3df4b26777a71ddbe461ac46dafa36b34be84348 | [
"MIT"
] | null | null | null | project-5/RL/tasks/__init__.py | linuxbender/Deep_Learning | 3df4b26777a71ddbe461ac46dafa36b34be84348 | [
"MIT"
] | null | null | null | project-5/RL/tasks/__init__.py | linuxbender/Deep_Learning | 3df4b26777a71ddbe461ac46dafa36b34be84348 | [
"MIT"
] | null | null | null | from quad_controller_rl.tasks.base_task import BaseTask
from quad_controller_rl.tasks.takeoff import Takeoff
from quad_controller_rl.tasks.hover import Hover
from quad_controller_rl.tasks.landing import Landing
from quad_controller_rl.tasks.combined import Combined
| 44.333333 | 55 | 0.887218 | 41 | 266 | 5.487805 | 0.317073 | 0.177778 | 0.4 | 0.444444 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075188 | 266 | 5 | 56 | 53.2 | 0.914634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b2fdadfbc78f404c65007befa2aa87311144c2f | 37 | py | Python | cm2.py | 896385665/testRebase | 6a4478a7c47b250d86dd275040139719900d92b7 | [
"MIT"
] | null | null | null | cm2.py | 896385665/testRebase | 6a4478a7c47b250d86dd275040139719900d92b7 | [
"MIT"
] | null | null | null | cm2.py | 896385665/testRebase | 6a4478a7c47b250d86dd275040139719900d92b7 | [
"MIT"
] | null | null | null | '''cm2'''
a = 11
d = 14
f = 8
u = 88 | 6.166667 | 9 | 0.378378 | 9 | 37 | 1.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0.351351 | 37 | 6 | 10 | 6.166667 | 0.25 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5b4283230a742d72dfe76cef967f034e3c8c70fa | 93 | py | Python | dashboard/admin.py | iDevam/FoodPantry | fe0b64813b895e53ce7675d4316e1dbc96cdf7c9 | [
"MIT"
] | null | null | null | dashboard/admin.py | iDevam/FoodPantry | fe0b64813b895e53ce7675d4316e1dbc96cdf7c9 | [
"MIT"
] | null | null | null | dashboard/admin.py | iDevam/FoodPantry | fe0b64813b895e53ce7675d4316e1dbc96cdf7c9 | [
"MIT"
] | 8 | 2020-04-21T01:45:14.000Z | 2020-09-19T13:10:04.000Z | from django.contrib import admin
from dashboard.models import *
admin.site.register(profile) | 23.25 | 32 | 0.827957 | 13 | 93 | 5.923077 | 0.769231 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 93 | 4 | 33 | 23.25 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b431124751f4d49054f199ed986df9eb761b8d5 | 152 | py | Python | 0_basic_versions_check.py | codeclassifiers/nnfs | 8583c1ccf3d155779057cb5041d52a3002282b04 | [
"MIT"
] | 1 | 2021-09-18T05:00:05.000Z | 2021-09-18T05:00:05.000Z | 0_basic_versions_check.py | codeclassifiers/nnfs | 8583c1ccf3d155779057cb5041d52a3002282b04 | [
"MIT"
] | null | null | null | 0_basic_versions_check.py | codeclassifiers/nnfs | 8583c1ccf3d155779057cb5041d52a3002282b04 | [
"MIT"
] | 1 | 2021-09-18T05:00:06.000Z | 2021-09-18T05:00:06.000Z | import sys
import numpy as np
import matplotlib
print("Python", sys.version)
print("Numpy", np.__version__)
print("Matplotlib", matplotlib.__version__) | 21.714286 | 43 | 0.789474 | 20 | 152 | 5.6 | 0.45 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092105 | 152 | 7 | 43 | 21.714286 | 0.811594 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
5b86216cb19c875a3598cf940da62ba23a6c6c22 | 303 | py | Python | three.py/mathutils/__init__.py | lukestanley/three.py | a3fa99cb3553aca8c74ceabb8203edeb55450803 | [
"MIT"
] | 80 | 2019-04-04T13:41:32.000Z | 2022-01-12T18:40:19.000Z | three.py/mathutils/__init__.py | lukestanley/three.py | a3fa99cb3553aca8c74ceabb8203edeb55450803 | [
"MIT"
] | 9 | 2019-04-04T14:43:50.000Z | 2020-03-29T04:50:53.000Z | three.py/mathutils/__init__.py | lukestanley/three.py | a3fa99cb3553aca8c74ceabb8203edeb55450803 | [
"MIT"
] | 17 | 2019-04-04T14:20:42.000Z | 2022-03-03T16:26:29.000Z | from mathutils.MatrixFactory import *
from mathutils.Matrix import *
from mathutils.Curve import *
from mathutils.CurveFactory import *
from mathutils.Multicurve import *
from mathutils.Surface import *
from mathutils.Hilbert3D import *
from mathutils.RandomUtils import *
from mathutils.Tween import *
| 30.3 | 37 | 0.821782 | 36 | 303 | 6.916667 | 0.333333 | 0.46988 | 0.610442 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003745 | 0.118812 | 303 | 9 | 38 | 33.666667 | 0.928839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b92d9ae57f660ed01b9fa9f53311d76fdb91799 | 48 | py | Python | crabageprediction/venv/Lib/site-packages/mpl_toolkits/axes_grid/clip_path.py | 13rianlucero/CrabAgePrediction | 92bc7fbe1040f49e820473e33cc3902a5a7177c7 | [
"MIT"
] | 603 | 2020-12-23T13:49:32.000Z | 2022-03-31T23:38:03.000Z | venv/lib/python3.7/site-packages/mpl_toolkits/axes_grid/clip_path.py | John1001Song/Big-Data-Robo-Adviser | 9444dce96954c546333d5aecc92a06c3bfd19aa5 | [
"MIT"
] | 387 | 2020-12-15T14:54:04.000Z | 2022-03-31T07:00:21.000Z | venv/lib/python3.7/site-packages/mpl_toolkits/axes_grid/clip_path.py | John1001Song/Big-Data-Robo-Adviser | 9444dce96954c546333d5aecc92a06c3bfd19aa5 | [
"MIT"
] | 64 | 2018-04-25T08:51:57.000Z | 2022-01-29T14:13:57.000Z | from mpl_toolkits.axisartist.clip_path import *
| 24 | 47 | 0.854167 | 7 | 48 | 5.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.886364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ba2a0dbaa39ffbbe22dd3c3ae8bce24ee985f69 | 34 | py | Python | vimeodownload/__init__.py | jamiegyoung/vimeodownload.py | bdbb75491337082a473a258bcc09afd25dba2bdd | [
"MIT"
] | 2 | 2021-04-01T13:45:27.000Z | 2021-11-02T04:10:20.000Z | vimeodownload/__init__.py | jamiegyoung/vimeo-download-py | bdbb75491337082a473a258bcc09afd25dba2bdd | [
"MIT"
] | null | null | null | vimeodownload/__init__.py | jamiegyoung/vimeo-download-py | bdbb75491337082a473a258bcc09afd25dba2bdd | [
"MIT"
] | null | null | null | from .downloader import get_video
| 17 | 33 | 0.852941 | 5 | 34 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5bb98597d11d68baaf61fcb2d882b5dc895ebd99 | 26,457 | py | Python | wsol/inception.py | umairjavaid/anonymous | 84e9a2b2b8ceeed0d0097c3c0489090138985dea | [
"MIT"
] | null | null | null | wsol/inception.py | umairjavaid/anonymous | 84e9a2b2b8ceeed0d0097c3c0489090138985dea | [
"MIT"
] | 1 | 2021-07-01T07:53:38.000Z | 2021-07-01T07:53:38.000Z | wsol/inception.py | umairjavaid/wsol2 | 7d258b6b4a99df62b35747656937a58f58bc36b7 | [
"MIT"
] | null | null | null | """
Original code: https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py
"""
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.model_zoo import load_url
from .method import AcolBase
from .method import ADL
from .method import normalize_tensor
from .method import spg
from .method import mymodel2
from .method import MyModel2
from .util import initialize_weights
from .util import remove_layer
__all__ = ['inception_v3']
model_urls = {
    'inception_v3_google':
        'https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth',
}
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels, eps=0.001)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return F.relu(x, inplace=True)
class InceptionA(nn.Module):
    def __init__(self, in_channels, pool_features):
        super(InceptionA, self).__init__()
        self.branch1x1 = BasicConv2d(in_channels, 64, 1)

        self.branch5x5_1 = BasicConv2d(in_channels, 48, 1)
        self.branch5x5_2 = BasicConv2d(48, 64, 5, padding=2)

        self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, 1)
        self.branch3x3dbl_2 = BasicConv2d(64, 96, 3, padding=1)
        self.branch3x3dbl_3 = BasicConv2d(96, 96, 3, padding=1)

        self.branch_pool = BasicConv2d(in_channels, pool_features, 1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
        return torch.cat(outputs, 1)
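`InceptionA` concatenates four branches along the channel dimension, so its output width is 64 + 64 + 96 + `pool_features` regardless of the input channel count. A quick pure-Python sanity check of how the `Mixed_5b`/`5c`/`5d` chain used in the networks below arrives at 192 → 256 → 288 → 288 channels (helper name is ours):

```python
def inception_a_out_channels(pool_features):
    # branch1x1 (64) + branch5x5 (64) + branch3x3dbl (96) + branch_pool
    return 64 + 64 + 96 + pool_features

c = 192                            # channels entering Mixed_5b
c = inception_a_out_channels(32)   # Mixed_5b: 192 in -> 256 out
c = inception_a_out_channels(64)   # Mixed_5c: 256 in -> 288 out
c = inception_a_out_channels(64)   # Mixed_5d: 288 in -> 288 out
```

This is why `Mixed_6a` below is constructed with `in_channels=288`.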
class InceptionB(nn.Module):
    def __init__(self, in_channels, kernel_size=3, stride=2, padding=0):
        super(InceptionB, self).__init__()
        self.branch3x3 = BasicConv2d(in_channels, 384, kernel_size,
                                     stride=stride, padding=padding)

        self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, 1)
        self.branch3x3dbl_2 = BasicConv2d(64, 96, 3, padding=1)
        self.branch3x3dbl_3 = BasicConv2d(96, 96, 3,
                                          stride=stride, padding=padding)

        self.stride = stride

    def forward(self, x):
        branch3x3 = self.branch3x3(x)

        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)

        branch_pool = F.max_pool2d(x, kernel_size=3, stride=self.stride,
                                   padding=1)

        outputs = [branch3x3, branch3x3dbl, branch_pool]
        return torch.cat(outputs, 1)
class InceptionC(nn.Module):
    def __init__(self, in_channels, channels_7x7):
        super(InceptionC, self).__init__()
        self.branch1x1 = BasicConv2d(in_channels, 192, 1)

        c7 = channels_7x7
        self.branch7x7_1 = BasicConv2d(in_channels, c7, 1)
        self.branch7x7_2 = BasicConv2d(c7, c7, (1, 7), padding=(0, 3))
        self.branch7x7_3 = BasicConv2d(c7, 192, (7, 1), padding=(3, 0))

        self.branch7x7dbl_1 = BasicConv2d(in_channels, c7, 1)
        self.branch7x7dbl_2 = BasicConv2d(c7, c7, (7, 1), padding=(3, 0))
        self.branch7x7dbl_3 = BasicConv2d(c7, c7, (1, 7), padding=(0, 3))
        self.branch7x7dbl_4 = BasicConv2d(c7, c7, (7, 1), padding=(3, 0))
        self.branch7x7dbl_5 = BasicConv2d(c7, 192, (1, 7), padding=(0, 3))

        self.branch_pool = BasicConv2d(in_channels, 192, 1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch7x7 = self.branch7x7_1(x)
        branch7x7 = self.branch7x7_2(branch7x7)
        branch7x7 = self.branch7x7_3(branch7x7)

        branch7x7dbl = self.branch7x7dbl_1(x)
        branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool]
        return torch.cat(outputs, 1)
class InceptionCam(nn.Module):
    def __init__(self, num_classes=1000, large_feature_map=False, **kwargs):
        super(InceptionCam, self).__init__()
        self.large_feature_map = large_feature_map

        self.Conv2d_1a_3x3 = BasicConv2d(3, 32, 3, stride=2, padding=1)
        self.Conv2d_2a_3x3 = BasicConv2d(32, 32, 3, stride=1, padding=0)
        self.Conv2d_2b_3x3 = BasicConv2d(32, 64, 3, stride=1, padding=1)
        self.Conv2d_3b_1x1 = BasicConv2d(64, 80, 1, stride=1, padding=0)
        self.Conv2d_4a_3x3 = BasicConv2d(80, 192, 3, stride=1, padding=0)

        self.Mixed_5b = InceptionA(192, pool_features=32)
        self.Mixed_5c = InceptionA(256, pool_features=64)
        self.Mixed_5d = InceptionA(288, pool_features=64)
        self.Mixed_6a = InceptionB(288, kernel_size=3, stride=1, padding=1)
        self.Mixed_6b = InceptionC(768, channels_7x7=128)
        self.Mixed_6c = InceptionC(768, channels_7x7=160)
        self.Mixed_6d = InceptionC(768, channels_7x7=160)
        self.Mixed_6e = InceptionC(768, channels_7x7=192)

        self.SPG_A3_1b = nn.Sequential(
            nn.Conv2d(768, 1024, 3, padding=1),
            nn.ReLU(True),
        )
        self.SPG_A3_2b = nn.Sequential(
            nn.Conv2d(1024, 1024, 3, padding=1),
            nn.ReLU(True),
        )
        self.SPG_A4 = nn.Conv2d(1024, num_classes, 1, padding=0)
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        initialize_weights(self.modules(), init_mode='xavier')

    def forward(self, x, labels=None, return_cam=False):
        batch_size = x.shape[0]

        x = self.Conv2d_1a_3x3(x)
        x = self.Conv2d_2a_3x3(x)
        x = self.Conv2d_2b_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Conv2d_3b_1x1(x)
        x = self.Conv2d_4a_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Mixed_5b(x)
        x = self.Mixed_5c(x)
        x = self.Mixed_5d(x)
        if not self.large_feature_map:
            x = F.max_pool2d(x, kernel_size=3, stride=2, ceil_mode=True)
        x = self.Mixed_6a(x)
        x = self.Mixed_6b(x)
        x = self.Mixed_6c(x)
        x = self.Mixed_6d(x)
        feat = self.Mixed_6e(x)

        x = F.dropout(feat, 0.5, self.training)
        x = self.SPG_A3_1b(x)
        x = F.dropout(x, 0.5, self.training)
        x = self.SPG_A3_2b(x)
        x = F.dropout(x, 0.5, self.training)
        feat_map = self.SPG_A4(x)

        logits = self.avgpool(feat_map)
        logits = logits.view(logits.shape[0:2])

        if return_cam:
            feature_map = feat_map.clone().detach()
            cams = feature_map[range(batch_size), labels]
            return cams
        return {'logits': logits}

    def get_loss(self, logits, target):
        loss_cls = nn.CrossEntropyLoss()(logits, target.long())
        return loss_cls
class InceptionAcol(AcolBase):
    def __init__(self, num_classes=1000, large_feature_map=False, **kwargs):
        super(InceptionAcol, self).__init__()
        self.large_feature_map = large_feature_map
        self.drop_threshold = kwargs['acol_drop_threshold']

        self.Conv2d_1a_3x3 = BasicConv2d(3, 32, 3, stride=2, padding=1)
        self.Conv2d_2a_3x3 = BasicConv2d(32, 32, 3)
        self.Conv2d_2b_3x3 = BasicConv2d(32, 64, 3, padding=1)
        self.Conv2d_3b_1x1 = BasicConv2d(64, 80, 1)
        self.Conv2d_4a_3x3 = BasicConv2d(80, 192, 3)

        self.Mixed_5b = InceptionA(192, pool_features=32)
        self.Mixed_5c = InceptionA(256, pool_features=64)
        self.Mixed_5d = InceptionA(288, pool_features=64)
        self.Mixed_6a = InceptionB(288, kernel_size=3, stride=1, padding=1)
        self.Mixed_6b = InceptionC(768, channels_7x7=128)
        self.Mixed_6c = InceptionC(768, channels_7x7=160)
        self.Mixed_6d = InceptionC(768, channels_7x7=160)
        self.Mixed_6e = InceptionC(768, channels_7x7=192)

        self.classifier_A = nn.Sequential(
            nn.Conv2d(768, 1024, kernel_size=3, stride=1, padding=1),
            nn.ReLU(True),
            nn.Conv2d(1024, 1024, kernel_size=3, stride=1, padding=1),
            nn.ReLU(True),
            nn.Conv2d(1024, num_classes, kernel_size=1, padding=0)
        )
        self.classifier_B = nn.Sequential(
            nn.Conv2d(768, 1024, kernel_size=3, stride=1, padding=1),
            nn.ReLU(True),
            nn.Conv2d(1024, 1024, kernel_size=3, stride=1, padding=1),
            nn.ReLU(True),
            nn.Conv2d(1024, num_classes, kernel_size=1, padding=0)
        )
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        initialize_weights(self.modules(), init_mode='xavier')

    def forward(self, x, labels=None, return_cam=False):
        batch_size = x.shape[0]

        x = self.Conv2d_1a_3x3(x)
        x = self.Conv2d_2a_3x3(x)
        x = self.Conv2d_2b_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Conv2d_3b_1x1(x)
        x = self.Conv2d_4a_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Mixed_5b(x)
        x = self.Mixed_5c(x)
        x = self.Mixed_5d(x)
        if not self.large_feature_map:
            x = F.max_pool2d(x, kernel_size=3, stride=2, ceil_mode=True)
        x = self.Mixed_6a(x)
        x = self.Mixed_6b(x)
        x = self.Mixed_6c(x)
        x = self.Mixed_6d(x)
        feature = self.Mixed_6e(x)

        logits_dict = self._acol_logits(feature=feature, labels=labels,
                                        drop_threshold=self.drop_threshold)

        if return_cam:
            normalized_a = normalize_tensor(
                logits_dict['feat_map_a'].clone().detach())
            normalized_b = normalize_tensor(
                logits_dict['feat_map_b'].clone().detach())
            feature_maps = torch.max(normalized_a, normalized_b)
            cams = feature_maps[range(batch_size), labels]
            return cams
        return logits_dict
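At CAM-extraction time, ACOL fuses the two adversarial heads by taking an element-wise maximum of their normalized activation maps. Assuming `normalize_tensor` performs per-map min-max scaling to [0, 1] (its definition lives in `.method` and is not shown here), the fusion step can be sketched in NumPy:

```python
import numpy as np

def minmax_normalize(m):
    """Scale a map to [0, 1]; assumed behavior of normalize_tensor."""
    m = m - m.min()
    denom = m.max()
    return m / denom if denom > 0 else m

a = np.array([[0.0, 2.0], [4.0, 8.0]])  # head A activation map (made up)
b = np.array([[9.0, 3.0], [0.0, 3.0]])  # head B activation map (made up)

# Counterpart of torch.max(normalized_a, normalized_b) above.
fused = np.maximum(minmax_normalize(a), minmax_normalize(b))
```

Normalizing each head before the max keeps one head's larger raw magnitudes from dominating the fused localization map.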
class InceptionSpg(nn.Module):
    def __init__(self, num_classes=1000, large_feature_map=False, **kwargs):
        super(InceptionSpg, self).__init__()
        self.large_feature_map = large_feature_map

        self.Conv2d_1a_3x3 = BasicConv2d(3, 32, 3, stride=2, padding=1)
        self.Conv2d_2a_3x3 = BasicConv2d(32, 32, 3, stride=1, padding=0)
        self.Conv2d_2b_3x3 = BasicConv2d(32, 64, 3, stride=1, padding=1)
        self.Conv2d_3b_1x1 = BasicConv2d(64, 80, 1, stride=1, padding=0)
        self.Conv2d_4a_3x3 = BasicConv2d(80, 192, 3, stride=1, padding=0)

        self.Mixed_5b = InceptionA(192, pool_features=32)
        self.Mixed_5c = InceptionA(256, pool_features=64)
        self.Mixed_5d = InceptionA(288, pool_features=64)
        self.Mixed_6a = InceptionB(288, kernel_size=3, stride=1, padding=1)
        self.Mixed_6b = InceptionC(768, channels_7x7=128)
        self.Mixed_6c = InceptionC(768, channels_7x7=160)
        self.Mixed_6d = InceptionC(768, channels_7x7=160)
        self.Mixed_6e = InceptionC(768, channels_7x7=192)

        self.SPG_A3_1b = nn.Sequential(
            nn.Conv2d(768, 1024, kernel_size=3, padding=1),
            nn.ReLU(True),
        )
        self.SPG_A3_2b = nn.Sequential(
            nn.Conv2d(1024, 1024, kernel_size=3, padding=1),
            nn.ReLU(True),
        )
        self.SPG_A4 = nn.Conv2d(1024, num_classes, kernel_size=1, padding=0)
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        self.SPG_B_1a = nn.Sequential(
            nn.Conv2d(288, 512, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.SPG_B_2a = nn.Sequential(
            nn.Conv2d(768, 512, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.SPG_B_shared = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 1, kernel_size=1, padding=0),
        )
        self.SPG_C = nn.Sequential(
            nn.Conv2d(1024, 512, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 1, kernel_size=1),
        )

        initialize_weights(self.modules(), init_mode='xavier')

    def forward(self, x, labels=None, return_cam=False):
        batch_size = x.shape[0]

        x = self.Conv2d_1a_3x3(x)
        x = self.Conv2d_2a_3x3(x)
        x = self.Conv2d_2b_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Conv2d_3b_1x1(x)
        x = self.Conv2d_4a_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Mixed_5b(x)
        x = self.Mixed_5c(x)
        x = self.Mixed_5d(x)
        if not self.large_feature_map:
            x = F.max_pool2d(x, kernel_size=3, stride=2, ceil_mode=True)

        logits_b1 = self.SPG_B_1a(x)
        logits_b1 = self.SPG_B_shared(logits_b1)

        x = self.Mixed_6a(x)
        x = self.Mixed_6b(x)
        x = self.Mixed_6c(x)
        x = self.Mixed_6d(x)
        feat = self.Mixed_6e(x)

        logits_b2 = self.SPG_B_2a(x)
        logits_b2 = self.SPG_B_shared(logits_b2)

        x = F.dropout(feat, 0.5, self.training)
        x = self.SPG_A3_1b(x)
        x = F.dropout(x, 0.5, self.training)
        x = self.SPG_A3_2b(x)
        x = F.dropout(x, 0.5, self.training)
        feat_map = self.SPG_A4(x)

        logits_c = self.SPG_C(x)

        logits = self.avgpool(feat_map)
        logits = logits.view(logits.shape[0:2])

        labels = logits.argmax(dim=1).long() if labels is None else labels
        attention, fused_attention = spg.compute_attention(
            feat_map=feat_map, labels=labels,
            logits_b1=logits_b1, logits_b2=logits_b2)

        if return_cam:
            feature_map = feat_map.clone().detach()
            cams = feature_map[range(batch_size), labels]
            return cams
        return {'attention': attention, 'fused_attention': fused_attention,
                'logits': logits, 'logits_b1': logits_b1,
                'logits_b2': logits_b2, 'logits_c': logits_c}
class InceptionAdl(nn.Module):
    def __init__(self, num_classes=1000, large_feature_map=False, **kwargs):
        super(InceptionAdl, self).__init__()
        self.large_feature_map = large_feature_map
        self.adl_drop_rate = kwargs['adl_drop_rate']
        self.adl_threshold = kwargs['adl_drop_threshold']

        self.ADL_5d = ADL(self.adl_drop_rate, self.adl_threshold)
        self.ADL_6e = ADL(self.adl_drop_rate, self.adl_threshold)
        self.ADL_A3_2b = ADL(self.adl_drop_rate, self.adl_threshold)

        self.Conv2d_1a_3x3 = BasicConv2d(3, 32, 3, stride=2, padding=1)
        self.Conv2d_2a_3x3 = BasicConv2d(32, 32, 3, stride=1, padding=0)
        self.Conv2d_2b_3x3 = BasicConv2d(32, 64, 3, stride=1, padding=1)
        self.Conv2d_3b_1x1 = BasicConv2d(64, 80, 1, stride=1, padding=0)
        self.Conv2d_4a_3x3 = BasicConv2d(80, 192, 3, stride=1, padding=0)

        self.Mixed_5b = InceptionA(192, pool_features=32)
        self.Mixed_5c = InceptionA(256, pool_features=64)
        self.Mixed_5d = InceptionA(288, pool_features=64)
        self.Mixed_6a = InceptionB(288, kernel_size=3, stride=1, padding=1)
        self.Mixed_6b = InceptionC(768, channels_7x7=128)
        self.Mixed_6c = InceptionC(768, channels_7x7=160)
        self.Mixed_6d = InceptionC(768, channels_7x7=160)
        self.Mixed_6e = InceptionC(768, channels_7x7=192)

        self.SPG_A3_1b = nn.Sequential(
            nn.Conv2d(768, 1024, 3, padding=1),
            nn.ReLU(True),
        )
        self.SPG_A3_2b = nn.Sequential(
            nn.Conv2d(1024, 1024, 3, padding=1),
            nn.ReLU(True),
        )
        self.SPG_A4 = nn.Conv2d(1024, num_classes, 1, padding=0)
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        initialize_weights(self.modules(), init_mode='xavier')

    def forward(self, x, labels=None, return_cam=False):
        batch_size = x.shape[0]

        x = self.Conv2d_1a_3x3(x)
        x = self.Conv2d_2a_3x3(x)
        x = self.Conv2d_2b_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Conv2d_3b_1x1(x)
        x = self.Conv2d_4a_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Mixed_5b(x)
        x = self.Mixed_5c(x)
        x = self.Mixed_5d(x)
        x = self.ADL_5d(x)
        if not self.large_feature_map:
            x = F.max_pool2d(x, kernel_size=3, stride=2, ceil_mode=True)
        x = self.Mixed_6a(x)
        x = self.Mixed_6b(x)
        x = self.Mixed_6c(x)
        x = self.Mixed_6d(x)
        x = self.Mixed_6e(x)
        x = self.ADL_6e(x)

        x = self.SPG_A3_1b(x)
        x = self.SPG_A3_2b(x)
        x = self.ADL_A3_2b(x)
        x = self.SPG_A4(x)

        logits = self.avgpool(x)
        logits = logits.view(x.shape[0:2])

        if return_cam:
            feature_map = x.clone().detach()
            cams = feature_map[range(batch_size), labels]
            return cams
        return {'logits': logits}

class InceptionMyModel46(nn.Module):
    def __init__(self, num_classes=1000, large_feature_map=False, **kwargs):
        super(InceptionMyModel46, self).__init__()
        self.large_feature_map = large_feature_map

        self.Conv2d_1a_3x3 = BasicConv2d(3, 32, 3, stride=2, padding=1)
        self.Conv2d_2a_3x3 = BasicConv2d(32, 32, 3)
        self.Conv2d_2b_3x3 = BasicConv2d(32, 64, 3, padding=1)
        self.Conv2d_3b_1x1 = BasicConv2d(64, 80, 1)
        self.Conv2d_4a_3x3 = BasicConv2d(80, 192, 3)
        self.Mixed_5b = InceptionA(192, pool_features=32)
        self.Mixed_5c = InceptionA(256, pool_features=64)
        self.Mixed_5d = InceptionA(288, pool_features=64)
        self.Mixed_6a = InceptionB(288, kernel_size=3, stride=1, padding=1)
        self.Mixed_6b = InceptionC(768, channels_7x7=128)
        self.Mixed_6c = InceptionC(768, channels_7x7=160)
        self.Mixed_6d = InceptionC(768, channels_7x7=160)
        self.Mixed_6e = InceptionC(768, channels_7x7=192)

        self.conv6 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        self.conv7 = nn.Conv2d(1024, num_classes, kernel_size=1)
        self.conv8 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        self.conv9 = nn.Conv2d(1024, num_classes, kernel_size=1)
        self.conv10 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        self.conv11 = nn.Conv2d(1024, num_classes, kernel_size=1)
        # self.conv12 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        # self.conv13 = nn.Conv2d(1024, num_classes, kernel_size=1)
        # self.conv12 = nn.Conv2d(512, 1024, kernel_size=3, padding=1)
        # self.conv13 = nn.Conv2d(1024, num_classes, kernel_size=1)
        self.mymod2 = MyModel2()
        self.relu = nn.ReLU(inplace=False)
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        initialize_weights(self.modules(), init_mode='xavier')

    def features(self, x):
        x = self.Conv2d_1a_3x3(x)
        x = self.Conv2d_2a_3x3(x)
        x = self.Conv2d_2b_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Conv2d_3b_1x1(x)
        x = self.Conv2d_4a_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Mixed_5b(x)
        x = self.Mixed_5c(x)
        x = self.Mixed_5d(x)
        x = self.mymod2(x)
        if not self.large_feature_map:
            x = F.max_pool2d(x, kernel_size=3, stride=2, ceil_mode=True)
        x = self.Mixed_6a(x)
        x = self.Mixed_6b(x)
        x = self.Mixed_6c(x)
        x = self.Mixed_6d(x)
        x = self.Mixed_6e(x)
        x = self.mymod2(x)
        return x

    def forward(self, x, labels=None, return_cam=False):
        batch_size = x.shape[0]

        x1 = self.features(x)
        x1 = self.conv6(x1)
        x1 = self.relu(x1)
        x1 = self.mymod2(x1)
        x1 = self.conv7(x1)
        x1 = self.relu(x1)

        x2 = self.features(x)
        x2 = self.conv8(x2)
        x2 = self.relu(x2)
        x2 = self.conv9(x2)
        x2 = self.relu(x2)

        x3 = self.features(x)
        x3 = self.conv10(x3)
        x3 = self.relu(x3)
        x3 = self.conv11(x3)
        x3 = self.relu(x3)

        # x4 = self.features(x)
        # x4 = self.conv12(x4)
        # x4 = self.relu(x4)
        # x4 = self.conv13(x4)
        # x4 = self.relu(x4)

        x = torch.max(x1, x2)
        x = torch.max(x, x3)
        # x = torch.max(x, x4)

        if return_cam:
            x = x1.detach().clone()
            x = x + x2.detach().clone()
            x = x + x3.detach().clone()
            # x = x + x4.detach().clone()
            x = normalize_tensor(x.detach().clone())
            x = x[range(batch_size), labels]
            return x

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        return {'logits': x}

class InceptionMyModel47(nn.Module):
    def __init__(self, num_classes=1000, large_feature_map=False, **kwargs):
        super(InceptionMyModel47, self).__init__()
        self.large_feature_map = large_feature_map

        self.Conv2d_1a_3x3 = BasicConv2d(3, 32, 3, stride=2, padding=1)
        self.Conv2d_2a_3x3 = BasicConv2d(32, 32, 3)
        self.Conv2d_2b_3x3 = BasicConv2d(32, 64, 3, padding=1)
        self.Conv2d_3b_1x1 = BasicConv2d(64, 80, 1)
        self.Conv2d_4a_3x3 = BasicConv2d(80, 192, 3)
        self.Mixed_5b = InceptionA(192, pool_features=32)
        self.Mixed_5c = InceptionA(256, pool_features=64)
        self.Mixed_5d = InceptionA(288, pool_features=64)
        self.Mixed_6a = InceptionB(288, kernel_size=3, stride=1, padding=1)
        self.Mixed_6b = InceptionC(768, channels_7x7=128)
        self.Mixed_6c = InceptionC(768, channels_7x7=160)
        self.Mixed_6d = InceptionC(768, channels_7x7=160)
        self.Mixed_6e = InceptionC(768, channels_7x7=192)

        self.conv6 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        self.conv7 = nn.Conv2d(1024, num_classes, kernel_size=1)
        self.conv8 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        self.conv9 = nn.Conv2d(1024, num_classes, kernel_size=1)
        self.conv10 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        self.conv11 = nn.Conv2d(1024, num_classes, kernel_size=1)
        self.conv12 = nn.Conv2d(768, 1024, kernel_size=3, padding=1)
        self.conv13 = nn.Conv2d(1024, num_classes, kernel_size=1)
        # self.conv12 = nn.Conv2d(512, 1024, kernel_size=3, padding=1)
        # self.conv13 = nn.Conv2d(1024, num_classes, kernel_size=1)
        self.mymod2 = MyModel2()
        self.relu = nn.ReLU(inplace=False)
        self.avgpool = nn.AdaptiveAvgPool2d(1)

        initialize_weights(self.modules(), init_mode='xavier')

    def features(self, x):
        x = self.Conv2d_1a_3x3(x)
        x = self.Conv2d_2a_3x3(x)
        x = self.Conv2d_2b_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Conv2d_3b_1x1(x)
        x = self.Conv2d_4a_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1, ceil_mode=True)
        x = self.Mixed_5b(x)
        x = self.Mixed_5c(x)
        x = self.Mixed_5d(x)
        x = self.mymod2(x)
        if not self.large_feature_map:
            x = F.max_pool2d(x, kernel_size=3, stride=2, ceil_mode=True)
        x = self.Mixed_6a(x)
        x = self.Mixed_6b(x)
        x = self.Mixed_6c(x)
        x = self.Mixed_6d(x)
        x = self.Mixed_6e(x)
        x = self.mymod2(x)
        return x

    def forward(self, x, labels=None, return_cam=False):
        batch_size = x.shape[0]

        x1 = self.features(x)
        x1 = self.conv6(x1)
        x1 = self.relu(x1)
        x1 = self.mymod2(x1)
        x1 = self.conv7(x1)
        x1 = self.relu(x1)

        x2 = self.features(x)
        x2 = self.conv8(x2)
        x2 = self.relu(x2)
        x2 = self.conv9(x2)
        x2 = self.relu(x2)

        x3 = self.features(x)
        x3 = self.conv10(x3)
        x3 = self.relu(x3)
        x3 = self.conv11(x3)
        x3 = self.relu(x3)

        x4 = self.features(x)
        x4 = self.conv12(x4)
        x4 = self.relu(x4)
        x4 = self.conv13(x4)
        x4 = self.relu(x4)

        x = torch.max(x1, x2)
        x = torch.max(x, x3)
        x = torch.max(x, x4)

        if return_cam:
            x = x1.detach().clone()
            x = x + x2.detach().clone()
            x = x + x3.detach().clone()
            x = x + x4.detach().clone()
            x = normalize_tensor(x.detach().clone())
            x = x[range(batch_size), labels]
            return x

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        return {'logits': x}
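Both models fuse their parallel classifier branches with chained `torch.max` calls, i.e. an element-wise maximum over the branch score maps. A pure-Python stand-in for that fusion (`elementwise_max` is an illustrative name; the real code operates on tensors):

```python
def elementwise_max(*score_maps):
    # Position-wise maximum across any number of equally sized branches,
    # mirroring torch.max(torch.max(x1, x2), x3) above.
    return [max(vals) for vals in zip(*score_maps)]

print(elementwise_max([1, 5, 2], [3, 0, 2], [2, 2, 9]))  # [3, 5, 9]
```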

def load_pretrained_model(model, path=None):
    if path:
        state_dict = torch.load(
            os.path.join(path, 'inception_v3.pth'))
    else:
        state_dict = load_url(model_urls['inception_v3_google'],
                              progress=True)
    remove_layer(state_dict, 'Mixed_7')
    remove_layer(state_dict, 'AuxLogits')
    remove_layer(state_dict, 'fc.')
    model.load_state_dict(state_dict, strict=False)
    return model


def inception_v3(architecture_type, pretrained=False, pretrained_path=None,
                 **kwargs):
    model = {'cam': InceptionCam,
             'acol': InceptionAcol,
             'spg': InceptionSpg,
             'adl': InceptionAdl,
             'mymodel46': InceptionMyModel46,
             'mymodel47': InceptionMyModel47}[architecture_type](**kwargs)
    if pretrained:
        model = load_pretrained_model(model, pretrained_path)
    return model
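The `inception_v3` factory dispatches on a dict mapping architecture names to classes, raising a bare `KeyError` for unknown names. A standalone sketch of the same registry-dispatch pattern with a friendlier error (`make_model` and `DummyCam` are illustrative names, not part of this codebase):

```python
def make_model(architecture_type, registry, **kwargs):
    # Same dict-based dispatch as inception_v3 above, with an explicit
    # error message instead of a bare KeyError.
    try:
        cls = registry[architecture_type]
    except KeyError:
        raise ValueError("unknown architecture: %r" % architecture_type)
    return cls(**kwargs)

class DummyCam:
    def __init__(self, num_classes=1000):
        self.num_classes = num_classes

model = make_model('cam', {'cam': DummyCam}, num_classes=10)
print(model.num_classes)  # 10
```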

# --- diverse/fields/__init__.py (sakkada/django-diverse) ---
from .fields import DiverseFileField, DiverseImageField
from .widgets import DiverseFileInput, DiverseImageFileInput

# --- server/nst/nst/views.py (neural-style-transfer) ---
from django.http import HttpResponse


def homepage(request):
    return HttpResponse("<h1>Server :)</h1>")

# --- Bot/1_Find/Logic/_Top_Movers.py (ReedGraff/High-Low) ---
def Top_Movers(self):
    return 0

# --- tests/views/test_admin_review.py (personfinder) ---
# Copyright 2019 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the admin review page."""
import django
import django.http
import django.test
import model
import view_tests_base
class AdminReviewViewTests(view_tests_base.ViewTestsBase):
    """Tests the admin review view."""

    def setUp(self):
        super(AdminReviewViewTests, self).setUp()
        self.data_generator.repo()
        self.person = self.data_generator.person()
        self.login_as_moderator()

    def test_get_no_notes(self):
        """Tests GET requests when there are no notes."""
        resp = self.client.get('/haiti/admin/review', secure=True)
        self.assertEqual(len(resp.context['notes']), 0)
        self.assertEqual(resp.context['next_url'], None)
        self.assertEqual(resp.context['source_options_nav'][0][0], 'all')
        self.assertEqual(resp.context['source_options_nav'][0][1], None)
        self.assertEqual(
            resp.context['source_options_nav'][1][0],
            'haiti.personfinder.google.org')
        self.assertEqual(
            resp.context['source_options_nav'][1][1],
            '/haiti/admin/review?source=haiti.personfinder.google.org&'
            'status=all')
        self.assertEqual(
            resp.context['status_options_nav'][1][0], 'unspecified')
        self.assertEqual(
            resp.context['status_options_nav'][1][1],
            '/haiti/admin/review?source=all&status=unspecified')

    def test_get(self):
        """Tests GET requests when there are notes."""
        for i in range(5):
            self.data_generator.note(person_id=self.person.record_id)
        resp = self.client.get('/haiti/admin/review', secure=True)
        self.assertEqual(len(resp.context['notes']), 5)
        self.assertEqual(resp.context['next_url'], None)
        self.assertEqual(resp.context['source_options_nav'][0][0], 'all')
        self.assertEqual(resp.context['source_options_nav'][0][1], None)
        self.assertEqual(
            resp.context['source_options_nav'][1][0],
            'haiti.personfinder.google.org')
        self.assertEqual(
            resp.context['source_options_nav'][1][1],
            '/haiti/admin/review?source=haiti.personfinder.google.org&'
            'status=all')
        self.assertEqual(
            resp.context['status_options_nav'][1][0], 'unspecified')
        self.assertEqual(
            resp.context['status_options_nav'][1][1],
            '/haiti/admin/review?source=all&status=unspecified')

    def test_get_specified_status(self):
        for i in range(5):
            self.data_generator.note(person_id=self.person.record_id)
        for i in range(5):
            self.data_generator.note(
                person_id=self.person.record_id,
                status='is_note_author')
        resp = self.client.get(
            '/haiti/admin/review?status=is_note_author', secure=True)
        self.assertEqual(len(resp.context['notes']), 5)
        self.assertEqual(resp.context['next_url'], None)
        self.assertEqual(resp.context['source_options_nav'][0][0], 'all')
        self.assertEqual(resp.context['source_options_nav'][0][1], None)
        self.assertEqual(
            resp.context['source_options_nav'][1][0],
            'haiti.personfinder.google.org')
        self.assertEqual(
            resp.context['source_options_nav'][1][1],
            '/haiti/admin/review?source=haiti.personfinder.google.org&'
            'status=is_note_author')
        self.assertEqual(
            resp.context['status_options_nav'][1][0], 'unspecified')
        self.assertEqual(
            resp.context['status_options_nav'][1][1],
            '/haiti/admin/review?source=all&status=unspecified')

    def test_get_specified_source(self):
        other_source_person = self.data_generator.person(
            record_id='haiti.example.org/Person.1')
        for i in range(5):
            self.data_generator.note(person_id=self.person.record_id)
        for i in range(5):
            self.data_generator.note(
                person_id=other_source_person.record_id)
        resp = self.client.get(
            '/haiti/admin/review?source=haiti.example.org', secure=True)
        self.assertEqual(len(resp.context['notes']), 5)
        self.assertEqual(resp.context['next_url'], None)
        self.assertEqual(resp.context['source_options_nav'][0][0], 'all')
        self.assertEqual(
            resp.context['source_options_nav'][0][1],
            '/haiti/admin/review?source=all&status=all')
        self.assertEqual(
            resp.context['source_options_nav'][1][0],
            'haiti.personfinder.google.org')
        self.assertEqual(
            resp.context['source_options_nav'][1][1],
            '/haiti/admin/review?source=haiti.personfinder.google.org&'
            'status=all')
        self.assertEqual(
            resp.context['status_options_nav'][1][0], 'unspecified')
        self.assertEqual(
            resp.context['status_options_nav'][1][1],
            '/haiti/admin/review?source=haiti.example.org&status=unspecified')

    def test_accept_note(self):
        """Tests POST requests to accept a note."""
        note = self.data_generator.note(person_id=self.person.record_id)
        get_doc = self.to_doc(self.client.get(
            '/haiti/admin/review/', secure=True))
        xsrf_token = get_doc.cssselect_one('input[name="xsrf_token"]').get(
            'value')
        post_resp = self.client.post('/haiti/admin/review/', {
            'note.%s' % note.record_id: 'accept',
            'xsrf_token': xsrf_token,
        }, secure=True)
        # Check that the user's redirected to the repo's main admin page.
        self.assertIsInstance(post_resp, django.http.HttpResponseRedirect)
        self.assertEqual(post_resp.url, '/haiti/admin/review/')
        # Reload the Note from Datastore.
        note = model.Note.get('haiti', note.record_id)
        self.assertIs(note.reviewed, True)
        self.assertIs(note.hidden, False)

    def test_flag_note(self):
        """Tests POST requests to flag a note."""
        note = self.data_generator.note(person_id=self.person.record_id)
        get_doc = self.to_doc(self.client.get(
            '/haiti/admin/review/', secure=True))
        xsrf_token = get_doc.cssselect_one('input[name="xsrf_token"]').get(
            'value')
        post_resp = self.client.post('/haiti/admin/review/', {
            'note.%s' % note.record_id: 'flag',
            'xsrf_token': xsrf_token,
        }, secure=True)
        # Check that the user's redirected to the repo's main admin page.
        self.assertIsInstance(post_resp, django.http.HttpResponseRedirect)
        self.assertEqual(post_resp.url, '/haiti/admin/review/')
        # Reload the Note from Datastore.
        note = model.Note.get('haiti', note.record_id)
        self.assertIs(note.reviewed, True)
        self.assertIs(note.hidden, True)

# --- test/pithy/parse/precedence.py (gwk/glossy) ---
#!/usr/bin/env python3
from pithy.parse import Adjacency, Atom, Infix, Left, Parser, Precedence, Right, Suffix, token_extract_text
from pithy.py.lex import lexer
from tolkien import Source
from utest import *

left = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Left(Infix('plus')),
            Left(Infix('star')),
        )),
    drop=('spaces',))

utest(('+', ('+', 'a', ('*', 'b', 'c')), 'd'), left.parse, 'expr', Source('', 'a + b * c + d'))
utest(('+', ('*', 'a', 'b'), ('*', ('*', 'c', 'd'), 'e')), left.parse, 'expr', Source('', 'a * b + c * d * e'))


right = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Right(Infix('plus')),
            Right(Infix('star')),
        )),
    drop=('spaces',))

utest(('+', 'a', ('+', ('*', 'b', 'c'), 'd')), right.parse, 'expr', Source('', 'a + b * c + d'))
utest(('+', ('*', 'a', 'b'), ('*', 'c', ('*', 'd', 'e'))), right.parse, 'expr', Source('', 'a * b + c * d * e'))


left_adj_dot = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Left(Adjacency()),
            Left(Infix('dot')),
        )),
    drop=('spaces',))

utest(((('.', 'a', 'b'), 'c'), 'd'), left_adj_dot.parse, 'expr', Source('', 'a.b c d'))
utest((('a', ('.', 'b', 'c')), 'd'), left_adj_dot.parse, 'expr', Source('', 'a b.c d'))


left_dot_adj = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Left(Infix('dot')),
            Left(Adjacency()),
        )),
    drop=('spaces',))

utest(('.', 'a', (('b', 'c'), 'd')), left_dot_adj.parse, 'expr', Source('', 'a . b c d'))
utest(('.', (('a', 'b'), 'c'), 'd'), left_dot_adj.parse, 'expr', Source('', 'a b c . d'))


right_adj_dot = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Right(Adjacency()),
            Right(Infix('dot')),
        )),
    drop=('spaces',))

utest((('.', 'a', 'b'), ('c', 'd')), right_adj_dot.parse, 'expr', Source('', 'a.b c d'))
utest(('a', (('.', 'b', 'c'), 'd')), right_adj_dot.parse, 'expr', Source('', 'a b.c d'))


right_dot_adj = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Right(Infix('dot')),
            Right(Adjacency()),
        )),
    drop=('spaces',))

utest(('.', 'a', ('b', ('c', 'd'))), right_dot_adj.parse, 'expr', Source('', 'a . b c d'))
utest(('.', ('a', ('b', 'c')), 'd'), right_dot_adj.parse, 'expr', Source('', 'a b c . d'))


right_adj_qmark = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Right(Adjacency()),
            Right(Suffix('qmark')),
        )),
    drop=('spaces',))

utest(('a', (('?', 'b'), 'c')), right_adj_qmark.parse, 'expr', Source('', 'a b? c'))


right_qmark_adj = Parser(lexer, dict(
        name=Atom('name', transform=token_extract_text),
        expr=Precedence(
            ('name',),
            Right(Suffix('qmark')),
            Right(Adjacency()),
        )),
    drop=('spaces',))
utest(('?', ('a', 'b')), right_qmark_adj.parse, 'expr', Source('', 'a b ?'))
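The `Left`/`Right` wrappers above control how a run of same-precedence operators nests: left-associative parses group toward the head, right-associative toward the tail. A tiny standalone sketch of the two groupings (`fold_left`/`fold_right` are illustrative names, independent of the pithy parser):

```python
from functools import reduce

def fold_left(op, items):
    # 'a + b + c' parsed left-associatively: ((a + b) + c)
    return reduce(lambda lhs, rhs: (op, lhs, rhs), items)

def fold_right(op, items):
    # 'a + b + c' parsed right-associatively: (a + (b + c))
    return reduce(lambda rhs, lhs: (op, lhs, rhs), reversed(items))

print(fold_left('+', ['a', 'b', 'c']))   # ('+', ('+', 'a', 'b'), 'c')
print(fold_right('+', ['a', 'b', 'c']))  # ('+', 'a', ('+', 'b', 'c'))
```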

# --- akusherstvo_parser/utils.py (ilkoretskiy/Parsers) ---

def download_test_data():
    pass


def main():
    download_test_data()


if __name__ == "__main__":
    main()

# --- src/xsd_training/models/__init__.py (xSACdb) ---
from .group import *
from .lesson import *
from .qualification import *
from .sdc import *

# --- tests/test_events.py (mcloud) ---
import inject
from mcloud.events import EventBus
import pytest
from twisted.internet import reactor
import txredisapi as redis

@pytest.inlineCallbacks
def test_events():
    inject.clear()
    rc = yield redis.Connection(dbid=2)
    yield rc.flushdb()
    eb = EventBus(rc)
    yield eb.connect()

    test_events.test = None

    def boo(pattern, message):
        assert message == 'hoho'
        assert pattern == 'foo'
        test_events.test = message

    eb.on('foo', boo)
    yield eb.fire_event('foo', 'hoho')

    def check_results():
        assert test_events.test == 'hoho'

    reactor.callLater(50, check_results)


@pytest.inlineCallbacks
def test_events_pattern():
    inject.clear()
    rc = yield redis.Connection(dbid=2)
    yield rc.flushdb()
    eb = EventBus(rc)
    yield eb.connect()

    test_events_pattern.test = None

    def boo(pattern, message):
        assert message == 'hoho'
        assert pattern == 'foo.baz'
        test_events_pattern.test = message

    eb.on('foo.*', boo)
    yield eb.fire_event('foo.baz', 'hoho')

    def check_results():
        assert test_events_pattern.test == 'hoho'

    reactor.callLater(50, check_results)


@pytest.inlineCallbacks
def test_events_pattern_wrong():
    inject.clear()
    rc = yield redis.Connection(dbid=2)
    yield rc.flushdb()
    eb = EventBus(rc)
    yield eb.connect()

    test_events_pattern_wrong.test = None

    def boo(pattern, message):
        assert message == 'hoho'
        assert pattern == 'foo.baz'
        test_events_pattern_wrong.test = message

    eb.on('bar.*', boo)
    yield eb.fire_event('foo.baz', 'hoho')

    def check_results():
        assert test_events_pattern_wrong.test is None

    reactor.callLater(50, check_results)
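These tests exercise an EventBus whose `on` accepts wildcard patterns and whose handlers receive the concrete event name that matched. A minimal synchronous in-memory stand-in for that behavior (`MiniEventBus` is an illustrative name; the real EventBus is asynchronous and Redis-backed):

```python
import fnmatch

class MiniEventBus:
    # Synchronous sketch of the pattern-matching pub/sub contract above.
    def __init__(self):
        self.handlers = []

    def on(self, pattern, handler):
        self.handlers.append((pattern, handler))

    def fire_event(self, name, message):
        # Deliver to every handler whose shell-style pattern matches the name.
        for pattern, handler in self.handlers:
            if fnmatch.fnmatch(name, pattern):
                handler(name, message)

seen = []
bus = MiniEventBus()
bus.on('foo.*', lambda name, msg: seen.append((name, msg)))
bus.fire_event('foo.baz', 'hoho')  # matches 'foo.*'
bus.fire_event('bar.baz', 'nope')  # no matching handler
print(seen)  # [('foo.baz', 'hoho')]
```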

# --- vrchatapi/__init__.py (vrchatapi/vrchatapi-python) ---
# flake8: noqa
"""
VRChat API Documentation
The version of the OpenAPI document: 1.6.7
Contact: me@ruby.js.org
Generated by: https://openapi-generator.tech
"""
__version__ = "1.0.0"
# import ApiClient
from vrchatapi.api_client import ApiClient
# import Configuration
from vrchatapi.configuration import Configuration
# import exceptions
from vrchatapi.exceptions import OpenApiException
from vrchatapi.exceptions import ApiAttributeError
from vrchatapi.exceptions import ApiTypeError
from vrchatapi.exceptions import ApiValueError
from vrchatapi.exceptions import ApiKeyError
from vrchatapi.exceptions import ApiException

# --- network/__init__.py (SebOh/arp_spoof) ---
from sys import platform
from .network_commands import NetworkCommands


def is_windows():
    return platform == "win32"


def is_linux():
    return platform == "linux" or platform == "linux2"

# --- dnd/nodes.py (tvarney/dndtools) ---
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from typing import Union
    import dnd.roll

PrecedenceValue = 100
PrecedencePower = 30
PrecedenceMulDiv = 20
PrecedenceAddSub = 10

def nodestr(node, parent_precedence: "int") -> "str":
    if node.precedence() < parent_precedence:
        return "({})".format(str(node))
    return str(node)
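`nodestr` only parenthesizes a child whose precedence is lower than its parent's, which is what keeps `a + b * c` free of redundant parentheses. A tiny self-contained sketch of that rule (`Num`, `Sum`, and `wrap` are illustrative stand-ins for the `Value`/`Add`/`nodestr` trio in this file):

```python
PRECEDENCE_ADD = 10
PRECEDENCE_MUL = 20

class Num:
    def __init__(self, v):
        self.v = v
    def precedence(self):
        return 100  # atoms never need parentheses
    def __str__(self):
        return str(self.v)

class Sum:
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
    def precedence(self):
        return PRECEDENCE_ADD
    def __str__(self):
        return "{} + {}".format(
            wrap(self.lhs, PRECEDENCE_ADD), wrap(self.rhs, PRECEDENCE_ADD))

def wrap(node, parent_precedence):
    # Parenthesize only when the child binds more loosely than the parent.
    if node.precedence() < parent_precedence:
        return "({})".format(node)
    return str(node)

print(wrap(Sum(Num(1), Num(2)), PRECEDENCE_MUL))  # (1 + 2)
print(wrap(Sum(Num(1), Num(2)), PRECEDENCE_ADD))  # 1 + 2
```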

class Value(object):
    __slots__ = ("value",)

    def __init__(self, value: "Union[float, int]") -> None:
        self.value = value

    def precedence(self) -> int:
        return PrecedenceValue

    def __call__(self) -> "Union[float, int]":
        return self.value

    def __repr__(self) -> "str":
        return "Value({})".format(self.value)

    def __str__(self) -> "str":
        return str(self.value)


class Dice(object):
    __slots__ = ("dice",)

    def __init__(self, value: "dnd.roll.Dice") -> None:
        self.dice = value

    def precedence(self) -> int:
        return PrecedenceValue

    def __call__(self) -> "int":
        return self.dice.roll().result

    def __repr__(self) -> "str":
        return repr(self.dice)

    def __str__(self) -> "str":
        return str(self.dice)

class Add(object):
    __slots__ = ("lhs", "rhs")

    def __init__(self, lhs, rhs) -> None:
        self.lhs = lhs
        self.rhs = rhs

    def precedence(self) -> int:
        return PrecedenceAddSub

    def __call__(self) -> "Union[float, int]":
        return self.lhs() + self.rhs()

    def __repr__(self) -> "str":
        return "Add({}, {})".format(repr(self.lhs), repr(self.rhs))

    def __str__(self) -> "str":
        return "{} + {}".format(
            nodestr(self.lhs, PrecedenceAddSub), nodestr(self.rhs, PrecedenceAddSub)
        )


class Subtract(object):
    __slots__ = ("lhs", "rhs")

    def __init__(self, lhs, rhs) -> None:
        self.lhs = lhs
        self.rhs = rhs

    def precedence(self) -> int:
        # Added: Subtract was the only node type without a precedence()
        # method, which would raise AttributeError inside nodestr().
        return PrecedenceAddSub

    def __call__(self) -> "Union[float, int]":
        return self.lhs() - self.rhs()

    def __repr__(self) -> "str":
        return "Subtract({}, {})".format(repr(self.lhs), repr(self.rhs))

    def __str__(self) -> "str":
        return "{} - {}".format(
            nodestr(self.lhs, PrecedenceAddSub), nodestr(self.rhs, PrecedenceAddSub)
        )
class Negative(object):
__slots__ = ("value",)
def __init__(self, value) -> None:
self.value = value
def precedence(self) -> int:
return PrecedenceValue
def __call__(self) -> "Union[float, int]":
return -(self.value())
def __repr__(self) -> "str":
return "Negative({})".format(repr(self.value))
def __str__(self) -> "str":
if self.value.precedence() == PrecedenceValue:
if type(self.value) is Dice:
return "-({})".format(self.value)
return "-{}".format(self.value)
return "-({})".format(self.value)
class Multiply(object):
__slots__ = ("lhs", "rhs")
def __init__(self, lhs, rhs) -> None:
self.lhs = lhs
self.rhs = rhs
def precedence(self) -> "int":
return PrecedenceMulDiv
def __call__(self) -> "Union[float, int]":
return self.lhs() * self.rhs()
def __repr__(self) -> "str":
return "Multiply({}, {})".format(repr(self.lhs), repr(self.rhs))
def __str__(self) -> "str":
return "{} * {}".format(
nodestr(self.lhs, PrecedenceMulDiv), nodestr(self.rhs, PrecedenceMulDiv)
)
class Divide(object):
__slots__ = ("lhs", "rhs")
def __init__(self, lhs, rhs) -> None:
self.lhs = lhs
self.rhs = rhs
def precedence(self) -> "int":
return PrecedenceMulDiv
def __call__(self) -> "Union[float, int]":
return self.lhs() / self.rhs()
def __repr__(self) -> "str":
return "Divide({}, {})".format(repr(self.lhs), repr(self.rhs))
def __str__(self) -> "str":
return "{} / {}".format(
nodestr(self.lhs, PrecedenceMulDiv), nodestr(self.rhs, PrecedenceMulDiv)
)
class FloorDiv(object):
__slots__ = ("lhs", "rhs")
def __init__(self, lhs, rhs) -> None:
self.lhs = lhs
self.rhs = rhs
def precedence(self) -> "int":
return PrecedenceMulDiv
def __call__(self) -> "int":
return self.lhs() // self.rhs()
def __repr__(self) -> "str":
return "FloorDiv({}, {})".format(repr(self.lhs), repr(self.rhs))
def __str__(self) -> "str":
return "{} // {}".format(
nodestr(self.lhs, PrecedenceMulDiv), nodestr(self.rhs, PrecedenceMulDiv)
)
class Power(object):
__slots__ = ("lhs", "rhs")
def __init__(self, lhs, rhs) -> None:
self.lhs = lhs
self.rhs = rhs
def precedence(self) -> "int":
return PrecedencePower
def __call__(self) -> "Union[int, float]":
return self.lhs() ** self.rhs()
def __repr__(self) -> "str":
return "Power({}, {})".format(repr(self.lhs), repr(self.rhs))
def __str__(self) -> "str":
return "{}**{}".format(
nodestr(self.lhs, PrecedencePower), nodestr(self.rhs, PrecedencePower)
)
class Modulo(object):
__slots__ = ("lhs", "rhs")
def __init__(self, lhs, rhs) -> None:
self.lhs = lhs
self.rhs = rhs
def precedence(self) -> "int":
return PrecedenceMulDiv
def __call__(self) -> "Union[int, float]":
return self.lhs() % self.rhs()
def __repr__(self) -> "str":
return "Modulo({}, {})".format(repr(self.lhs), repr(self.rhs))
def __str__(self) -> "str":
return "{} % {}".format(
nodestr(self.lhs, PrecedenceMulDiv), nodestr(self.rhs, PrecedenceMulDiv)
)
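A minimal, self-contained sketch of how nodes like these compose and render. It uses tiny hypothetical stand-ins (`V`, `AddOp`, `MulOp`) rather than the module's real classes so it runs on its own, but it follows the same precedence-driven parenthesization scheme:

```python
# Hypothetical stand-ins mirroring the node design above (not the real classes).
PREC_VALUE, PREC_MULDIV, PREC_ADDSUB = 100, 20, 10

class V:
    def __init__(self, v):
        self.v = v
    def precedence(self):
        return PREC_VALUE
    def __call__(self):
        return self.v
    def __str__(self):
        return str(self.v)

class BinOp:
    prec, sym = PREC_ADDSUB, "+"
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
    def precedence(self):
        return self.prec
    def __str__(self):
        # Parenthesize children whose precedence is lower than this node's
        def s(n):
            return "({})".format(n) if n.precedence() < self.prec else str(n)
        return "{} {} {}".format(s(self.lhs), self.sym, s(self.rhs))

class AddOp(BinOp):
    def __call__(self):
        return self.lhs() + self.rhs()

class MulOp(BinOp):
    prec, sym = PREC_MULDIV, "*"
    def __call__(self):
        return self.lhs() * self.rhs()

expr = MulOp(AddOp(V(1), V(2)), V(4))
print(str(expr))  # (1 + 2) * 4
print(expr())     # 12
```

Calling a node evaluates the whole subtree, while `str()` reproduces the expression with only the parentheses the precedence rules require.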
| 24.444444 | 84 | 0.568182 | 614 | 5,500 | 4.763844 | 0.079805 | 0.081368 | 0.08 | 0.047863 | 0.781197 | 0.774359 | 0.759316 | 0.710085 | 0.688547 | 0.688547 | 0 | 0.002227 | 0.265273 | 5,500 | 224 | 85 | 24.553571 | 0.721604 | 0 | 0 | 0.556291 | 0 | 0 | 0.088364 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.324503 | false | 0 | 0.019868 | 0.245033 | 0.754967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
3480da2b26d77eea7faebd9c8a08816d793a29b6 | 330 | py | Python | appannie/__init__.py | Julian-O/appannie | b5e053c43a9fbabda2d84a8992c2efb2e76c8aef | [
"MIT"
] | 21 | 2017-07-08T06:07:52.000Z | 2022-02-14T07:58:11.000Z | appannie/__init__.py | Julian-O/appannie | b5e053c43a9fbabda2d84a8992c2efb2e76c8aef | [
"MIT"
] | 2 | 2018-03-17T16:32:43.000Z | 2018-03-20T14:02:26.000Z | appannie/__init__.py | Julian-O/appannie | b5e053c43a9fbabda2d84a8992c2efb2e76c8aef | [
"MIT"
] | 22 | 2017-10-13T04:00:34.000Z | 2022-02-05T11:00:40.000Z | from __future__ import absolute_import
from .version import __version__
from .exception import (AppAnnieException, AppAnnieBadRequestException,
AppAnnieNotFoundException,
AppAnnieUnauthorizedException,
AppAnnieRateLimitException)
from .api import AppAnnie
| 33 | 71 | 0.687879 | 21 | 330 | 10.380952 | 0.619048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.284848 | 330 | 9 | 72 | 36.666667 | 0.923729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
caabd8ee309ff2f118ff034e7da475a708545c96 | 91 | py | Python | server/settings.py | computmaxer/marantz-rest | 6467c930dd909da784ddd5c72a47f75c5724c23c | [
"Apache-2.0"
] | 2 | 2020-06-05T06:18:01.000Z | 2020-06-05T14:17:15.000Z | server/settings.py | computmaxer/marantz-rest | 6467c930dd909da784ddd5c72a47f75c5724c23c | [
"Apache-2.0"
] | null | null | null | server/settings.py | computmaxer/marantz-rest | 6467c930dd909da784ddd5c72a47f75c5724c23c | [
"Apache-2.0"
] | null | null | null | BASE_API = '/api%s'
MARANTZ_URL = 'http://172.16.2.4%s'
XBOX_URL = 'http://172.16.2.11%s'
| 18.2 | 35 | 0.615385 | 20 | 91 | 2.65 | 0.6 | 0.264151 | 0.377358 | 0.45283 | 0.490566 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 0.10989 | 91 | 4 | 36 | 22.75 | 0.469136 | 0 | 0 | 0 | 0 | 0 | 0.494505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
caddd8abf36896224e4c96180d09516d9f401804 | 185 | py | Python | pysce/__init__.py | dchary/pysce | 183f43ef24a80d4a3c10afe8ee553ae58087dd9a | [
"MIT"
] | null | null | null | pysce/__init__.py | dchary/pysce | 183f43ef24a80d4a3c10afe8ee553ae58087dd9a | [
"MIT"
] | null | null | null | pysce/__init__.py | dchary/pysce | 183f43ef24a80d4a3c10afe8ee553ae58087dd9a | [
"MIT"
] | null | null | null | # Load metadata for package
from ._metadata import __version__, __author__, __email__
from ._metadata import __date__, __institution__, __laboratory__
from ._pysce import score_entropy | 37 | 64 | 0.843243 | 21 | 185 | 6.095238 | 0.714286 | 0.1875 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113514 | 185 | 5 | 65 | 37 | 0.780488 | 0.135135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b06d39349113581abaf8221be5fdf4eba7edf98 | 156 | py | Python | python/ML/Core/__init__.py | valiro21/ML | 33475c4800a38ffba6c15eac3db49763de3400e5 | [
"MIT"
] | 1 | 2017-08-18T12:22:15.000Z | 2017-08-18T12:22:15.000Z | python/ML/Core/__init__.py | valiro21/ML | 33475c4800a38ffba6c15eac3db49763de3400e5 | [
"MIT"
] | 2 | 2017-08-17T22:12:03.000Z | 2017-08-19T17:22:56.000Z | python/ML/Core/__init__.py | valiro21/ML | 33475c4800a38ffba6c15eac3db49763de3400e5 | [
"MIT"
] | null | null | null | from ML.Core.Functions import Functions, FunctionsDerivative
from ML.Core.FeedforwardNeuralNetwork.FeedforwardNeuralNetwork import FeedforwardNeuralNetwork
| 52 | 94 | 0.903846 | 14 | 156 | 10.071429 | 0.5 | 0.085106 | 0.141844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057692 | 156 | 2 | 95 | 78 | 0.959184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b2ca36d0a05baf7fe12d0ef5c1ec44957d3fa7d | 25 | py | Python | app/rooms/examples/eg002_create_room_with_template/__init__.py | olegliubimov/code-examples-python | 7af8c58138a9dd0f3b0be12eff1768ae23e449d3 | [
"MIT"
] | 21 | 2020-05-13T21:08:44.000Z | 2022-02-18T01:32:16.000Z | app/rooms/examples/eg002_create_room_with_template/__init__.py | olegliubimov/code-examples-python | 7af8c58138a9dd0f3b0be12eff1768ae23e449d3 | [
"MIT"
] | 8 | 2020-11-23T09:28:04.000Z | 2022-02-02T12:04:08.000Z | app/rooms/examples/eg002_create_room_with_template/__init__.py | olegliubimov/code-examples-python | 7af8c58138a9dd0f3b0be12eff1768ae23e449d3 | [
"MIT"
] | 26 | 2020-05-12T22:20:01.000Z | 2022-03-09T10:57:27.000Z | from .views import eg002
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.16 | 25 | 1 | 25 | 25 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b311d41f50c0516e938defa02b363d0a114f23a | 18,904 | py | Python | crichtonweb/prodmgmt/migrations/0001_initial.py | bpluly/crichton | a2fa09c181ba1e44ee1aae7a57769e1778de7f3a | [
"Apache-2.0"
] | null | null | null | crichtonweb/prodmgmt/migrations/0001_initial.py | bpluly/crichton | a2fa09c181ba1e44ee1aae7a57769e1778de7f3a | [
"Apache-2.0"
] | null | null | null | crichtonweb/prodmgmt/migrations/0001_initial.py | bpluly/crichton | a2fa09c181ba1e44ee1aae7a57769e1778de7f3a | [
"Apache-2.0"
] | null | null | null | # Crichton, Admirable Source Configuration Management
# Copyright 2012 British Broadcasting Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'ApplicationAuditLogEntry'
db.create_table('prodmgmt_applicationauditlogentry', (
('id', self.gf('django.db.models.fields.IntegerField')(db_index=True, blank=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=128, db_index=True)),
('display_name', self.gf('django.db.models.fields.CharField')(max_length=200, blank=True)),
('product', self.gf('django.db.models.fields.related.ForeignKey')(related_name='_auditlog_applications', to=orm['prodmgmt.Product'])),
('deleted', self.gf('django.db.models.fields.BooleanField')(default=False)),
('action_id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('action_date', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)),
('action_user', self.gf('audit_log.models.fields.LastUserField')(related_name='_application_audit_log_entry')),
('action_type', self.gf('django.db.models.fields.CharField')(max_length=1)),
))
db.send_create_signal('prodmgmt', ['ApplicationAuditLogEntry'])
# Adding model 'Application'
db.create_table('prodmgmt_application', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=128)),
('display_name', self.gf('django.db.models.fields.CharField')(max_length=200, blank=True)),
('product', self.gf('django.db.models.fields.related.ForeignKey')(related_name='applications', to=orm['prodmgmt.Product'])),
('deleted', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('prodmgmt', ['Application'])
# Adding model 'PersonAuditLogEntry'
db.create_table('prodmgmt_personauditlogentry', (
('id', self.gf('django.db.models.fields.IntegerField')(db_index=True, blank=True)),
('username', self.gf('django.db.models.fields.CharField')(max_length=30, db_index=True)),
('first_name', self.gf('django.db.models.fields.CharField')(max_length=30, blank=True)),
('last_name', self.gf('django.db.models.fields.CharField')(max_length=30, blank=True)),
('email', self.gf('django.db.models.fields.EmailField')(max_length=75, blank=True)),
('distinguished_name', self.gf('django.db.models.fields.CharField')(max_length=1024, blank=True)),
('deleted', self.gf('django.db.models.fields.BooleanField')(default=False)),
('action_id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('action_date', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)),
('action_user', self.gf('audit_log.models.fields.LastUserField')(related_name='_person_audit_log_entry')),
('action_type', self.gf('django.db.models.fields.CharField')(max_length=1)),
))
db.send_create_signal('prodmgmt', ['PersonAuditLogEntry'])
# Adding model 'Person'
db.create_table('prodmgmt_person', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('username', self.gf('django.db.models.fields.CharField')(unique=True, max_length=30)),
('first_name', self.gf('django.db.models.fields.CharField')(max_length=30, blank=True)),
('last_name', self.gf('django.db.models.fields.CharField')(max_length=30, blank=True)),
('email', self.gf('django.db.models.fields.EmailField')(max_length=75, blank=True)),
('distinguished_name', self.gf('django.db.models.fields.CharField')(max_length=1024, blank=True)),
('deleted', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('prodmgmt', ['Person'])
# Adding model 'ProductAuditLogEntry'
db.create_table('prodmgmt_productauditlogentry', (
('id', self.gf('django.db.models.fields.IntegerField')(db_index=True, blank=True)),
('name', self.gf('django.db.models.fields.SlugField')(max_length=128, db_index=True)),
('display_name', self.gf('django.db.models.fields.CharField')(max_length=200, blank=True)),
('owner', self.gf('django.db.models.fields.related.ForeignKey')(related_name='_auditlog_owned_products', to=orm['prodmgmt.Person'])),
('pipeline_issue', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='_auditlog_+', null=True, to=orm['issue.Issue'])),
('deleted', self.gf('django.db.models.fields.BooleanField')(default=False)),
('action_id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('action_date', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)),
('action_user', self.gf('audit_log.models.fields.LastUserField')(related_name='_product_audit_log_entry')),
('action_type', self.gf('django.db.models.fields.CharField')(max_length=1)),
))
db.send_create_signal('prodmgmt', ['ProductAuditLogEntry'])
# Adding model 'Product'
db.create_table('prodmgmt_product', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.SlugField')(unique=True, max_length=128, db_index=True)),
('display_name', self.gf('django.db.models.fields.CharField')(max_length=200, blank=True)),
('owner', self.gf('django.db.models.fields.related.ForeignKey')(related_name='owned_products', to=orm['prodmgmt.Person'])),
('pipeline_issue', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='+', null=True, to=orm['issue.Issue'])),
('deleted', self.gf('django.db.models.fields.BooleanField')(default=False)),
))
db.send_create_signal('prodmgmt', ['Product'])
def backwards(self, orm):
# Deleting model 'ApplicationAuditLogEntry'
db.delete_table('prodmgmt_applicationauditlogentry')
# Deleting model 'Application'
db.delete_table('prodmgmt_application')
# Deleting model 'PersonAuditLogEntry'
db.delete_table('prodmgmt_personauditlogentry')
# Deleting model 'Person'
db.delete_table('prodmgmt_person')
# Deleting model 'ProductAuditLogEntry'
db.delete_table('prodmgmt_productauditlogentry')
# Deleting model 'Product'
db.delete_table('prodmgmt_product')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'issue.issue': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('name', 'project'),)", 'object_name': 'Issue'},
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.SlugField', [], {'max_length': '128', 'db_index': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'issues'", 'to': "orm['issue.IssueTrackerProject']"})
},
'issue.issuetracker': {
'Meta': {'ordering': "('name',)", 'object_name': 'IssueTracker'},
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'display_name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'issue_url_pattern': ('django.db.models.fields.URLField', [], {'max_length': '255', 'blank': 'True'}),
'name': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '128', 'db_index': 'True'}),
'tracker_type': ('django.db.models.fields.CharField', [], {'default': "'jira'", 'max_length': '12'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '255', 'blank': 'True'})
},
'issue.issuetrackerproject': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('name', 'issue_tracker'),)", 'object_name': 'IssueTrackerProject'},
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'display_name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'issue_tracker': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'projects'", 'to': "orm['issue.IssueTracker']"}),
'name': ('django.db.models.fields.SlugField', [], {'max_length': '128', 'db_index': 'True'})
},
'prodmgmt.application': {
'Meta': {'ordering': "('name',)", 'object_name': 'Application'},
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'display_name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '128'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'applications'", 'to': "orm['prodmgmt.Product']"})
},
'prodmgmt.applicationauditlogentry': {
'Meta': {'ordering': "('-action_date',)", 'object_name': 'ApplicationAuditLogEntry'},
'action_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'action_id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'action_type': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'action_user': ('audit_log.models.fields.LastUserField', [], {'related_name': "'_application_audit_log_entry'"}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'display_name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128', 'db_index': 'True'}),
'product': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'_auditlog_applications'", 'to': "orm['prodmgmt.Product']"})
},
'prodmgmt.person': {
'Meta': {'ordering': "('username',)", 'object_name': 'Person'},
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'distinguished_name': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'prodmgmt.personauditlogentry': {
'Meta': {'ordering': "('-action_date',)", 'object_name': 'PersonAuditLogEntry'},
'action_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'action_id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'action_type': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'action_user': ('audit_log.models.fields.LastUserField', [], {'related_name': "'_person_audit_log_entry'"}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'distinguished_name': ('django.db.models.fields.CharField', [], {'max_length': '1024', 'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True', 'blank': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'max_length': '30', 'db_index': 'True'})
},
'prodmgmt.product': {
'Meta': {'ordering': "('name',)", 'object_name': 'Product'},
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'display_name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '128', 'db_index': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'owned_products'", 'to': "orm['prodmgmt.Person']"}),
'pipeline_issue': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'+'", 'null': 'True', 'to': "orm['issue.Issue']"})
},
'prodmgmt.productauditlogentry': {
'Meta': {'ordering': "('-action_date',)", 'object_name': 'ProductAuditLogEntry'},
'action_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'action_id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'action_type': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'action_user': ('audit_log.models.fields.LastUserField', [], {'related_name': "'_product_audit_log_entry'"}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'display_name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.SlugField', [], {'max_length': '128', 'db_index': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'_auditlog_owned_products'", 'to': "orm['prodmgmt.Person']"}),
'pipeline_issue': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'_auditlog_+'", 'null': 'True', 'to': "orm['issue.Issue']"})
}
}
complete_apps = ['prodmgmt']
| 73.271318 | 182 | 0.605163 | 2,028 | 18,904 | 5.494576 | 0.096154 | 0.14646 | 0.163331 | 0.23333 | 0.783093 | 0.773041 | 0.747106 | 0.739208 | 0.70726 | 0.684555 | 0 | 0.010424 | 0.177899 | 18,904 | 257 | 183 | 73.55642 | 0.706583 | 0.054116 | 0 | 0.444976 | 0 | 0 | 0.544339 | 0.325584 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009569 | false | 0.004785 | 0.019139 | 0 | 0.043062 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1b7ec836af633bfaa2a21b98da42344ec352d840 | 19,896 | py | Python | Bioinformatics I/Week II/ApproximatePatternMatching.py | egeulgen/Bioinformatics_Specialization | 38581b471a54c41d780d9eeb26a7033eb57f3a01 | [
"MIT"
] | 3 | 2021-04-03T23:46:42.000Z | 2021-08-08T01:19:32.000Z | Bioinformatics I/Week II/ApproximatePatternMatching.py | egeulgen/Bioinformatics_Specialization | 38581b471a54c41d780d9eeb26a7033eb57f3a01 | [
"MIT"
] | null | null | null | Bioinformatics I/Week II/ApproximatePatternMatching.py | egeulgen/Bioinformatics_Specialization | 38581b471a54c41d780d9eeb26a7033eb57f3a01 | [
"MIT"
] | null | null | null | def ApproximatePatternMatching(Text, Pattern, d):
k = len(Pattern)
L = len(Text)
start_idx = []
for i in range(L - k + 1):
if HammingDistance(Text[i:i+k], Pattern) <= d:
start_idx.append(i)
return start_idx
def HammingDistance(p, q):
mm = [p[i] != q[i] for i in range(len(p))]
return sum(mm)
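A quick self-contained sanity check of the two functions above, re-declared here under hypothetical snake_case names so the snippet runs on its own:

```python
def hamming_distance(p, q):
    # Number of positions where two equal-length strings differ
    return sum(a != b for a, b in zip(p, q))

def approximate_pattern_matching(text, pattern, d):
    # Start indices where pattern matches text with at most d mismatches
    k = len(pattern)
    return [i for i in range(len(text) - k + 1)
            if hamming_distance(text[i:i + k], pattern) <= d]

print(hamming_distance("GGGCC", "TGACC"))               # 2
print(approximate_pattern_matching("AAACAA", "AA", 0))  # [0, 1, 4]
print(approximate_pattern_matching("AAACAA", "AA", 1))  # [0, 1, 2, 3, 4]
```

Raising `d` relaxes the match: with `d=0` only exact occurrences of the pattern are reported, while larger `d` admits windows with up to that many mismatched positions.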
Text = 'TGCGAGCGGTGGGTAGGCTCTACTTACAGTCGGGAGCAGTCAAGTTTCGATCACTGCTGGCGGCTGCAGGGGCGCCCAGGCAAGCGTCTTGGTCCGGGCCCGCTCCAATGGCATAACGGGAATGAAACCACCTTCTTAGGATGGAGCGCCTAGAACCAAAACAAGAAAGGCGGATCTCAGTCCTAGGACCCCCCGGAAAATGGACACCCTCCAGTACCCATTCATACGGATTAGTTGGAAAGTTAAAGCTCCGTGTGACACCGCTAGCCGATACCAATAATTACTGTAAGCGTCACAAATCCATCCGTCAAATGAGGTATGTTTGAAGGGGCGCGTTGTTGTGATGGAGGCCAACGGAAGAGCGCGTTTACTAGTGATTCGCAAAGCGCTGCGCATCAGAGGCGCGCGTTGAATTATCTATACCGGCAGTGGTGACAGGAGAAATCCGGCAACCGAGCCCCTACCATCAAAAAGTTAGTATAATCGGTCTTACCATTCCCTCCTGTTGGGAACGTCTCGCAGCAAATTCTATAGACTGTTAAGATAGCGCCGAGCTGACCGTAATCGTCTATGGAACGGAATGGTATCAAGCGCCGTATCGGCAGCCGAGTCGTTTGTGGTAACTAGCCCATGGCTATTTATTTAAGGTGATTCCTATCAGTATGACCTATGTCTATAAACCAAGGGACGTCCCGTAAGGGCGCAAAACAGACCACGGTGGCCTATGAGACATGGGGAATACACAATGTATAATAATCAGACTCAATGTGGAATGGCTCATTAGCAAGGTTCTTCCCGGATCGTAATTAGGATGCAACGTCTCTATCTCGCTGGCGAATACAAATCCGTCAGATACACCTCCAGCGAACTTCTAAACGGCATTCAGTGGACCGCACAGATGGAGGTTGGGATGCTAACGTATGCCATATTCCTATTAGTTCGCAGGGAGTACATAGATTTCCAGACTGCGCGTGCTCTATCCAGTTTAGTAATCCTTATGATGGACCTTAGGTACTCTGTAGGAGAAACGAGGCCATTCCAGGTATTCTATCTTAGGGAGCTGCTCTGACAGCGGTGTGTCTTCATGCCCCAGGGTTAGAGCTATAACTCGGGGAACAAGGAAGGTCGTCGTGAGCCCCTAGTCTTTGGATGCGTTTAAGAGCCCTGAGGAATACTGTTAAGGCATCGTCTTAACGCTCACATCCGCGTCTATTAAGGTGGTTACCACGTTGTCCGAGAATCCATTCGGCGTCTTTATCTGGATACATCCCCGTTTACCCTTTAGTAGCGCGTGGCAATTGCCCTATAGCCAGGTATTTCCGAGCTTCCGCGGCGATTAAGTACGTCTAACAGTATAAAGTAACTACTATGCAGCTATGCCGGCTGTCCCCTCCTGGAGCCGGTCAGGTGACTGGGGACCGACCCGAAAGGTCTCATAGGAAAGCTACGAGGCTGTACCCTGCGCGATTAAATTGGCCCTAGCGAAGGCCGCGCTGGTAGAAAGACACGTGTGTCCTGGGTCATAGTGGGGGAACGCTTTTTCCCAAGTTCTAGCCTCGCGGCGGGCCCCACCGAACTCTAGATATACGAAAATTACATAGATGTTGAGCGAAAGCGATTGCGACATGCGTTGTTAGGAGGGGTGGGTATAATCTCGACCTTAGGGTGGCAAGTCAAGATCGTACACGGTGAGCAGATTAGCTGTCCAGGTTGTCATCTCAACTATCGGGTCTTTCTGTCTCCGCACGTTCTATACCATGTGTACATAGGTAGGATATTGCGGGAAAAGACATGCTACGAAGTACGGGGGAAACCCGTACGCTGAAGCCACACACACTAGTTTTGAGATTGCCTAGAAGTAGAAAACAGTAAAAGGCCCCAATTTAAGTGGGTTATTGTAGCCTCTCGGTAAAGCTGTTCAAGATAGACAAAGGTTATGGGTAAACTCACGGCACGGGGCGTCGCCGTACCGGTTGCCTGCCAGTATGTCTG
TAGCTAGCGGGCAAGAATAAGTAAGCCCAATACTCTATTTTATCCACCCGGATATCCGGCTTCTGCCAAGCGCTTAGTGGGAGGGTCTTACCCCGCAGGGCCCTCCTAGCTTCAAACTGTTGGGATACCGACTTGACTGTACCCCTGGTCTTGCGAGAGATAGAGCATACCGTACCGTGCGTTTTTGCCGAGGCCTCGACTATAGCAGCGCGTTAGCTTAGCCCCAGAGCGTATCGTCAGTGCAGTTGAACGGTGTTGTGACGTGGACTTCGAGGGAATTCATAATCTCTCCGTGGCTACGTTGTCGACCACGGGGACTGGGTCCCGTCTACATCCACCCTTCTTGGACTACCCGATGGGTTTCTTCTATAAAATGATACCGCGTCCTAGACAGTATAAAAGTCTCGAGCGTGGATGTACCTCTGGATGTGTCGTGAGTGCTGGCCCCGTACAGTACCAATCATTGAACTACTACGGCCAATGTTCCTCCATTGAGAGTGTATAACACATGGGAAACGTGGATGTCGGACTCTACTGCTCGGATCGGACATGCTCGTCCGGAACACAAACCGGTTCCAGGAGTACCGTCGCACAACTTGCTCTGGTTGAACCATACTAGGCTCCGCAACTTTCGGGACTTAGTGTACTTTCCGCTCTACCGCTTCGCTGACGACGATTGTTAAAATACAGAGTATTCGAAGTAATAGTTTAGTGATTACATGGGCTTCCCTAGACACCAAGTGGCACAGATGTGACACTGGGATACACAGACTCAGACCAACACGGCTGAGCACAAGCAAGGGCAATCCGGAGATAGCGGATCGCGAGCTCTCCTCCAGGCGCTACACCAGCTGCGCCACACCTACCGCTCCTTGTCGACCTGACCTCGCTTAATACCGGCTGTCTGAAGATGCTAAAGCACGTCACTGAGCTTGTGTCGACACAATACTGTGGCATAGCCGCTATACGTCCCCTTAGAGCATGCTAGCATCCTGGTCAACGCGCACGAGGATCTAAGCAAGTCGCCGCATAAGAGGCGCATCCAGCTTACAAATGTATCGTGTGACCTGGTTCCACCTCGGCTATACCTTTTCTATCTCAGATCGTCATGACCTCTCCGTTCCACCTATTGCTAGAGATTCACTCGTCGCGGGCGCGTCCGACTTCAGCGGGCTGGACCCTGTACAGACGATGCCTACGAGTTAGGGCGTTCAGATCTACCGACAAAAGTACCAACTCCCATGTACAGACCTTGAGACGGGCGGGAGCGTTCAGTTCCAGACGTTAATGATACGTCGATCCTCCCAGGCCAGGGCGCATGTACGAGATGTCCGCACGTGTTGTGAAAACGGCAACGGCATCGAACGATCTCCAGTCAAGCTTCGGGGAAATGCACTCGATAGATTACGCTACAGAGAAACGTGCCAAACTTGGCCCCTCTAACGTGAACCGATGGTTGTGCTCCAGCCAAGAACCTGCACGGATCTATGCAAACCACCCCGCTAGCTAACCACTTGTTAGGTCAAGGCGTGGTATCATAAGCTTGGTGGACACACTTTTATATCTAGAAGTTAAGGTCTTCTGGGCGGGGTAGGCTGGAGTTAAGGCTGGGCTGTACACCCTGTGCAATGGACGTTAGTCGGCACCTGGCTCCGCCATCGATCGCGTGACATAGCTAATTGGGAGGGCAGCGCTATCATAAAAATTAGCCGCACAAGACATGACTCTATCTTAAATTGTTGTTCATCTGGGGGACGGTTATTATCGGCTGGTAACGGAGTCACCATCATAGCTTGTCCCAAGCTTTCTGATGTGCATCCAGGACGAGCTAACCCGTAAGTCGCATCCTACTAGGTCGTCTGCATAACAACGCTCATGGTGTAATTGTAGCCGACCGGTACTTTTCTAACCGAGATTTACGAGATACTGCCGTTGACTAAGAACCCGTACAAAACTAAACGTTGTTTCTACCGGCAACGGTCCTAATGCTAATGACAAGG
CCAACCCAATCTGTGCGACCTATCGGCGAGCCTTCACGTGCCTCTAAGGACCTAAGTTCGCCTCTACGTATAACCCCAAATGCTGCTAAAAACATCAGGTGCGAGGAGGAGGAGCCGATCTTGATAGAAGGATGCTTGCGGGCGTTGCCCTTGTGGATGTGGGTGATTGCAGAGTCCATACTCAGCGATCGATAGTTTTCAATGCGTCGATGCAACCACCCGAAAGAATGCACTGCCTCGAATGTCCATCTTGATCTTATACCTCCGCATCGACGGAAGTCGCAATGTAAAGAAAATCTACAGTCAGATTTTCAACCTGACGCCGCGCGTGGTCAGCTGGAAGGGGGGGGCATGTCCTCGGAATGCGTAGACCGGGTTGGCTAGCTTCTACTCCGTTGCAGGAGGGAGCTTCGTCAGCTAATGCCCGCCTTGCGGAAAACCATAAGACAGGTATGCGCCTTTGGATTAACGCTAATCCAAGTTCTTTGTATCCCAATTTTTTGCCGAGCGACTAGCGTAGATGTTGCTAATAGTTATGAACAACACACCACGCGCAGATCTTCTGGGCAAAGGCGTGTGACATACGTCTGCAATGATGGAAAGAGGGCTTTATCGCCGTCTGTCAAGCAAACACACGATGCGGGAGGAGAACTGGACACGGGGGTGAAACTATAGACCTTATAAAGTGTTTGTTTTGGCCTTACTTCTGTACACGTTGGACACGCCGGCACACGGTTGCCCTATCACGGGATAGGTGCACTTAGGTCTGGTGTCTGTGGCTCAGAATGTTCTTATATGTTGACAGTGGATCCTTCGTGCGACATGACGGTTTCGCTTCAGTGTCCTCACAGAGTACGTGCTACGAATCATACCTAGACTCCAGGGGCAGATGTCCGAGCATAATTCCTAAAAGTACACTGTCTTTGCGTTGGGGCTCCTTGGAGCAGGAGAACGATACGAGAAGCGGGGGAGAGTACGGCCTCGTCGCATGTCACGTGTTCGATGTCTGGTGAACCGGCCCGGAAACCTACCGAGTGCATAGTATTCTGCCCAATTAAATGAGTGGCGGATGATCATCTGAGACCATTACAATCCGTTTGCGTGTGCCTGCTTACAACAACTTAAAGTAGGTGGCGTAATGATCCACCTTTGCGACTACGCCACGAGTCGGGAGTTGGCTGTCCGCAATCGCGTCGCTCGATTATCTGTGTCTTGGAGAATCTAATTTATAGCGGCGGGCCTGTGTTGGTTTTAGTATCTGGGTTAGAACAAACAAATGGAATCGTAACAAGACCGTCAATCTAAGATGAGCGCCTTGGTTTGCCGAGGATTTACGTTTGTCTCTTGAACATTAGACTGTTCCTAAGGGCCGGAATTTTCTTTGCTTAATCCACTTACGCAGTCAGCACTTTACCTATTATCAGGCTCTCACTGACACGGTGTAGAAATCAACGAAACGACGTAGTGGTAACATAGTCCAGGACTCCTTCCGGCAATTCATACGCTATATCGCGCTCCTCGCTACAGGTTCGTGTGGGGGTCGCGTGTCGTGGGTCGCTTTAGCGAGTGGCTACGGTGTCGTGCGCAGGTGCTACCGATAGTTTTCACCATCATAATCGCGCTTAAGGCATCGTTCCTTGGTAGAGCCTCCTACAGGATAATCGCAGGAGTTCCAATTTACTATGATGGCTAATTGTTTTATATTCTCTAACCGAGTGCAAACCTAAGGCCTCGGACCTTCGAATGCAAATGCATTGCGATTTTGAACGGTACGATGTTTTCGGTCAGGAGATCCTGACCCGGTACCGGCTTGATGAGCCTCAACGTCCGCACTGGGGATGGTCTAGGTGCCTTATTGGGATGGACGTAAGAAAGCTGTATCGGCACCTATCTTCGATGCCTCTGTAGTGCCGAAGGTTAATCGGTTAATATAGACCCATGCCCAATAAAGAGAACAAACTTATGATTTGACTCCCGATAAAGGAAGAGCCAGATGCGGT
AAAGAGTCAGTGTCCTAGACTTTGGTAAGGCGTGATTCAGTCATGATAGCTATATAAAGATTCACCAGCCAGAGACCCACGGTAGTAACACCGGTCAAAAGGATCCGCTGGGGGACTGAATCTTTGGCTTAAGGATCCCGCTACTAAACGTTGTGTTTGAACCCCGTGTTTACTGAAATGCGGCCCCGAGGATGTGACATAAGACATAACCATATATGCTTGCCCTATCTAAATTCGTTTGTGCCTGGCGCTATAAAGTCGGCATTAGAATCACACGGCAATAAGTATTGAATAACGTCGGCTTTTCCTCTCTAGAGCGACGGGGGTACTGGAGTCGCTGGCTTATTCTCTCCCCGATAAATCAGCTGGCCCACGGTTCTCAACTAGGGACCGTCGAGGCCGGGTAATATCTTAATGCGATGCAACGCCCAGAAGATGAGCTCGTGTGCCCTAATGATTGTGAGCACCTCTCACCCACCAATATAATCTTCGCCTACCCATCCGTCAGCTGCTATCATGTGGGGCGACACATCAACTATGCTCCTGATCTACGTTTGAACTAGAACTGATTATAGGGGCAGACGCTATGAGATGCACAACCTATGGAGACCCGACTCAAGGCAGATCTGGCAAACGGTAGCTGTCGGCCCGAGCGTCGTAGTGCTAGGCGCGCCGTAAATGATGGGTGCACATTAAGTCCGCAAAAATTCTAACAATGATCAAAGTACAATTATGAGTGAGCCTTAAGTGGCCTAAGAGCGTCTTGCTTTTCTAAGTCTCCCGTGACGTAGCCGGAATCCCGTGACATCACGTCGGAATAGTCAGCAGATAGTAACCCTATGCCAAACCTGGAGAAACGACTAGTGCATCAATAAGCATCGGTCGTATTCCGAAGCAAGGGCGGCTTCTAACGAATCTACCTAAGCAACCCCAACAACCGCTAGGTGGAAGGTAGACCTCCACGGCAATGTGTGGGGGGGCAGGCGTGCTTGAATTAAGACTGGCCAACTACTACATCCGAGTTAGTACCCTCATCTCGACGACGCAGCTACACCCTCCGTGTGTCCACCTTTGATTATCGACTGATAAGGCTTATCACACCGACAACGCCTCATCTGTCCTGATGTATGCCTAATCTCGGCGCTACGGTAGCATAGACCCGGAACCGGACCTGATAGTGTATTCTTTGTTGTGCTCGCAACTCAATGGCAGGTACTTTGATTCACCCGAAGGATAGTCTTACTCGCCGTCGGAGCTCTTGACAAGCCGGGTCTGCTTTTGCGGAGTAGGTCGAAGCGCGCATATGGTATCACATCTAGTGAAGCATCGCAGTACCCTCGCCCTTCGATCATTTAATATAAGAGTGGGAACGAGGCAAAGAAGAGTGCCATCTCCACGTTCGAGTCCAAAGGCGCCAGGGTACACTGAACAGGGGACTCCCCGGCACAAGATTGCAACAGTGAGTAGCTTTACCCGGATATGTCGTACAAAAACCGCCTGCCCGAAGTCGGGGTACCCGAAGATCGCCCAGAAACAGGATCAGTTGCCCAGGCAGTATTCTAGGGAGCTCCGGGTATCGTCAATGAATAGAGTTGTGTGAGGAACCGGAAATTTGCTAAGGCATCCTCCAATTTGGTATATGTATTGTGAGTACACCAGCAACTGCAGGCAAGACAGACCGTACACAAGGACATCGTGCACCTCCGGAATCGCCAGTAGATGGGCGCACGGTGGAGTAGAGCTTTCACTTGGCCTTGTTGCAGTCACTTGAATCGCTTTTATTCTTGGGTTAGTCCAGCGTCAAGGACGGTTTTAAAGACTAAGGTATGTGATGATGATAGGTAATCTCTATGATTGGAAGCGCCGTCTACCACTTGAAACAACGGTGGACTTGGTCTCAAGCACGTACCTATTGGAATTAATGTGATACAAGACATTAAAACAGTGCGGGCCTTTCAATGGATGGCGCAATTGGAACCGTTTACCGAGCTAACTAAATT
TTGAGACGCCTGATTTCCCACAATAGGTCTCGCTGTAGTCGAGGATAACCACTACCGGCAATACGCAGTTTTGCTATGGTCAAATCTGAAATGGTTGCCGACGTTATCAGGCTCTGGTCCCATTCACTTAGTTGGATTGTGACTCTCCCCCTGCCCAGCTTGCGTATACATGAGTTACCGCTTACCATTTCTGGACGGACGTTCACTTTGTAACGTGGCCGTGAAGTGCCCTATCGTCAAAGTAGTTGCAATGAGCGCGTCAATTAGCCGGTTCGTTATTATAAGGACGCGGCGGCATCATAATACTTATCCGTGGCACCGACGCCGCGCCTTGACTACATCCTTCTGGGATTATGGGGTGTTACTAGTAAGTTCTTAACCGCCACGCTCTGACGACGGCGCTACATGCAAGGTCCGTGCACTTGCCATACGACACCACAGTCCAGTAGTCACGTGACTGTATTCCAGCAGCTTAGTCGCAGGAGGTTTTGTTATGGTACGCAGACCACACGAATGTCATATTCAGGGGTTTCCACCGGCTTTAACTGTCCCCGATCCCACGCTAGAAATCGCTGCCGAACATCGATTCGGTAGTACCTATGGTAATGGCCGGATAGACGAGAGCATCCCAACACGGCACTGGCTTACACGCAAGTGGTACATCGGGAAGTCCACGGGGAGGAAGCTTGAAAGTTCTTGCCCAAGGGGGTCCTTGAAGAAGGCGTACGTAGTCCTACGTCGTCTCTTGGGGTGTAGGAGGAAAGGGCTATTAGCGATTGAAATTCAGTCTCGGGCAAAGCGTCGTTTCTTGCAGCGTGTGTCATCCGGAGGGGGCGTACCTTCGCTTACGCTTGGTCCGCGTGTATACGCCTCCGAAATGATCCTTCTAGTGTATGTTGCGATCGGGGGGTGCACCAATTCGGTGTAGGCATCTGAATAGGGTGAAAGGTAACAGAGCATAAAGCCGTAATCCGCCCTGGCCGGAGGCATTCACCGGACGGGGGCGGAGTTCCCACATGCTACAGCTAACTAACCGGTAAACCCTATACCATGAGCCGGAGGACGCTGTACCACGCACTAAGGATCTGGCCGGTCCCGCTGCTTGCTAACGGCACCCCTCCACCATAGCCTCGTACACCATGTATTTTCAATCCAGTCGCGTCGAGACAACCTCCCAAAACGCCGCCCGTGGGAAGGCTCTATCTTTGGTGAGCTTGTATGATGTATCAGGGAACGCAGAAACAAAGGTGAGAAACTTAAGGGGAGAATCCCCAATCAGTCTGGTTCGATGCCCAAGGATGGCGAAGGGCTGGTCTATACGAGCAATGTTATGCATACTACGTTTCAGACTTGATTATCGTCCTAGTAACAAGCCACGTGCATCCATAAAACAAGCGCCGTGGAGGGCTTCAATCCTTCTCGTACAAACTGAGGGTGCGACAGGATGTGACTTGGGTGCTATCGGGCTATCTTGCTATTTGATCTTAGAAACAAGACTCACCGTGAGAAGTGATTGTACGCAAGGCCAGAGATCCATCATTTACATGTCCACGCGAACTTCAGCGCGTACACAGTGTGGCTGCTCCTTCAGCACGTTATACGAGTAGGAGCGGTGTCCTGCTTACTCTGTGTTCCAAGACGGGCAGCTTAGTACGAAAGAGATCAATGCGAATAAAGCCCTTAGAATAGAGCCCCCGTACAAGCTCTGCCGCCCAAGCGTGTACAATTGGGACTTTATGTTCTCCTGCGAAAGGTGCGTCACGTAAGCGAGTTACATTTTCGTAAAACTCTCTCAGTGGCGGTGTTGACCCTTTATTGGATATAAGGTGCATTGCGACGGTGAACGTTACAAACGCCATTGTCTACGTAAAGGGCATGATTGTGGGCTACTAAGGGCAAATTTCTGATCACCCCTCTTACCCAGTAAACCAATAGCAAGGTAGAACAGCACACATAGAAAGCTTACTACTACGAGCCGGAAACATAGGATATCCAATGTT
CCTAATCTCTGAGCGCCAGAGGCGGTCGCCCACAGCGGAATACGCGGAAGTAAAAGATAGACCCGACCGTCGGAAACGGCAAAACGAAGAGGTGAGGGAGTACTATTCCTAGCTTTTAAGTGACCATGACGCCCCTGGTGGTAACAACCCGAATAGTAGTATCACCCATCGGAAAGCCAGTTACCTTGCAAAATTAAAGCGGACTTCCTGAGCAACATGAAGGTATAATGACCGGGGTTATACTATCCACAAGGGAGGGAAGTTACTCATTGTTGATTTTGATATCAAGAGGTTGAGAAATTCGAGTCAGCCATTTGTGGACCTTTAAACCACCCCCGAGACTGGTATAAATGTGGAAGGCTGCCACTCAGCTTCTCATCAACGCTCGCTTGCGCGTTAGTACGTGGCCTCCTGAAGCCGACCCTCGTAAATGACGTGTGCTAGCCGACTTTGCATATAAGTCATACATTGGGGAAGTTGGTCTCTCAGCTCGTTTCCAAACCGGCGACCTGATCATGCCCTTACAATGAAATGATCTGTAAAGATGACTGTGAGCCAATCGCCTTCCTGGCTAAAGCTATTAACCCTAAAGTGATCTCGCGCTAAGCAGGAACGCATGGTACTCGCTTATAACGTAAACCGAATAGGCATTATTGCGCCTGTACGTTCCACCTTCGCAGCGTTAAGTGGGTGTCTTTAACAACTGCTTAATCCTTCAGGAGTTCAACCAGCGGGGTCTGGAGGGAACTGCTCACACTTCGCACTCGACCCTAGGAACTAGTCATTAACAGACTATCCCAAAGGAAGGCCACTCCATAGAACTTTCTAGTTGATGATCTGACTAAAACAAGTCCACCCTCGATTGCAACACGTGTAAAGCGGACTAGCCATCTTCAAAGGACTTGGGTCGCTTTGCCACAATTACTCATGAGGATATATGCGATCTCATTTTAGTTTTTAAGCGTGGCGGCGCCAGGAGCATCTCTGGATCATATACAAACGCTAAGTACCAGCACATCTCTCCCCTTGTCCAAGGGTGTCATTGTCTCCTCTGTGCATAGTGGGTGAATATACCACTATAATCCCGCTTTGAGGCCAAGGAGTGTTCGTCTCGACATCCCCTTAACATTTTATGGACCCACAGTAACCTGGAGTAGCTCTCACCTTGCATTATGAATGCAACTTATGTTACCTGAGTGCCTCCGCCTCGGTGCCCTCTAGAGCTGGTTAAGTATTTTTTAGGGTCAAAACCTGCCTCCGTTCGCTACCCAACGGCTGCTGAAAGCCTTTTGGGCCTACCGGATGCATTAAGTATCAGCAAGTACAGCTGGTAACCGCACCAGCATTACGTACGTTCGTGATAAAATCTGAATTTCTACTCTACGCCAGCGCGGGAAACAAATAGTCTCGTCGTGATATTTCGAACTCCATACGGTAATCATATTCCGGTTCGGCAAGTGCTGCATGGACCTACCTGTGTGTAAGCACTACGGGGCCCCTACCACGTTCTAAACAAGCTCGAAGGCTCTTAGTTCGATTTCTTTCGCAGCAGTGCATGGGTCAGGACGCCTCGATAGTGGTTTTTAGATTTTTTAAGCCCAGTAGCACAAGCACATCGCCGTCGACGATAGCCCCAGACAATGACAGCATAACACAGGGCAGCGTATAAGCGAAAAGAGTTGCTTGTTGAACACGGTGAACCGATTTTGGACCGGTTACCGATATGTTCCAAGCAGAGATCGTCATTTTATCCACATGCTGCACAAGTCGCCCAGGGTACTCATGTTGACTGACAGGTCGCAACACGATGGCCTATCGGGTTTGTAACAATATCCCCTGAAGGCATTCCAAGCCCGAAGGTTGGAGTTAGCTTATTATAGCAATGTGGGAACGGCCAATTCTGCCGACATCAATAGGCGTCTCGGACCTAGCGACGCTGGCGTTGTAAAACTCATCCACAAGTGCTTCGATCGCGATTCTAAGCAGGTAGGACGTACGCT
GGACCCCTGTGTCTGCTACTCTTGATCAAACTTGGTGAGTGTGGGTAAAAAGGCGTTTTCGGGAGCCCTCAGCTTGACCTTGAGGAGTTTACCATATACATAAACTCCGGGGGAATCCTAACATAGCCACTGACCAGGCCTTACTTGATTGCAACGGGTTAATGAATAGATTGTTTCTGGAGTAGCTAAGGGACCCCTCGTAGAGTACTCTGCGTCTCTGTAACCGCATACGTGGAAGGGCTCAATGAAACGTCACAACCAGATGCCCCAGGAGCGCTTTTACGTATAGAAAAATATAGGGTGGAGAATAACCGGGTAATTACCATTGTAGTTCGTATTTACCATGGAATGCTAATCTCTCAAACATGCCGTGGTGCCGCCGGGCCGCATTTTCGCCATGCACTCATAGCTAATCAGGGACGCCTAAAGTGCTCGAGTATACCTAGACCAGCCTCAAAGAGGATCTAGTGATGGCACTCGTACCGGGATCTAGTCTATTTTCCCCTCACAGAGCCATTGCAGTCCGTGTGGGGCTCGGGTATCTAGTAGAAGACCTCGTCTGGTATTCGCGGTCAAATCTCTTTCACATCGCTCGCATAAGGAACCTCATACACCCCAACAAATCACGCGAGGTATTTCTTCGCCAATTCTAAGGGAGGCGGAAGATTATTTCACGGAATTTCATTTAACCATGGAGATGATAACAGCGCGGTATACGCGATCGTCATAACTCTGCCATAAAGCCATTGTGCACTTTCAGAGATTTGCTGCGAGGCAGCATATCGGAGAAGGAGAATTGAACTTGTTCTAGGACTATAGGCTCTCCCATATCTATAAGCACTGGGAGCTCCAGAAGGCCACCGAACCAACAACTTTAGCTGTCGCTGCGGTAACTCTTGAGGTTAGGCGCGCGAACACGACAGGGCGCTCTGGCTCGTCACGGTTTTGGGGGTACCGGCCGTTGAATAGAAATGTAGCTTTAGCAACCTCATAGGCTGCGGTAAGGTCTCAGATCTAGTGAGCGTATGACTGGCTTAAGGCTGTGGACAAGAGTGCAAAACACTTTAATACTGTAAGTAATTAGCCCGGCCGTCGACTATAGCTACAGACAGTGTACACGATGATTACAAAATTGTTATTTGGTACGCACTTCTGTGTGTCGCGATAATAGCAAAGACGTCGGATAATACTCATCTTAACAATAGCAAATCAGACAATCGTTAGGCCTGCGTTTGTTGTATCATACTCAGTCGACTCCGCCCTTACAACGTTGGGTCTTTAATTATCTGGGTCGGACTGGCGAAGGGGAAGGAATCCGGAGGGGGGATGCTCGCACAGTGTGAGGTCTGCGGAAATTCGATAGATTTAGCCTAACTAGAAGGCCGTATAACATAAAAACACACTCTCTCTGCGACGAACAAGGGGCTTCAAATGGCTCTGAGGCGCTTGGCGGCATTTGTAGTCCTTTATGATCAACTGTGATACTGCATTTTGCATATTGAAAGCCCGCGGTTTTGAATTGCCGGGACGCTTTTTTACCGTTAAGTATGGGACAGTCGCTGCTTTACATGACCGCAGTATTTATCCCAGTTAATCATTCTCTGGGTTGTTGTGCTTGCCTTCTACCCGATCCTCGTGGCACGCTCCGCGAAGCAACCTCCTACTCCAGCATTGATCAACGTTCCATGGATTCTGATGTGAGTCCAGGTGGGGGCATGTCGTACAATCTGCTAACAACCGAGGGACAGCTGGTATCCTCTACGGTACACCTAGCTATCTAGAACAGATATTTGAAACCTCAATCTGGCAATGGTTTCACCCTCAATAATGTCTTCACAGCAATTTTAAAAGGACTTTTTGGGGAGTGGCGCGGATAGGCCTCCTTCACCCCCCAATAATAGTGAACATGCTGTTCGGGGAAGCTAACCAACGATTTCACTAGTGTCTTGGCCCGTCTCTAAGGATTGTGGGGTTTTATTGGGACCACACGGGTTAACCGT
ACCTCGTTTACGGGCATTATGAACCCGTGAGGGCATTCCCGGCTTATTTCTTTATATGTAGTCGGGTATCAGGGGTATGCTCGATTGTTCCAGCTGTAACGGTACGCACCCTTGTGCGATCGCTTGACCCCGATCCGTTAGACACGAAGGCACCACTTAAATTCCTGCTGCCGAAGAGCATAAAGGCCAGTCATATACCCTTATTACTGCCCCGCCCCACGACTTTTCGGCTTAGGAACTGCAACTCGATAGCGTGGCGACAAAGTCAACCCACCCTCTGAACTTTGTGCTTGTTGCGGGTTGTGGGCATCGCGACCCTAAGCTAATGCGAGGCTTCAACCTCAACGTGCGGACGTCACCTGATTATCTTCACTGCACTCACTATCCAGAGACCCGAAGAGGAGATCAGCATCTACGTTTGCATCTAGCGCGTTACGCGAGTTCGAAAGGAAATTAGATGGTGTGGTGAGGGGGTTTACTGACTCCACCTCGCCGAAAGTACATCTCTTAACCGTGGTAGTTATACGTCTCTGTGGTGTAGTCGTAGGACAGTTGTACTAATTCCAGAAGGTTGGCCGGCATTCGTCCGCCCGCGCTAAGGGGATCGTCCAACCTGAAGGGTTCCGTACGGGATCCGTCGGTCATGCAGTGGCTTTTAGTGTGAGTCCTCTTCCACGTGAACCCGATAAGAGGATGTCTCGCGCTCCAATGCGGTAGGCAACAAAGAACTGTCTCTGCTTCCCCGGAGCGCAATGATCTGTAGTAACTAGCCCTGGGCAAGCACCCTACGTTCGTATTGGCCTAGTGTAGCACGCACCCCGCTGTCGTAAGATATTAAGGAAGATGCTCCTTTTTATATCGCTTTGAGCCGTAGAAAGCTAGTCGTCTTGCCCCACATTAAAGCCTCAAGCTGGACGATTCCGAGTCCTAATTCCCTACCTTTATACTTCGGTTCAGTGCAATGCATAGTTAACCACTTAGCCCACAATGCGGAAGCTTAAGATTCGTCCCCCCAAGTAGAATCTAGAAGCTGTACCCGGGCGATTCAATGGTGAGCACTTGAGTATGTCAGGGATTTCTTTGTATAGCGCCTACAATGCTCTAAATGAATTTATTGGTAGCATACAGCAACATGCGAAGTACGATATAGTTCTCGTAGTACGTTATGGGGGGGCCGCTAGGACTCACCCAAACGATTGCATCAATCTTCTACCGATATGTGGGGTGGCGACTAGAGCGAGGTACGCCACGCGAGACGCGTAGTCTTGTAAACCTCACGCCGCGGTAGGTACGGTCCGGGATGGGCTGATACTGAAGCGAACTGTGGTCTCGTCTCCACCCAGACTAGAGGCATTACCGGGGCATAGCCAGAGCATTCGTATATAGCGATTGACCACTGGCTAAGCGCGTAATTGTAGACGGCGGTTAGGACGTGCAAGTACGACCTACTGTGTATCGGGGTGTAACGATATCGAACCGCTGAATACTTTTCGTATCTGCCTATTCATGCGTGTCCGCTCGCTATGCAGCATTTCTGGTCTGCTTGGACCTAGACGGAACAGATCCGAGTACGCAGTGACATTTGGCGACGTCCAAGGAGGCCCTAGACAGATAGCATCAGTATCAGTGCGAGCTCTCGTATGGATACACCTCGAATACGGATAGGGGTCCCAACACCTACCGAACATAAAGCGGAGACGAGCACACTAAATCGTTACACGGGGGCGCTTATATCTGTATAGGTCTAACCAAGGACGTCACCATATTGTAACACATAAACGCTGGAGGTCTCAGGCCTCGGAGGACAGGACTATAACCCAATCTTGATCTGTGTGTCTAGCGTGGATCTCGCAAGAGACCACCTGTTGCCATCCGTTTAGTGAAACTACGAAGAACGACCTTTCTGTGATTCCTCTCACGCAGTCTAGTAGGAAAAGCTAAGGGGTGAGGGGGACCTATCTCGTACTGCGCGCGGAACAACAGGTTCACATTTATAGCGT
CTGCTAGGGCGCCCCACCGTCGTGGGGTGCCGACAGCCGCATCTATTCACTCTAGAGCCTGGGCTTAGAAACAAGTCAGAGGAGCCTCTTTCTAATACAACATAAATTGGGCCAACTTATCTGGCGCGCCCTCGACCGCAGCTGACAAAATGTAAAACGGGGCTAGAATCCGGAGCCACGCCTTGCGGTTGTAGGCACAGATATTACGTTGTGAACGATACGGGCTGGCGATAGTATACGTCTTACGCATTCCCGCGCTCGCCTGGGCGTTTAGGCTATATTGAATCTCGATCGATAGGGGCGGCAACCGAGGCCAGTAGGCGCGTAACTCACGCGATTTTACTTGATTATGTGCCGTATTAAGTAAACGTGTTCCGTGCGGGTAACGAATCACCACGTACTCCCTTGCGGTTTCGGAGCACGATTAACTTAACGTTGTACGTAAACGACCCTATTGATCTGTTCACTGTACGAGGTCTTATGCACGTACCCTAACTAAGAGAGAATGATGCGGTTAGGGCTTAAATTGGGTGAGAGAAGCCAACGGCAATTGCCCAGCCCCCCACCGATGGTTCTGAGGCAGCAAGGGCATCGACCGTACACCTATTCCTCTCTTAACGGGGTTACGCTCATTTCTGCAGCGCACACCGAAACTGAGCAGAGGTACACATCATAAGAATACATTGAGCGATTGACTTCAAATGACACCTTTCCGAGGTACCTCAACTTTCTCTCGTGCAGCACAGGCTGGTTGAGGTCGCGTTATTGTACTTTGACATTCTGTAAAAGAATGTCTAGATCGTAGCTGTAACGGACTTTGGGCCCTATTGTCATAAGCTGCGGAGATCCTGCCATGTCGAGATTCCATTTACCTTTTGCGTCCGTTCCACAAACGCCTGGTTGAATGAATTAGGAACAATATCGCGGCTCCCCCTAAAATTGAGTGCGATCTTTGTACCTTATGCAGTCATAACCACGCAATAATCAAAGAGCTGTAGCATTGGGCTACATGAGTGGGCTAAAATGTAGATTCAGTACACGATGTCGGTGCCTCCGAAACCCGGGTAAAATAGTGTCTGCTAGCTGAGAACACTCCTACGATGCTTATAACTCAGAGAACGCTCAAAGGGCAACTTGGTTTTGAAATAAAGGCCGTTGAAAATTTTGAACATTACATCGGCCTGCTGTCATTTCGTGTTTGACATACTCATCGGTCAATGTCACGCAATCGCGGTTTCCTTACCTTTGGATAAGGTGTTATGCTAGGTGCCCCATGATGTGTGAGTTTGACTGCCGAGCACGTAAGGACGAGTTAGTTCGCGTGTCTGGCGTCAGTATACTTAGCGAGCGGTAGTCTAGGCCCAGCAACGCTTGTTTCTGCGTGATGGCGTATTCAGCGGAGCTGGTGAGCCGGTAGAAGCATTAAAAGAACCTCCATACTGTAGAACGTAAAATCGGCACTAGTCAGAGGTACTAATATTAATACCATGCCTTATAAGGCGGACATCGTGAACAACTCAGACGGCGGACGCTAGAACGAGTGGTCAGCTCATTGTTGCCGTTGTGGATTTCAGAGAGGATATCGTATCGGGGGCGCACTTAAGTATGACTGGTGTCTCGAAAGGACGCAAGCATTGATAACTCCCATTGACTATAAGGCACACTGGAATTCATACAGCAGATTAGCCCAGCCGGCACAGTCGCTAGCAAAGCATGAGGTCGACCTAGGAAGAAATCTCGGGCACCTTTAACTGTCATTGGTGAGTGCCTTTCAGTATGCGGCCCTGAGGATAGATCGTTTGTAATTGGAGATCGGATAATTAACTGTACTAGCAAGATTTAGTGGGTCGGAAATTCCTACTCCCCTGTCTTGCACATTCGCGTTTCGGGCCTAGCATTTCCGCACGACTATTGTGGTGCCCACCGACCCTCCATAGTGCGGTTTAAAGCGGTTCTTAAGAATGTCAGGCCCTATTAATTGCTTAGGATGAAACACCGAC
GGTCAGGACCTCATCTTCTTGGCGGGACGTCCTTAATGCCGTATCACATCGCACATCCTATACGTCTAAGAATCTCAGGCCTTGATACGTACTGCCCCCGTTTCTATGACCGAGGAATCGTACTGTTGCTCATCTAGCATATGCGTAAAGTGTTCACGGCCTTGCTAACGTATTCGTCTTGACCGGTGCACAATTTGATGTACATGATAAAGGGGTAATGACGCGTGGTTGAATCTTTATAGTCCGAACTGAAATGCCCCTACAGGCCCAGCATGCCGCTGTCTAGGACCTCACAAGTAGGCGCCTACTAGTTAGGAGTGGCGTAACGGGACATATCGGCGCGTAGGGGACAAGTTTAAGCGTGTTTTACTATGCTTTGAGTCCAGTAAACAGATGGCCGCACAGGGCCGTGGTATGGTACGGCAATGATCTTGCGTTGCCTGCACAGATATGTGGACATGTTACATCGGGGCGGACGTTTCGTTGGGATTATTGATACGGTCGTTCGTTCCGGCCTCGAGCTCATGTGCAAGACTTGCACCGATTATCACCATCCACGTGATGCTTCGACCTGTGAAGCCAGCTATGCAACATCAGATGCTCGATTCAAACACAAAAACTACAGTATGACCTGTGTTGAAGGCTATTTCTTCTAATATAACCAGACACTGCATTCTCTCGGCGGCTATTATTTCTTGCGATCTAGAACTACGCCGGGCACAGGTCGTATGTAGAGAGTACACTCCCGCTTTGGATACGGACGACTAGCCCTGATGCGATCCTTCGCCACTGCTTCCGTGTGGTCGTCAAACCAGGATCCAGGGTCCGCATTAGACACGATCGGCTTGATACCGTCACTCACGTAGTAACACGCGCGTATTATTCAATACGACGAGAACCGGGACTCTGTAGAGAGCTAGTTGATCCGCGTCGGACGGAAGAGTCCAGGATCCTCCGATTGTGCTTTGGAAACCCACTCCTGATTAAGGCCTTGGCCTATGC'
Pattern = 'CTTGGCCTATGC'
d = 5
res = ApproximatePatternMatching(Text, Pattern, d)
' '.join(map(str, res))
count = len(ApproximatePatternMatching('AACAAGCTGATAAACATTTAAAGAG', 'AAAAA', 2))
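ApproximatePatternMatching and HammingDistance are called above but defined elsewhere in the notebook; a minimal sketch consistent with those calls (names and signatures are assumed from the usage, not taken from this chunk):

```python
def HammingDistance(p, q):
    # Number of positions at which the equal-length strings p and q differ.
    return sum(1 for a, b in zip(p, q) if a != b)

def ApproximatePatternMatching(Text, Pattern, d):
    # All start positions where Pattern occurs in Text with at most d mismatches.
    positions = []
    k = len(Pattern)
    for i in range(len(Text) - k + 1):
        if HammingDistance(Text[i:i+k], Pattern) <= d:
            positions.append(i)
    return positions
```

On the small example above, `ApproximatePatternMatching('AACAAGCTGATAAACATTTAAAGAG', 'AAAAA', 2)` returns 11 positions, the first two being 0 and 1.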
def ApproximatePatternCount(Text, Pattern, d):
    count = 0  # initialize count variable
    k = len(Pattern)
    L = len(Text)
    for i in range(L - k + 1):
        if HammingDistance(Text[i:i+k], Pattern) <= d:
            count += 1
    return count
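Restated in a self-contained form (with an assumed HammingDistance helper) so the sample case used above can be checked directly:

```python
def HammingDistance(p, q):
    # Mismatch count between two equal-length strings.
    return sum(1 for a, b in zip(p, q) if a != b)

def ApproximatePatternCount(Text, Pattern, d):
    # Count occurrences of Pattern in Text with at most d mismatches.
    count = 0
    k = len(Pattern)
    for i in range(len(Text) - k + 1):
        if HammingDistance(Text[i:i+k], Pattern) <= d:
            count += 1
    return count

# Same sample as the len(ApproximatePatternMatching(...)) call above:
sample_count = ApproximatePatternCount('AACAAGCTGATAAACATTTAAAGAG', 'AAAAA', 2)  # 11
```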
with open('dataset_9_6.txt', 'r') as file:
    for i, line in enumerate(file):
        temp = line.rstrip()
        if i == 0:
            Pattern = temp
        elif i == 1:
            Text = temp
        else:
            d = int(temp)
# ==== epymetheus/__init__.py | shishaboy/epymetheus | BSD-3-Clause | 1bceb82e1e7119923bb0324dca3a53a2247e8109 ====
# flake8: noqa
from epymetheus.history import History
from epymetheus.strategy import Strategy
from epymetheus.strategy import TradeStrategy
from epymetheus.trade import Trade
from epymetheus.universe import Universe
from epymetheus.wealth import Wealth
from . import utils
# ==== srcs/python/kungfu/torch/optimizers/__init__.py | Pandinosaurus/KungFu | Apache-2.0 | 1bcf26abc265e38d664a03d4f570802ab15ea137 ====
from .sync_sgd import SynchronousSGDOptimizer
# ==== src/sentry/search/snuba/__init__.py | AlexWayfer/sentry | BSD-3-Clause | 1bd374084e19c4b98447f622d6387d179f68be72 ====
from __future__ import absolute_import, print_function
from .backend import * # NOQA
# ==== tests/app/factories/feature_flag.py | department-of-veterans-affairs/notification-api | MIT | 1bd3a996200246b2959bcab7aec57742ddadf21f ====
import os
from app.feature_flags import FeatureFlag
def mock_feature_flag(mocker, feature_flag: FeatureFlag, enabled: str) -> None:
    mocker.patch.dict(os.environ, {feature_flag.value: enabled})
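mock_feature_flag relies on pytest-mock's `mocker` fixture; the same environment patching can be sketched with the standard library alone (the flag name here is a made-up example, not one from the app):

```python
import os
from unittest import mock

# Patch is active only inside the context manager, then automatically undone.
with mock.patch.dict(os.environ, {'EXAMPLE_FEATURE_ENABLED': 'True'}):
    inside = os.environ['EXAMPLE_FEATURE_ENABLED']  # patched value visible here

outside = os.environ.get('EXAMPLE_FEATURE_ENABLED')  # restored (unset) afterwards
```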
# ==== src/euler_python_package/euler_python/medium/p435.py | wilsonify/euler | MIT | 940fc8d0d015d7caf6b62ce0a736d8a4e75c8cb2 ====
def problem435():
    pass
# ==== server/server/contest/__init__.py | aweijx/MMW_YNU | Apache-2.0 | 943542ca488ab3e97674cc0337d53bd3c572eeea ====
from .contest import *
# ==== apps/people/validators/people.py | bergran/people | MIT | 9460f3f47dcafb904d0577fc08db71062eff898d ====
# -*- coding: utf-8 -*-
from fastapi import HTTPException
from sqlalchemy.sql.functions import count
from starlette import status
from apps.people.models import People
def validate_place_kings(obj, people, session):
    count_people = session.query(count(People.id)).filter(
        People.is_king.is_(True),
        People.is_alive.is_(True),
        People.place_id == people.place_id
    ).scalar()

    if people.is_king and people.is_alive and count_people != 0:
        detail = 'There cannot be 2 kings alive in the same place'
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=detail)


def validate_place_kings_updated(obj, people, session):
    count_people = session.query(count(People.id)).filter(
        People.is_king.is_(True),
        People.is_alive.is_(True),
        People.place_id == people.place_id,
        People.id != obj.id
    ).scalar()

    if people.is_king and people.is_alive and count_people != 0:
        detail = 'There cannot be 2 kings alive in the same place'
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=detail)


def validate_first_name(obj, people, session):
    count_people = session.query(count(People.id)).filter(
        People.first_name == people.first_name
    ).scalar()

    if count_people > 0:
        detail = 'There cannot be 2 people with the same first name'
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=detail)


def validate_people(obj, people, session):
    validate_first_name(obj, people, session)
    validate_place_kings(obj, people, session)


def validate_people_update(obj, people, session):
    validate_first_name(obj, people, session)
    validate_place_kings_updated(obj, people, session)
# ==== common.py | unvercanunlu/pytorch-activation-functions-comparison | MIT | 946db5dc78cd7c3e0a9ec0c544a89ac0c37efa67 ====
import os
import matplotlib.pyplot as graph
import numpy as np
import torch
def train(model, device, loader, optimizer, loss, one_hot_encoded=False, info_per_batch=10):
    model.train()

    number_of_batches = len(loader)

    batch_losses = []
    batch_accuracies = []

    for batch_index, (batch_input, batch_target) in enumerate(loader):
        batch_input, batch_target = batch_input.to(device), batch_target.to(device)

        optimizer.zero_grad()

        batch_output = model(batch_input)

        if one_hot_encoded:
            batch_target_one_hot_encoded = torch.nn.functional.one_hot(batch_target, 10).float()
            loss_calculation = loss(batch_output, batch_target_one_hot_encoded)
        else:
            loss_calculation = loss(batch_output, batch_target)

        loss_calculation.backward()

        optimizer.step()

        batch_loss = loss_calculation.item()
        batch_losses.append(batch_loss)

        batch_prediction = batch_output.max(dim=1, keepdim=True)[1]
        batch_correct = batch_prediction.eq(batch_target.view_as(batch_prediction)).sum().item()
        batch_size = len(batch_input)
        batch_accuracy = batch_correct / batch_size
        batch_accuracies.append(batch_accuracy)

        if (batch_index + 1) % info_per_batch == 0:
            info = 'Train: Batch {current_batch}/{number_of_batches}, Loss: {batch_loss:.5f}, Accuracy: % {batch_accuracy:.2f}'
            print(info.format(current_batch=(batch_index + 1), number_of_batches=number_of_batches,
                              batch_loss=batch_loss, batch_accuracy=(100 * batch_accuracy)))

    average_loss = sum(batch_losses) / number_of_batches
    accuracy = sum(batch_accuracies) / number_of_batches

    return average_loss, accuracy
def test(model, device, loader, loss, one_hot_encoded=False, info_name='Test', info_per_batch=10):
    model.eval()

    number_of_batches = len(loader)

    batch_losses = []
    batch_accuracies = []

    with torch.no_grad():
        for batch_index, (batch_input, batch_target) in enumerate(loader):
            batch_input, batch_target = batch_input.to(device), batch_target.to(device)

            batch_output = model(batch_input)

            if one_hot_encoded:
                batch_target_one_hot_encoded = torch.nn.functional.one_hot(batch_target, 10).float()
                loss_calculation = loss(batch_output, batch_target_one_hot_encoded)
            else:
                loss_calculation = loss(batch_output, batch_target)

            batch_loss = loss_calculation.item()
            batch_losses.append(batch_loss)

            batch_prediction = batch_output.max(dim=1, keepdim=True)[1]
            batch_correct = batch_prediction.eq(batch_target.view_as(batch_prediction)).sum().item()
            batch_size = len(batch_input)
            batch_accuracy = batch_correct / batch_size
            batch_accuracies.append(batch_accuracy)

            if (batch_index + 1) % info_per_batch == 0:
                info = '{info_name}: Batch {current_batch}/{number_of_batches}, Loss: {batch_loss:.5f}, Accuracy: % {batch_accuracy:.2f}'
                print(info.format(current_batch=(batch_index + 1), number_of_batches=number_of_batches,
                                  batch_loss=batch_loss, batch_accuracy=(100 * batch_accuracy), info_name=info_name))

    average_loss = sum(batch_losses) / number_of_batches
    accuracy = sum(batch_accuracies) / number_of_batches

    return average_loss, accuracy
def save_state(model, directory, file_name):
    file_path = os.path.join(directory, file_name)

    state = model.state_dict()
    torch.save(obj=state, f=file_path)

    info = 'File: {file_name} is saved.'
    print(info.format(file_name=file_name))
def save_data(array, directory, file_name):
    file_path = os.path.join(directory, file_name)

    np.save(file=file_path, arr=array)

    info = 'File: {file_name} is saved.'
    print(info.format(file_name=file_name))
def load_data(directory, file_name):
    file_path = os.path.join(directory, file_name)

    array = []

    if os.path.exists(file_path):
        array = np.load(file_path)
        info = 'File: {file_name} is loaded.'
        print(info.format(file_name=file_name))
    else:
        info = 'File: {file_name} does not exist.'
        print(info.format(file_name=file_name))

    return array
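The save_data/load_data pair above can be exercised end to end; this sketch uses a temporary directory (and a hypothetical file name) so no real dataset files are touched:

```python
import os
import tempfile
import numpy as np

with tempfile.TemporaryDirectory() as directory:
    file_name = 'losses.npy'  # hypothetical example file name
    path = os.path.join(directory, file_name)
    np.save(file=path, arr=np.array([0.5, 0.25]))  # what save_data does internally
    restored = np.load(path)                       # what load_data does internally
```

np.save writes the exact float64 values, so the round trip is bit-for-bit lossless.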
def draw_multi_lines_graph(lines, x_label, y_label, title, directory=None, file_name=None):
    graph.clf()

    labels = []

    for line in lines:
        label = line['label']
        labels.append(label)

        x = line['data']['x']
        y = line['data']['y']

        graph.xticks(x)
        graph.plot(x, y)

    graph.xlabel(xlabel=x_label)
    graph.ylabel(ylabel=y_label)
    graph.title(label=title)
    graph.legend(labels)

    if directory is not None:
        if file_name is None:
            file_name = '_'.join([word.lower() for word in title.split()]) + '.png'

        file_path = os.path.join(directory, file_name)
        graph.savefig(file_path)

        info = 'File: {file_name} is saved.'
        print(info.format(file_name=file_name))
    else:
        graph.show()
def draw_line_graph(x, y, x_label, y_label, title, directory=None, file_name=None):
    graph.clf()

    graph.xticks(x)
    graph.plot(x, y)

    graph.xlabel(xlabel=x_label)
    graph.ylabel(ylabel=y_label)
    graph.title(label=title)

    if directory is not None:
        if file_name is None:
            file_name = '_'.join([word.lower() for word in title.split()]) + '.png'

        file_path = os.path.join(directory, file_name)
        graph.savefig(file_path)

        info = 'File: {file_name} is saved.'
        print(info.format(file_name=file_name))
    else:
        graph.show()
# ==== gui/__init__.py | alexsmith2910/Strat_UN | MIT, Unlicense | 848a7b1382e0f574d86b77ba5360aa12d3d3af0b ====
from .research_elements import elements as research_elements
84c9c83b61be158da40649176d2623a19c0d3e00 | 25 | py | Python | ClickReaction/__init__.py | Gillingham-Lab/Click | 66a742d3fe035e611ef891023a390a030bfd0729 | [
"MIT"
] | 1 | 2020-05-23T06:25:14.000Z | 2020-05-23T06:25:14.000Z | ClickReaction/__init__.py | Gillingham-Lab/Click | 66a742d3fe035e611ef891023a390a030bfd0729 | [
"MIT"
] | null | null | null | ClickReaction/__init__.py | Gillingham-Lab/Click | 66a742d3fe035e611ef891023a390a030bfd0729 | [
"MIT"
] | 1 | 2021-02-22T06:02:50.000Z | 2021-02-22T06:02:50.000Z | from .Reactions import *
# ==== Country_CovidTracker.py | harshagl2002/COVID_CLI | Apache-2.0 | ca2b21fbab5765b773051c683b71f47ac7e4c02d ====
import requests
import json
import datetime
def country():
    url = "https://covid-193.p.rapidapi.com/history"
    country = input("Enter the country you would like to search for: ")
    # Re-prompt until the input parses as a valid yyyy-mm-dd calendar date.
    # The split is inside the try block so a malformed string (no dashes)
    # raises ValueError here instead of crashing before validation.
    while True:
        date = input("Enter the date (yyyy-mm-dd) you would like to search for: ")
        try:
            year, month, day = date.split('-')
            datetime.datetime(int(year), int(month), int(day))
            break
        except ValueError:
            print("You have entered an invalid date. Kindly enter a valid date")
    querystring = {"country": country, "day": date}
    headers = {
        'x-rapidapi-key': "574d25f133msh5c58c65e8a4c944p1e6b8fjsnee1da55d91cd",
        'x-rapidapi-host': "covid-193.p.rapidapi.com"
    }
    response = requests.request("GET", url, headers=headers, params=querystring)
    parsed = json.loads(response.text)
    if parsed["results"] == 0:
        print("The requested data is currently not available. Sorry")
    else:
        response_dict = parsed["response"][0]
        cases_dict = response_dict["cases"]
        print()
        print("NEW CASES in", parsed["parameters"]["country"], "on", response_dict["day"], "is", cases_dict["new"])
        print("TOTAL number of cases in", parsed["parameters"]["country"], "till", response_dict["day"], "is", cases_dict["total"])
        print("ACTIVE CASES in", parsed["parameters"]["country"], "on", response_dict["day"], "is", cases_dict["active"])
        print("Number of DEATHS recorded in", parsed["parameters"]["country"], "on", response_dict["day"], "is", response_dict["deaths"]["new"])
        print("Number of RECOVERIES recorded in", parsed["parameters"]["country"], "till", response_dict["day"], "is", cases_dict["recovered"])
        print("Total number of TESTS conducted in", parsed["parameters"]["country"], "till", response_dict["day"], "is", response_dict["tests"]["total"])
| 50.8 | 157 | 0.616704 | 412 | 3,556 | 5.245146 | 0.211165 | 0.11106 | 0.099954 | 0.138825 | 0.869505 | 0.830634 | 0.819991 | 0.819991 | 0.819991 | 0.819991 | 0 | 0.023364 | 0.21766 | 3,556 | 69 | 158 | 51.536232 | 0.753415 | 0 | 0 | 0.649123 | 0 | 0 | 0.369516 | 0.04162 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017544 | false | 0 | 0.052632 | 0 | 0.070175 | 0.298246 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ca5a992a2c87aeb651726b530b0f6d74463d4055 | 25 | py | Python | v3/as_drivers/htu21d/__init__.py | Dilepa/micropython-async | 3c8817d9ead33bcd8399d0935ffb24dd7bcd6e71 | [
"MIT"
] | 443 | 2017-01-01T20:54:46.000Z | 2022-03-28T06:17:30.000Z | v3/as_drivers/htu21d/__init__.py | Dilepa/micropython-async | 3c8817d9ead33bcd8399d0935ffb24dd7bcd6e71 | [
"MIT"
] | 79 | 2017-01-28T17:53:32.000Z | 2022-02-08T10:05:04.000Z | v3/as_drivers/htu21d/__init__.py | Dilepa/micropython-async | 3c8817d9ead33bcd8399d0935ffb24dd7bcd6e71 | [
"MIT"
] | 126 | 2017-02-17T13:06:01.000Z | 2022-03-07T03:50:50.000Z | from .htu21d_mc import *
| 12.5 | 24 | 0.76 | 4 | 25 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 0.16 | 25 | 1 | 25 | 25 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca6dfebdb588d759e235c1d3467133a084b992fe | 3,767 | py | Python | Examples/AdvancedUsage/AddAnnotations/AddPolylineAnnotation.py | groupdocs-annotation-cloud/groupdocs-annotation-cloud-python-samples | 5fb6c88d0e173198753d8483ea0a75606479fa41 | [
"MIT"
] | null | null | null | Examples/AdvancedUsage/AddAnnotations/AddPolylineAnnotation.py | groupdocs-annotation-cloud/groupdocs-annotation-cloud-python-samples | 5fb6c88d0e173198753d8483ea0a75606479fa41 | [
"MIT"
] | null | null | null | Examples/AdvancedUsage/AddAnnotations/AddPolylineAnnotation.py | groupdocs-annotation-cloud/groupdocs-annotation-cloud-python-samples | 5fb6c88d0e173198753d8483ea0a75606479fa41 | [
"MIT"
] | 2 | 2019-07-08T12:50:55.000Z | 2019-07-08T13:21:54.000Z | # Import modules
from groupdocs_annotation_cloud import *
import groupdocs_annotation_cloud
from Common import Common
class AddPolylineAnnotation:
@classmethod
def Run(cls):
# Create instance of the API
api = groupdocs_annotation_cloud.AnnotateApi.from_config(Common.GetConfig())
try:
a1 = groupdocs_annotation_cloud.AnnotationInfo()
a1.box = groupdocs_annotation_cloud.Rectangle()
a1.box.x = 100
a1.box.y = 100
a1.box.width = 200
a1.box.height = 100
a1.page_number = 0
a1.pen_color = 1201033
a1.pen_style = "Solid"
a1.pen_width = 1
a1.opacity = 0.7
a1.type = "Polyline"
a1.text = "This is polyline annotation"
a1.creator_name = "Anonym A."
            a1.svgPath = "M250.8280751173709,48.209295774647885l0.6986854460093896,0l0.6986854460093896,-1.3973708920187793l0.6986854460093896,0l0.6986854460093896,-1.3973708920187793l1.3973708920187793,-0.6986854460093896l0.6986854460093896,-0.6986854460093896l0.6986854460093896,0l2.096056338028169,-1.3973708920187793l3.493427230046948,-1.3973708920187793l0.6986854460093896,-0.6986854460093896l1.3973708920187793,-1.3973708920187793l0.6986854460093896,0l1.3973708920187793,-0.6986854460093896l0.6986854460093896,0l0.6986854460093896,-0.6986854460093896l0.6986854460093896,0l0.6986854460093896,0l0,-0.6986854460093896l0.6986854460093896,0l0.6986854460093896,0l1.3973708920187793,0l0,-0.6986854460093896l0.6986854460093896,0l1.3973708920187793,0l0.6986854460093896,0l1.3973708920187793,0l0.6986854460093896,0l2.096056338028169,-0.6986854460093896l1.3973708920187793,0l0.6986854460093896,0l0.6986854460093896,0l1.3973708920187793,0l1.3973708920187793,0l1.3973708920187793,0l2.096056338028169,0l5.589483568075117,0l1.3973708920187793,0l2.096056338028169,0l0.6986854460093896,0l1.3973708920187793,0l0.6986854460093896,0l1.3973708920187793,0l1.3973708920187793,0l0.6986854460093896,0.6986854460093896l1.3973708920187793,0l2.096056338028169,1.3973708920187793l0.6986854460093896,0l0.6986854460093896,0l0,0.6986854460093896l1.3973708920187793,0l0.6986854460093896,0.6986854460093896l1.3973708920187793,0.6986854460093896l0,0.6986854460093896l0.6986854460093896,0l1.3973708920187793,0.6986854460093896l1.3973708920187793,0.6986854460093896l3.493427230046948,0.6986854460093896l1.3973708920187793,0.6986854460093896l2.096056338028169,0.6986854460093896l1.3973708920187793,0.6986854460093896l1.3973708920187793,0l1.3973708920187793,0.6986854460093896l0.6986854460093896,0l0.6986854460093896,0.6986854460093896l1.3973708920187793,0l0.6986854460093896,0l0.6986854460093896,0l2.7947417840375586,0l1.3973708920187793,0l0.6986854460093896,0l1.3973708920187793,0l0.6986854460093896,0l0.6986854460093896,0l1.3973708920187793,0l0.6986854460093896,0l2.7947417840375586,0l0.6986854460093896,0l2.7947417840375586,0l1.3973708920187793,0l0.6986854460093896,0l0.6986854460093896,0l0.6986854460093896,0l0.6986854460093896,0l0.6986854460093896,0l0.6986854460093896,0l0.6986854460093896,-0.6986854460093896l0.6986854460093896,0"
file_info = FileInfo()
file_info.file_path = "annotationdocs\\one-page.docx"
options = AnnotateOptions()
options.file_info = file_info
options.annotations = [a1]
options.output_path = "Output\\output.docx"
request = AnnotateRequest(options)
result = api.annotate(request)
print("AddPolylineAnnotation: Polyline Annotation added: " + result['href'])
except ApiException as e:
print("Exception when calling AnnotateAPI: {0}".format(e.message)) | 87.604651 | 2,304 | 0.774887 | 355 | 3,767 | 8.160563 | 0.284507 | 0.190197 | 0.193303 | 0.118053 | 0.469451 | 0.385226 | 0.284777 | 0.182258 | 0.04591 | 0.04591 | 0 | 0.617981 | 0.134855 | 3,767 | 43 | 2,305 | 87.604651 | 0.270942 | 0.010884 | 0 | 0 | 0 | 0.029412 | 0.66246 | 0.625134 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.088235 | 0 | 0.147059 | 0.058824 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ca9bc15f5ad721478b272f0e09caa257710adfc8 | 185 | py | Python | src/enum/token_error.py | quadrixm/ya | 621f7c12f0bfdcca49068177cfa6e0025f3a3bae | [
"MIT"
] | 22 | 2019-01-26T15:52:24.000Z | 2021-11-11T22:24:21.000Z | src/enum/token_error.py | quadrixm/ya | 621f7c12f0bfdcca49068177cfa6e0025f3a3bae | [
"MIT"
] | 1 | 2018-07-31T05:39:19.000Z | 2018-07-31T05:39:19.000Z | src/enum/token_error.py | quadrixm/ya | 621f7c12f0bfdcca49068177cfa6e0025f3a3bae | [
"MIT"
] | 1 | 2018-07-31T05:30:02.000Z | 2018-07-31T05:30:02.000Z | from enum import Enum
class TokenError(Enum):
INCOMPLETE_STRING = "INCOMPLETE_STRING هناك مشكلة"
INVALID_TOKEN = "هناك مشكلة INVALID_TOKEN"
DEFAULT = "هناك مشكلة DEFAULT"
| 23.125 | 54 | 0.745946 | 23 | 185 | 5.826087 | 0.521739 | 0.201493 | 0.238806 | 0.313433 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189189 | 185 | 7 | 55 | 26.428571 | 0.893333 | 0 | 0 | 0 | 0 | 0 | 0.378378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
047fe1384b12faefc1fba734a8dd7672fd7f61b1 | 36 | py | Python | pspdfkit/__init__.py | r-kells/py-pspdfkit | f32582f5907c8c5f59d294abc6de68523b4ba1da | [
"MIT"
] | null | null | null | pspdfkit/__init__.py | r-kells/py-pspdfkit | f32582f5907c8c5f59d294abc6de68523b4ba1da | [
"MIT"
] | 4 | 2018-05-24T12:54:01.000Z | 2020-07-24T16:26:30.000Z | pspdfkit/__init__.py | r-kells/py-pspdfkit | f32582f5907c8c5f59d294abc6de68523b4ba1da | [
"MIT"
] | 1 | 2020-07-23T14:19:49.000Z | 2020-07-23T14:19:49.000Z | # flake8: noqa
from .api import API
| 12 | 20 | 0.722222 | 6 | 36 | 4.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.194444 | 36 | 2 | 21 | 18 | 0.862069 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0486d0f3fd00a3d9009afda4648c4c5729613344 | 190 | py | Python | pantaucovid/pantau_covid/doctype/pasien/pasien.py | iboen/frappe-pantaucovid | 38f5272c438dff58d5a98c817cb3869a568a67dc | [
"MIT"
] | null | null | null | pantaucovid/pantau_covid/doctype/pasien/pasien.py | iboen/frappe-pantaucovid | 38f5272c438dff58d5a98c817cb3869a568a67dc | [
"MIT"
] | null | null | null | pantaucovid/pantau_covid/doctype/pasien/pasien.py | iboen/frappe-pantaucovid | 38f5272c438dff58d5a98c817cb3869a568a67dc | [
"MIT"
] | null | null | null | # Copyright (c) 2021, Sinawardi and contributors
# For license information, please see license.txt
# import frappe
from frappe.model.document import Document
class Pasien(Document):
pass
| 21.111111 | 49 | 0.789474 | 25 | 190 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02454 | 0.142105 | 190 | 8 | 50 | 23.75 | 0.895706 | 0.568421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
049c1c9632ef95bab381373fb4a901acefd9d2ef | 1,325 | py | Python | terrascript/teamcity/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/teamcity/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/teamcity/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/teamcity/r.py
# Automatically generated by tools/makecode.py ()
import warnings
warnings.warn(
"using the 'legacy layout' is deprecated", DeprecationWarning, stacklevel=2
)
import terrascript
class teamcity_agent_pool(terrascript.Resource):
pass
class teamcity_agent_pool_project_assignment(terrascript.Resource):
pass
class teamcity_agent_requirement(terrascript.Resource):
pass
class teamcity_artifact_dependency(terrascript.Resource):
pass
class teamcity_build_config(terrascript.Resource):
pass
class teamcity_build_trigger_build_finish(terrascript.Resource):
pass
class teamcity_build_trigger_schedule(terrascript.Resource):
pass
class teamcity_build_trigger_vcs(terrascript.Resource):
pass
class teamcity_feature_commit_status_publisher(terrascript.Resource):
pass
class teamcity_feature_golang(terrascript.Resource):
pass
class teamcity_group(terrascript.Resource):
pass
class teamcity_project(terrascript.Resource):
pass
class teamcity_project_feature_oauth_provider_settings(terrascript.Resource):
pass
class teamcity_project_feature_versioned_settings(terrascript.Resource):
pass
class teamcity_snapshot_dependency(terrascript.Resource):
pass
class teamcity_vcs_root_git(terrascript.Resource):
pass
| 17.905405 | 79 | 0.807547 | 149 | 1,325 | 6.885906 | 0.33557 | 0.202729 | 0.358674 | 0.409357 | 0.658869 | 0.588694 | 0.237817 | 0 | 0 | 0 | 0 | 0.000867 | 0.129811 | 1,325 | 73 | 80 | 18.150685 | 0.888985 | 0.055094 | 0 | 0.432432 | 1 | 0 | 0.031225 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.432432 | 0.054054 | 0 | 0.486486 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
b6c40ad2df639079a5f8321237ccc7e542c8c343 | 34 | py | Python | qleet/examples/__init__.py | AnimeshSinha1309/qaoa-optimizer | 2a93a46bacc99f22f49e7b5121eb3aa9f12c0163 | [
"Apache-2.0"
] | 9 | 2021-09-26T18:43:43.000Z | 2022-03-30T12:34:01.000Z | qleet/examples/__init__.py | QLemma/qLEET | 2a93a46bacc99f22f49e7b5121eb3aa9f12c0163 | [
"Apache-2.0"
] | 12 | 2021-09-19T13:29:33.000Z | 2022-01-09T15:22:49.000Z | qleet/examples/__init__.py | QLemma/qLEET | 2a93a46bacc99f22f49e7b5121eb3aa9f12c0163 | [
"Apache-2.0"
] | 1 | 2022-03-14T03:02:24.000Z | 2022-03-14T03:02:24.000Z | import qleet.examples.qaoa_maxcut
| 17 | 33 | 0.882353 | 5 | 34 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 34 | 1 | 34 | 34 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b6d4005295625e516d262c7ebcc35a8cd0698e98 | 174 | py | Python | pdover2t/__init__.py | qwilka/PDover2t | 4387d153228f1af20a8f5f3f368aa49c42cda2cd | [
"MIT"
] | null | null | null | pdover2t/__init__.py | qwilka/PDover2t | 4387d153228f1af20a8f5f3f368aa49c42cda2cd | [
"MIT"
] | null | null | null | pdover2t/__init__.py | qwilka/PDover2t | 4387d153228f1af20a8f5f3f368aa49c42cda2cd | [
"MIT"
] | 1 | 2019-11-24T09:32:12.000Z | 2019-11-24T09:32:12.000Z | """`pdover2t` computational subsea pipeline engineering.
"""
from . import utilities
from . import pipe
from .utilities.helpers import symbol, greek
from . import dnvstf101
| 21.75 | 56 | 0.775862 | 20 | 174 | 6.75 | 0.65 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026667 | 0.137931 | 174 | 7 | 57 | 24.857143 | 0.873333 | 0.304598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b6df9eecf06f632003c5c0dbf4d593fceff47300 | 159 | py | Python | stepboard/__init__.py | Stepujacy/stepboard | ffa079792fc4b133bb44f33e0408159da5692f6e | [
"MIT"
] | null | null | null | stepboard/__init__.py | Stepujacy/stepboard | ffa079792fc4b133bb44f33e0408159da5692f6e | [
"MIT"
] | null | null | null | stepboard/__init__.py | Stepujacy/stepboard | ffa079792fc4b133bb44f33e0408159da5692f6e | [
"MIT"
] | null | null | null | from .user import *
from .guilds import *
from .config import *
from .applications import *
from .message import *
from .webhooks import *
from .roles import * | 22.714286 | 27 | 0.742138 | 21 | 159 | 5.619048 | 0.428571 | 0.508475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169811 | 159 | 7 | 28 | 22.714286 | 0.893939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8e0bedaa0875e2b9b04b63ccf9cd63d69486742b | 240 | py | Python | food_delivery/services.py | clemencegoh/machine_learning_service | 49ccb65dd8cca544bed801559b920cd7bea2d120 | [
"MIT"
] | null | null | null | food_delivery/services.py | clemencegoh/machine_learning_service | 49ccb65dd8cca544bed801559b920cd7bea2d120 | [
"MIT"
] | null | null | null | food_delivery/services.py | clemencegoh/machine_learning_service | 49ccb65dd8cca544bed801559b920cd7bea2d120 | [
"MIT"
] | null | null | null | from .models import Restaurant
from django.db import models
def get_restaurants() -> models.QuerySet:
return Restaurant.objects.all()
def get_restaurant_by_id(_id: int) -> Restaurant:
    return Restaurant.objects.get(id=_id)
| 20 | 54 | 0.758333 | 33 | 240 | 5.333333 | 0.484848 | 0.068182 | 0.227273 | 0.340909 | 0.420455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141667 | 240 | 11 | 55 | 21.818182 | 0.854369 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
8e458c6de01d549a9b6d764c8868b01145f69958 | 846 | py | Python | modules/lib/webpage/__init__.py | yansinan/pycameresp | e239b4db110bffeb6bbdae6930d2b78562d21e35 | [
"MIT"
] | 28 | 2021-01-19T10:53:20.000Z | 2022-03-24T13:57:09.000Z | modules/lib/webpage/__init__.py | yansinan/pycameresp | e239b4db110bffeb6bbdae6930d2b78562d21e35 | [
"MIT"
] | 5 | 2021-02-28T23:00:23.000Z | 2022-03-30T07:36:21.000Z | modules/lib/webpage/__init__.py | yansinan/pycameresp | e239b4db110bffeb6bbdae6930d2b78562d21e35 | [
"MIT"
] | 9 | 2021-02-28T23:01:37.000Z | 2022-03-24T13:57:18.000Z | # Distributed under MIT License
# Copyright (c) 2021 Remi BERTHOLET
""" All web pages defined here """
from webpage.passwordpage import *
from webpage.mainpage import *
from webpage.changepasswordpage import *
from webpage.infopage import *
from webpage.pushoverpage import *
from webpage.serverpage import *
from webpage.wifipage import *
from webpage.regionpage import *
from webpage.presencepage import *
from webpage.batterypage import *
from webpage.awakepage import *
from webpage.systempage import *
from tools.useful import iscamera
if iscamera():
# pylint:disable=ungrouped-imports
from webpage.streamingpage import *
from webpage.camerapage import *
from webpage.historicpage import *
from webpage.motionpage import *
| 36.782609 | 47 | 0.695035 | 89 | 846 | 6.606742 | 0.460674 | 0.29932 | 0.404762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00627 | 0.245863 | 846 | 22 | 48 | 38.454545 | 0.915361 | 0.147754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.111111 | 0.944444 | 0 | 0.944444 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6d2d259dcb73dd3abf099afbfc9cd9827793fc1d | 14,437 | py | Python | octavia_f5/tests/unit/api/drivers/f5_provider_driver/test_f5_driver.py | sungwon-ahn/octavia-f5-provider-driver | ab99ed806b5249c1f774aa6f807f778dfb2051fa | [
"Apache-2.0"
] | 15 | 2020-01-23T16:06:52.000Z | 2022-02-16T08:44:35.000Z | octavia_f5/tests/unit/api/drivers/f5_provider_driver/test_f5_driver.py | sungwon-ahn/octavia-f5-provider-driver | ab99ed806b5249c1f774aa6f807f778dfb2051fa | [
"Apache-2.0"
] | 88 | 2019-12-09T11:14:40.000Z | 2022-02-28T11:51:58.000Z | octavia_f5/tests/unit/api/drivers/f5_provider_driver/test_f5_driver.py | sungwon-ahn/octavia-f5-provider-driver | ab99ed806b5249c1f774aa6f807f778dfb2051fa | [
"Apache-2.0"
] | 2 | 2020-03-23T16:21:54.000Z | 2022-02-24T15:13:32.000Z | # Copyright 2020 SAP SE
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import cfg
from oslo_config import fixture as oslo_fixture
from octavia.common import constants as consts
from octavia.tests.unit import base
from octavia.tests.common import sample_data_models
from octavia_f5.api.drivers.f5_driver import driver
from octavia_lib.api.drivers import data_models as driver_dm
class TestF5Driver(base.TestRpc):
def setUp(self):
super(TestF5Driver, self).setUp()
conf = self.useFixture(oslo_fixture.Config(cfg.CONF))
self.patches = [
mock.patch('octavia.db.repositories.AmphoraRepository.get'),
mock.patch('octavia.db.api.get_session')
]
conf.config(group="oslo_messaging", topic='foo_topic')
conf.config(group="controller_worker", network_driver='network_noop_driver_f5')
self.amp_driver = driver.F5ProviderDriver()
self.sample_data = sample_data_models.SampleDriverDataModels()
for patch in self.patches:
patch.start()
def tearDown(self):
super(TestF5Driver, self).tearDown()
for patch in self.patches:
patch.stop()
# Load Balancer
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_loadbalancer_create(self, mock_cast):
provider_lb = driver_dm.LoadBalancer(
loadbalancer_id=self.sample_data.lb_id)
self.amp_driver.loadbalancer_create(provider_lb)
payload = {consts.LOAD_BALANCER_ID: self.sample_data.lb_id,
consts.FLAVOR: None}
mock_cast.assert_called_with({}, 'create_load_balancer', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_loadbalancer_delete(self, mock_cast):
provider_lb = driver_dm.LoadBalancer(
loadbalancer_id=self.sample_data.lb_id)
self.amp_driver.loadbalancer_delete(provider_lb)
payload = {consts.LOAD_BALANCER_ID: self.sample_data.lb_id,
'cascade': False}
mock_cast.assert_called_with({}, 'delete_load_balancer', **payload)
# Listener
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_listener_create(self, mock_cast):
provider_listener = driver_dm.Listener(
listener_id=self.sample_data.listener1_id)
self.amp_driver.listener_create(provider_listener)
payload = {consts.LISTENER_ID: self.sample_data.listener1_id}
mock_cast.assert_called_with({}, 'create_listener', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_listener_delete(self, mock_cast):
provider_listener = driver_dm.Listener(
listener_id=self.sample_data.listener1_id)
self.amp_driver.listener_delete(provider_listener)
payload = {consts.LISTENER_ID: self.sample_data.listener1_id}
mock_cast.assert_called_with({}, 'delete_listener', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_listener_update(self, mock_cast):
old_provider_listener = driver_dm.Listener(
listener_id=self.sample_data.listener1_id)
provider_listener = driver_dm.Listener(
listener_id=self.sample_data.listener1_id, admin_state_up=False)
self.amp_driver.listener_update(old_provider_listener,
provider_listener)
payload = {consts.LISTENER_ID: self.sample_data.listener1_id,
consts.LISTENER_UPDATES: {}}
mock_cast.assert_called_with({}, 'update_listener', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_listener_update_name(self, mock_cast):
old_provider_listener = driver_dm.Listener(
listener_id=self.sample_data.listener1_id)
provider_listener = driver_dm.Listener(
listener_id=self.sample_data.listener1_id, name='Great Listener')
self.amp_driver.listener_update(old_provider_listener,
provider_listener)
payload = {consts.LISTENER_ID: self.sample_data.listener1_id,
consts.LISTENER_UPDATES: {}}
mock_cast.assert_called_with({}, 'update_listener', **payload)
# Pool
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_pool_create(self, mock_cast):
provider_pool = driver_dm.Pool(
pool_id=self.sample_data.pool1_id)
self.amp_driver.pool_create(provider_pool)
payload = {consts.POOL_ID: self.sample_data.pool1_id}
mock_cast.assert_called_with({}, 'create_pool', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_pool_delete(self, mock_cast):
provider_pool = driver_dm.Pool(
pool_id=self.sample_data.pool1_id)
self.amp_driver.pool_delete(provider_pool)
payload = {consts.POOL_ID: self.sample_data.pool1_id}
mock_cast.assert_called_with({}, 'delete_pool', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_pool_update(self, mock_cast):
old_provider_pool = driver_dm.Pool(
pool_id=self.sample_data.pool1_id)
provider_pool = driver_dm.Pool(
pool_id=self.sample_data.pool1_id, admin_state_up=True)
self.amp_driver.pool_update(old_provider_pool, provider_pool)
payload = {consts.POOL_ID: self.sample_data.pool1_id,
consts.POOL_UPDATES: {}}
mock_cast.assert_called_with({}, 'update_pool', **payload)
# Member
@mock.patch('octavia.db.repositories.PoolRepository.get')
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_member_create(self, mock_cast, mock_pool_get):
provider_member = driver_dm.Member(
member_id=self.sample_data.member1_id)
self.amp_driver.member_create(provider_member)
payload = {consts.MEMBER_ID: self.sample_data.member1_id}
mock_cast.assert_called_with({}, 'create_member', **payload)
@mock.patch('octavia.db.repositories.PoolRepository.get')
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_member_create_udp_ipv4(self, mock_cast, mock_pool_get):
mock_lb = mock.MagicMock()
mock_lb.vip = mock.MagicMock()
mock_lb.vip.ip_address = "192.0.1.1"
mock_listener = mock.MagicMock()
mock_listener.load_balancer = mock_lb
mock_pool = mock.MagicMock()
mock_pool.protocol = consts.PROTOCOL_UDP
mock_pool.listeners = [mock_listener]
mock_pool_get.return_value = mock_pool
provider_member = driver_dm.Member(
member_id=self.sample_data.member1_id,
address="192.0.2.1")
self.amp_driver.member_create(provider_member)
payload = {consts.MEMBER_ID: self.sample_data.member1_id}
mock_cast.assert_called_with({}, 'create_member', **payload)
@mock.patch('octavia.db.repositories.PoolRepository.get')
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_member_create_udp_ipv4_ipv6(self, mock_cast, mock_pool_get):
mock_lb = mock.MagicMock()
mock_lb.vip = mock.MagicMock()
mock_lb.vip.ip_address = "fe80::1"
mock_listener = mock.MagicMock()
mock_listener.load_balancer = mock_lb
mock_pool = mock.MagicMock()
mock_pool.protocol = consts.PROTOCOL_UDP
mock_pool.listeners = [mock_listener]
mock_pool_get.return_value = mock_pool
provider_member = driver_dm.Member(
member_id=self.sample_data.member1_id,
address="192.0.2.1")
self.amp_driver.member_create(provider_member)
payload = {consts.MEMBER_ID: self.sample_data.member1_id}
mock_cast.assert_called_with({}, 'create_member', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_member_delete(self, mock_cast):
provider_member = driver_dm.Member(
member_id=self.sample_data.member1_id)
self.amp_driver.member_delete(provider_member)
payload = {consts.MEMBER_ID: self.sample_data.member1_id}
mock_cast.assert_called_with({}, 'delete_member', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_member_update(self, mock_cast):
old_provider_member = driver_dm.Member(
member_id=self.sample_data.member1_id)
provider_member = driver_dm.Member(
member_id=self.sample_data.member1_id, admin_state_up=True)
self.amp_driver.member_update(old_provider_member, provider_member)
payload = {consts.MEMBER_ID: self.sample_data.member1_id,
consts.MEMBER_UPDATES: {}}
mock_cast.assert_called_with({}, 'update_member', **payload)
# L7 Policy
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_l7policy_create(self, mock_cast):
provider_l7policy = driver_dm.L7Policy(
l7policy_id=self.sample_data.l7policy1_id)
self.amp_driver.l7policy_create(provider_l7policy)
payload = {consts.L7POLICY_ID: self.sample_data.l7policy1_id}
mock_cast.assert_called_with({}, 'create_l7policy', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_l7policy_delete(self, mock_cast):
provider_l7policy = driver_dm.L7Policy(
l7policy_id=self.sample_data.l7policy1_id)
self.amp_driver.l7policy_delete(provider_l7policy)
payload = {consts.L7POLICY_ID: self.sample_data.l7policy1_id}
mock_cast.assert_called_with({}, 'delete_l7policy', **payload)
@mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
def test_l7policy_update(self, mock_cast):
old_provider_l7policy = driver_dm.L7Policy(
            l7policy_id=self.sample_data.l7policy1_id)
        provider_l7policy = driver_dm.L7Policy(
            l7policy_id=self.sample_data.l7policy1_id, admin_state_up=True)
        self.amp_driver.l7policy_update(old_provider_l7policy,
                                        provider_l7policy)
        payload = {consts.L7POLICY_ID: self.sample_data.l7policy1_id,
                   consts.L7POLICY_UPDATES: {}}
        mock_cast.assert_called_with({}, 'update_l7policy', **payload)

    # Health Monitor
    @mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
    def test_health_monitor_create(self, mock_cast):
        provider_HM = driver_dm.HealthMonitor(
            healthmonitor_id=self.sample_data.hm1_id)
        self.amp_driver.health_monitor_create(provider_HM)
        payload = {consts.HEALTH_MONITOR_ID: self.sample_data.hm1_id}
        mock_cast.assert_called_with({}, 'create_health_monitor', **payload)

    @mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
    def test_health_monitor_delete(self, mock_cast):
        provider_HM = driver_dm.HealthMonitor(
            healthmonitor_id=self.sample_data.hm1_id)
        self.amp_driver.health_monitor_delete(provider_HM)
        payload = {consts.HEALTH_MONITOR_ID: self.sample_data.hm1_id}
        mock_cast.assert_called_with({}, 'delete_health_monitor', **payload)

    @mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
    def test_health_monitor_update(self, mock_cast):
        old_provider_hm = driver_dm.HealthMonitor(
            healthmonitor_id=self.sample_data.hm1_id)
        provider_hm = driver_dm.HealthMonitor(
            healthmonitor_id=self.sample_data.hm1_id, admin_state_up=True,
            max_retries=1, max_retries_down=2)
        self.amp_driver.health_monitor_update(old_provider_hm, provider_hm)
        payload = {consts.HEALTH_MONITOR_ID: self.sample_data.hm1_id,
                   consts.HEALTH_MONITOR_UPDATES: {}}
        mock_cast.assert_called_with({}, 'update_health_monitor', **payload)

    # L7 Rules
    @mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
    def test_l7rule_create(self, mock_cast):
        provider_l7rule = driver_dm.L7Rule(
            l7rule_id=self.sample_data.l7rule1_id)
        self.amp_driver.l7rule_create(provider_l7rule)
        payload = {consts.L7RULE_ID: self.sample_data.l7rule1_id}
        mock_cast.assert_called_with({}, 'create_l7rule', **payload)

    @mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
    def test_l7rule_delete(self, mock_cast):
        provider_l7rule = driver_dm.L7Rule(
            l7rule_id=self.sample_data.l7rule1_id)
        self.amp_driver.l7rule_delete(provider_l7rule)
        payload = {consts.L7RULE_ID: self.sample_data.l7rule1_id}
        mock_cast.assert_called_with({}, 'delete_l7rule', **payload)

    @mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
    def test_l7rule_update(self, mock_cast):
        old_provider_l7rule = driver_dm.L7Rule(
            l7rule_id=self.sample_data.l7rule1_id)
        provider_l7rule = driver_dm.L7Rule(
            l7rule_id=self.sample_data.l7rule1_id, admin_state_up=True)
        self.amp_driver.l7rule_update(old_provider_l7rule, provider_l7rule)
        payload = {consts.L7RULE_ID: self.sample_data.l7rule1_id,
                   consts.L7RULE_UPDATES: {}}
        mock_cast.assert_called_with({}, 'update_l7rule', **payload)

    @mock.patch('oslo_messaging.rpc.client._BaseCallContext.cast')
    def test_l7rule_update_invert(self, mock_cast):
        old_provider_l7rule = driver_dm.L7Rule(
            l7rule_id=self.sample_data.l7rule1_id)
        provider_l7rule = driver_dm.L7Rule(
            l7rule_id=self.sample_data.l7rule1_id, invert=True)
        self.amp_driver.l7rule_update(old_provider_l7rule, provider_l7rule)
        payload = {consts.L7RULE_ID: self.sample_data.l7rule1_id,
                   consts.L7RULE_UPDATES: {}}
        mock_cast.assert_called_with({}, 'update_l7rule', **payload)
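Every test in this file follows the same shape: patch the RPC `cast` method, invoke the driver entry point, and assert the cast payload. A minimal standalone sketch of that pattern, using only `unittest.mock` and a hypothetical `FakeDriver` (no Octavia or oslo.messaging imports; the class and payload keys are illustrative, not the real driver API):

```python
from unittest import mock


class FakeDriver:
    """Hypothetical driver that forwards update requests over an RPC client."""

    def __init__(self, client):
        self.client = client

    def l7rule_update(self, old_rule, new_rule):
        # In the real driver the updates dict would be the diff between the
        # old and new objects; an empty diff mirrors the tests above.
        payload = {'l7rule_id': new_rule['id'], 'l7rule_updates': {}}
        self.client.cast({}, 'update_l7rule', **payload)


# Stand-in for the patched oslo.messaging cast context.
client = mock.Mock()
driver = FakeDriver(client)
driver.l7rule_update({'id': 'abc'}, {'id': 'abc'})

# Same assertion style as the tests above.
client.cast.assert_called_with({}, 'update_l7rule',
                               l7rule_id='abc', l7rule_updates={})
```

The assertion passes because `mock.Mock` records every call, so `assert_called_with` can compare both positional and keyword arguments against the expected RPC payload.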
6d3d5cf565a83d8261e89093260221783fd6b910 | 107 | py | Python | onelya_sdk/aeroexpress/__init__.py | tmconsulting/onelya-sdk | eb21398afed916021d74594d094b66e49fdb019c | ["MIT"] | 6 stars (2017-12-16T13:55:51Z to 2020-01-28T01:46:23Z)
from .reservation.requests import (OrderFullCustomerRequest, AeroexpressReservationRequest, ProductRequest)
edae56d63c517fc52225766c98bdf05b0881bcfc | 40 | py | Python | api/bybit/__init__.py | sheungon/fx-connectors | 1eef5d6617a6a9403ddd1903ec56e826e2126832 | ["Apache-2.0"] | 1 star (2021-12-04T18:44:37Z)
from .bybit_service import BybitService
edd936b12a8af287a52efd89fcf998c49d425579 | 85 | py | Python | scripts/__init__.py | HaraldWilhelmi/Baltica | 02ea6388f6917db028d26435fea295c58f19fe0d | ["MIT"]
#!/usr/bin/env python
# -*- coding: utf-8
"""
Created on 11:12 2019-03-14 2019
"""
edf1acdf71d21bac9a7ca4236cfa021aa646122c | 67 | py | Python | src/model/__init__.py | RobertMcCarter/animal-finder | 5ac839a65df62ab312e440ce43416727492e84d8 | ["MIT"]
from .image import *
from .region2d import *
from .size2d import *
edf1d8dd4e788d99ba3567930976c0a2073c520c | 28 | py | Python | scripts/train_helper/__init__.py | AndAgio/Shallow2Deep | e42e9b3b11fdd2ec035144890a88e93a5154276f | ["Apache-2.0"] | 2 issues (2021-02-17T12:07:45Z to 2021-02-17T12:16:21Z)
from .train_helper import *
6100e3975f9f2924d443c531b0eec0d5a6ff2613 | 174 | py | Python | examples/example_3_class_stub.py | CristianSifuentes/OOPPython | be6fe4d4761eabd06d0548bfa6edd67cbe437bf5 | ["MIT"]
class Bike(object):
    def __init__(self):
        pass

    def update_sale_price(self):
        pass

    def sell(self):
        pass

    def service(self):
        pass
6109276f68ce769b163feb97380ca421d11e2a59 | 35 | py | Python | Netra_1.py | YogendraBhati/HacktoberaaFest | cf1e2e36ac0ec2772fe43a4f6f183a9bf4cd9d33 | ["Apache-2.0"] | 1 fork (2021-10-08T22:18:52Z)
print("File 1 for Hacktober 2021")
b67998c075f084046012572318f78aa8f5d48372 | 42 | py | Python | autox/autox_recommend/recall_and_rank/__init__.py | OneToolsCollection/4paradigm-AutoX | f8e838021354de17f5bb9bc44e9d68d12dda6427 | ["Apache-2.0"] | 1 fork (2021-09-18T01:21:31Z)
from .recall_and_rank import RecallAndRank
b6a1d8b9325ae7e6af7dca79406bdf05c0260e78 | 31 | py | Python | codeqaapi/tests/__init__.py | solnsubuga/codeqa-api | e126e4d6bf9a9d588ddcf6b85bf925348a14b66e | ["MIT"] | 9 issues (2020-02-11T23:38:52Z to 2022-02-10T09:03:33Z)
from .base import BaseTestCase
b6b262dfc0da2bcba4d00d61c4a3c84a901d7012 | 134 | py | Python | test_dask_lthops.py | cloudbutton/lithops-dataframe | e8f2259dfd663b7fd84f2fc31548839d695a275f | ["Apache-2.0"] | 1 star (2021-09-18T01:21:31Z)
import dask as d
import dask.array as da

a = da.arange(10, chunks=2).sum()
# b = da.arange(10, chunks=2).mean()
a.compute()
print(a)