hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
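The schema above lists per-file statistics such as `avg_line_length`, `max_line_length`, and `alphanum_fraction` alongside n-gram duplication and whitespace fractions. As a minimal sketch of what a few of these columns plausibly measure — the dataset's actual extraction pipeline is not part of this dump, so the definitions below are assumptions inferred purely from the column names:

```python
def basic_quality_signals(content: str) -> dict:
    """Sketch of three per-file statistics; definitions are assumptions
    inferred from the column names, not the dataset's real pipeline."""
    lines = content.split("\n")
    line_lengths = [len(line) for line in lines]
    # Count alphanumeric characters over the whole file content.
    alphanum = sum(ch.isalnum() for ch in content)
    return {
        "avg_line_length": sum(line_lengths) / len(lines),
        "max_line_length": max(line_lengths),
        "alphanum_fraction": alphanum / len(content) if content else 0.0,
    }

signals = basic_quality_signals("import os\nprint(os.name)\n")
```

Note that even under these assumed definitions, edge cases (e.g. whether a trailing newline contributes an empty final "line" to the average) can shift the values slightly, which may explain small discrepancies against the recorded numbers.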
520241149108ce5350b1779284131244aace564c | 655 | py | Python | stock_order/views.py | gitCincta/StockTool | 2ae604774cd8c271ffc49a4a39fcc412bcaf4577 | [
"Apache-2.0"
] | null | null | null | stock_order/views.py | gitCincta/StockTool | 2ae604774cd8c271ffc49a4a39fcc412bcaf4577 | [
"Apache-2.0"
] | null | null | null | stock_order/views.py | gitCincta/StockTool | 2ae604774cd8c271ffc49a4a39fcc412bcaf4577 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse
from stock_register import controller
import json
# checks if user is logged in and linked to order from encrypted_order_id
# if user is not logged-in, show login field (user_name filled in)
# run order_manager.accept_order, if it returns messages, show them
# else, render order accept page with order details and payment instructions
def accept_order_view(request):
pass
def get_stock_register(person=None, comprime=True):
context = {"transactions": controller.list_stock_register_person()}
return HttpResponse(json.dumps(context), content_type="application/json")
| 36.388889 | 77 | 0.8 | 96 | 655 | 5.3125 | 0.625 | 0.076471 | 0.031373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135878 | 655 | 17 | 78 | 38.529412 | 0.90106 | 0.421374 | 0 | 0 | 0 | 0 | 0.074866 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.111111 | 0.444444 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 4 |
5203a5db5eb855f0ae2e2a5efc5b54666b8315da | 258 | py | Python | 3.3.1/se34euca/se34euca/runtest_utility.py | eucalyptus/se34euca | af5da36754fccca84b7f260ba7605b8fdc30fa55 | [
"BSD-2-Clause"
] | 8 | 2015-01-08T21:06:08.000Z | 2019-10-26T13:17:16.000Z | 3.3.1/se34euca/se34euca/runtest_utility.py | eucalyptus/se34euca | af5da36754fccca84b7f260ba7605b8fdc30fa55 | [
"BSD-2-Clause"
] | null | null | null | 3.3.1/se34euca/se34euca/runtest_utility.py | eucalyptus/se34euca | af5da36754fccca84b7f260ba7605b8fdc30fa55 | [
"BSD-2-Clause"
] | 7 | 2016-08-31T07:02:21.000Z | 2020-07-18T00:10:36.000Z | #!/usr/bin/python
import se34euca
from se34euca.testcase.testcase_utility import testcase_utility
class Utility(se34euca.TestRunner):
testcase = "change_password"
testclass = testcase_utility
if __name__ == "__main__":
Utility().start_test()
| 19.846154 | 63 | 0.763566 | 29 | 258 | 6.344828 | 0.62069 | 0.244565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.139535 | 258 | 12 | 64 | 21.5 | 0.801802 | 0.062016 | 0 | 0 | 0 | 0 | 0.095833 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.142857 | 0.285714 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 4 |
5215ccaf94dc099e07c282788703dd7aa3e2a732 | 50 | py | Python | lhrhost/util/__init__.py | ethanjli/liquid-handling-robotics | 999ab03c225b4c5382ab9fcac6a4988d0c232c67 | [
"BSD-3-Clause"
] | null | null | null | lhrhost/util/__init__.py | ethanjli/liquid-handling-robotics | 999ab03c225b4c5382ab9fcac6a4988d0c232c67 | [
"BSD-3-Clause"
] | null | null | null | lhrhost/util/__init__.py | ethanjli/liquid-handling-robotics | 999ab03c225b4c5382ab9fcac6a4988d0c232c67 | [
"BSD-3-Clause"
] | 1 | 2018-08-03T17:17:31.000Z | 2018-08-03T17:17:31.000Z | """Various utilities to support other modules."""
| 25 | 49 | 0.74 | 6 | 50 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 50 | 1 | 50 | 50 | 0.840909 | 0.86 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
5297a5057867ca154794f240d44e7bf3d019a119 | 207 | py | Python | setup.py | ScottWales/xncview | 06ed9fb5036c0078a3f96b8eac42622e317304bd | [
"Apache-2.0"
] | null | null | null | setup.py | ScottWales/xncview | 06ed9fb5036c0078a3f96b8eac42622e317304bd | [
"Apache-2.0"
] | null | null | null | setup.py | ScottWales/xncview | 06ed9fb5036c0078a3f96b8eac42622e317304bd | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from setuptools import setup
import versioneer
# See setup.cfg for full metadata
setup(
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
)
| 20.7 | 43 | 0.700483 | 25 | 207 | 5.72 | 0.68 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.207729 | 207 | 9 | 44 | 23 | 0.871951 | 0.251208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
52980c8e77de04a5abee40ec895f727b509b3549 | 51 | py | Python | albow/containers/__init__.py | hasii2011/albow-python-3 | 04b9d42705b370b62f0e49d10274eebf3ac54bc1 | [
"MIT"
] | 6 | 2019-04-30T23:50:39.000Z | 2019-11-04T06:15:02.000Z | albow/containers/__init__.py | hasii2011/albow-python-3 | 04b9d42705b370b62f0e49d10274eebf3ac54bc1 | [
"MIT"
] | 73 | 2019-05-12T18:43:14.000Z | 2021-04-13T19:19:03.000Z | albow/containers/__init__.py | hasii2011/albow-python-3 | 04b9d42705b370b62f0e49d10274eebf3ac54bc1 | [
"MIT"
] | null | null | null | """"
This package contains the Albow containers
""" | 17 | 42 | 0.72549 | 6 | 51 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 51 | 3 | 43 | 17 | 0.840909 | 0.862745 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
8742aa3dcf12c5e80c20fde1e092a37fce1363d0 | 56 | py | Python | mbed_targets/_internal/__init__.py | madchutney/mbed-targets | dab825a7ca20473020dde28fb0c86700f6d10399 | [
"Apache-2.0"
] | null | null | null | mbed_targets/_internal/__init__.py | madchutney/mbed-targets | dab825a7ca20473020dde28fb0c86700f6d10399 | [
"Apache-2.0"
] | null | null | null | mbed_targets/_internal/__init__.py | madchutney/mbed-targets | dab825a7ca20473020dde28fb0c86700f6d10399 | [
"Apache-2.0"
] | null | null | null | """Code not to be accessed by external applications."""
| 28 | 55 | 0.732143 | 8 | 56 | 5.125 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 1 | 56 | 56 | 0.854167 | 0.875 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
8769176e011f6db3a9d4f1ce07d1fa055360ad62 | 1,233 | py | Python | src/cfgmgr32/data/props.py | Mahas1/OCSysInfo | f6179f0b6b37c6ea02e9cdbc8e5514f9c339edf7 | [
"MIT"
] | 6 | 2021-10-16T14:06:11.000Z | 2022-02-12T15:12:51.000Z | src/cfgmgr32/data/props.py | Mahas1/OCSysInfo | f6179f0b6b37c6ea02e9cdbc8e5514f9c339edf7 | [
"MIT"
] | 11 | 2021-10-17T22:44:12.000Z | 2022-02-13T09:13:40.000Z | src/cfgmgr32/data/props.py | Mahas1/OCSysInfo | f6179f0b6b37c6ea02e9cdbc8e5514f9c339edf7 | [
"MIT"
] | 9 | 2021-10-18T05:11:56.000Z | 2021-11-21T03:26:02.000Z | # Full list here: https://github.com/tpn/winsdk-10/blob/master/Include/10.0.16299.0/shared/devpkey.h
#
# Special thank you to [Flagers](https://github.com/flagersgit) for sharing this with me.
props = [
["name", 0xb725f130, 0x47ef, 0x101a, [0xa5, 0xf1, 0x02, 0x60, 0x8c, 0x9e, 0xeb, 0xac], 10],
["driver", 0xa45c254e, 0xdf1c, 0x4efd, [0x80, 0x20, 0x67, 0xd1, 0x46, 0xa8, 0x50, 0xe0], 11],
["compatible_ids", 0xa45c254e, 0xdf1c, 0x4efd, [0x80, 0x20, 0x67, 0xd1, 0x46, 0xa8, 0x50, 0xe0], 4],
["manufacturer", 0xa45c254e, 0xdf1c, 0x4efd, [0x80, 0x20, 0x67, 0xd1, 0x46, 0xa8, 0x50, 0xe0], 13],
["location_paths", 0xa45c254e, 0xdf1c, 0x4efd, [0x80, 0x20, 0x67, 0xd1, 0x46, 0xa8, 0x50, 0xe0], 37],
["model", 0x78c34fc8, 0x104a, 0x4aca, [0x9e, 0xa4, 0x52, 0x4d, 0x52, 0x99, 0x6e, 0x57], 39],
["instance_id", 0x78c34fc8, 0x104a, 0x4aca, [0x9e, 0xa4, 0x52, 0x4d, 0x52, 0x99, 0x6e, 0x57], 256],
["driver_desc", 0xa8b865dd, 0x2e3d, 0x4094, [0xad, 0x97, 0xe5, 0x93, 0xa7, 0xc, 0x75, 0xd6], 4],
["driver_inf_path", 0xa8b865dd, 0x2e3d, 0x4094, [0xad, 0x97, 0xe5, 0x93, 0xa7, 0xc, 0x75, 0xd6], 5],
["driver_provider", 0xa8b865dd, 0x2e3d, 0x4094, [0xad, 0x97, 0xe5, 0x93, 0xa7, 0xc, 0x75, 0xd6], 9]
] | 51.375 | 105 | 0.657745 | 171 | 1,233 | 4.701754 | 0.54386 | 0.079602 | 0.109453 | 0.129353 | 0.600746 | 0.600746 | 0.600746 | 0.600746 | 0.600746 | 0.600746 | 0 | 0.324952 | 0.161395 | 1,233 | 24 | 106 | 51.375 | 0.452611 | 0.150852 | 0 | 0 | 0 | 0 | 0.10249 | 0 | 0 | 0 | 0.514368 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
5e4b5ed091c233d357a6b0f580a36d1e375e5261 | 156 | py | Python | miprometheus/helpers/__init__.py | vincentalbouy/mi-prometheus | 99a0c94b0d0f3476fa021213b3246fda0db8b2db | [
"Apache-2.0"
] | null | null | null | miprometheus/helpers/__init__.py | vincentalbouy/mi-prometheus | 99a0c94b0d0f3476fa021213b3246fda0db8b2db | [
"Apache-2.0"
] | null | null | null | miprometheus/helpers/__init__.py | vincentalbouy/mi-prometheus | 99a0c94b0d0f3476fa021213b3246fda0db8b2db | [
"Apache-2.0"
] | null | null | null | # Helpers.
from .index_splitter import IndexSplitter
from .problem_initializer import ProblemInitializer
__all__ = ['IndexSplitter', 'ProblemInitializer']
| 26 | 51 | 0.826923 | 14 | 156 | 8.785714 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 156 | 5 | 52 | 31.2 | 0.87234 | 0.051282 | 0 | 0 | 0 | 0 | 0.212329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
5e5ad542f1a7516a37a727ce75541e2c84b7d673 | 114 | py | Python | ghia/__main__.py | petrnymsa/mi-pyt-ghia | 5a5c939078529a7c422eabea16ce2b4b354b3bd3 | [
"MIT"
] | null | null | null | ghia/__main__.py | petrnymsa/mi-pyt-ghia | 5a5c939078529a7c422eabea16ce2b4b354b3bd3 | [
"MIT"
] | 3 | 2019-11-01T22:11:23.000Z | 2019-12-03T14:25:14.000Z | ghia/__main__.py | petrnymsa/mi-pyt-ghia | 5a5c939078529a7c422eabea16ce2b4b354b3bd3 | [
"MIT"
] | null | null | null | import configparser
import os.path
import os
import click
import re
from .cli import run
run(prog_name='ghia')
| 10.363636 | 21 | 0.780702 | 19 | 114 | 4.631579 | 0.631579 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 114 | 10 | 22 | 11.4 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.857143 | 0 | 0.857143 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
5e6dafcda0b76029adc94b8dec58bb15b80bfe6f | 478 | py | Python | utils/imports.py | pmandera/snaut | 19f32b204e6fbaf5162f5f788d2128e769bccdb2 | [
"Apache-2.0"
] | 2 | 2016-04-27T14:00:23.000Z | 2019-06-24T16:08:43.000Z | utils/imports.py | pmandera/snaut | 19f32b204e6fbaf5162f5f788d2128e769bccdb2 | [
"Apache-2.0"
] | null | null | null | utils/imports.py | pmandera/snaut | 19f32b204e6fbaf5162f5f788d2128e769bccdb2 | [
"Apache-2.0"
] | 1 | 2019-06-25T20:15:02.000Z | 2019-06-25T20:15:02.000Z | import sys
print((sys.version))
import csv
print((csv.__name__, csv.__version__))
import markdown
print((markdown.__name__, markdown.__version__))
import json
print((json.__name__, json.__version__))
# import cStringIO
# print cStringIO.__name__, cStringIO.__version__
# import ConfigParser
# print ConfigParser.__name__, ConfigParser.__version__
import flask
print((flask.__name__, flask.__version__))
import semspaces
print((semspaces.__name__, semspaces.__version__))
| 19.916667 | 55 | 0.803347 | 54 | 478 | 6.074074 | 0.222222 | 0.277439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09205 | 478 | 23 | 56 | 20.782609 | 0.75576 | 0.288703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 4 |
5e765c9e32b9830d54a9b900353688be492302e3 | 203 | py | Python | game/utils.py | smrsan/django-backgammon-server | 02eee8fea2c4aa0e40b333a35b0bb09d7b444230 | [
"MIT"
] | null | null | null | game/utils.py | smrsan/django-backgammon-server | 02eee8fea2c4aa0e40b333a35b0bb09d7b444230 | [
"MIT"
] | 6 | 2021-03-18T22:43:08.000Z | 2021-09-22T18:31:02.000Z | game/utils.py | smrsan/django-backgammon-server | 02eee8fea2c4aa0e40b333a35b0bb09d7b444230 | [
"MIT"
] | null | null | null | from random import choice
from string import ascii_letters, digits
from django.db.models import Q
def get_rand_str(length=12):
return ''.join(choice(ascii_letters + digits) for _ in range(length))
| 25.375 | 73 | 0.773399 | 32 | 203 | 4.75 | 0.71875 | 0.157895 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 0.142857 | 203 | 7 | 74 | 29 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.6 | 0.2 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 4 |
5e995e6f2253a201b5340a6a8d644982ed634cda | 22 | py | Python | my_classes/.history/ModulesPackages_PackageNamespaces/ImportingModules_20210725180654.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | my_classes/.history/ModulesPackages_PackageNamespaces/ImportingModules_20210725180654.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | my_classes/.history/ModulesPackages_PackageNamespaces/ImportingModules_20210725180654.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | """Importing module""" | 22 | 22 | 0.681818 | 2 | 22 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 22 | 1 | 22 | 22 | 0.714286 | 0.727273 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
5e9d4bd95c7b4b9bac57d0a54b8248d821ac8d75 | 468 | py | Python | bcdata/__init__.py | NewGraphEnvironment/bcdata | 5f3df4264e6e4409564d14923aed1ce314fe76dc | [
"MIT"
] | null | null | null | bcdata/__init__.py | NewGraphEnvironment/bcdata | 5f3df4264e6e4409564d14923aed1ce314fe76dc | [
"MIT"
] | 3 | 2021-03-04T17:03:40.000Z | 2021-03-25T19:27:42.000Z | bcdata/__init__.py | NewGraphEnvironment/bcdata | 5f3df4264e6e4409564d14923aed1ce314fe76dc | [
"MIT"
] | null | null | null | from .wfs import get_table_name
from .wfs import get_data
from .wfs import get_features
from .wfs import get_count
from .wfs import list_tables
from .wfs import validate_name
from .wfs import define_request
from .wcs import get_dem
__version__ = "0.4.4dev0"
BCDC_API_URL = "https://catalogue.data.gov.bc.ca/api/3/action/"
WFS_URL = "https://openmaps.gov.bc.ca/geo/pub/wfs"
OWS_URL = "http://openmaps.gov.bc.ca/geo/ows"
WCS_URL = "https://openmaps.gov.bc.ca/om/wcs"
| 27.529412 | 63 | 0.762821 | 85 | 468 | 3.988235 | 0.423529 | 0.144543 | 0.268437 | 0.188791 | 0.19764 | 0.135693 | 0 | 0 | 0 | 0 | 0 | 0.011962 | 0.106838 | 468 | 16 | 64 | 29.25 | 0.799043 | 0 | 0 | 0 | 0 | 0 | 0.339744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.615385 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
5ea1637d5abe6e9964534a3c606e1319b09378de | 111 | py | Python | package.py | TiaVerwega/TestKDE2 | 25758dddf6222029f2fd79bdb529918d40bacb0d | [
"MIT"
] | null | null | null | package.py | TiaVerwega/TestKDE2 | 25758dddf6222029f2fd79bdb529918d40bacb0d | [
"MIT"
] | null | null | null | package.py | TiaVerwega/TestKDE2 | 25758dddf6222029f2fd79bdb529918d40bacb0d | [
"MIT"
] | null | null | null | from scipy import stats
def function(data):
result = stats.gaussian_kde(data)
return result
| 13.875 | 37 | 0.666667 | 14 | 111 | 5.214286 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27027 | 111 | 7 | 38 | 15.857143 | 0.901235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
5eaca8c2e31ce6073e08f893e0562f980e374af7 | 164 | py | Python | problem0446.py | kmarcini/Project-Euler-Python | d644e8e1ec4fac70a9ab407ad5e1f0a75547c8d3 | [
"BSD-3-Clause"
] | null | null | null | problem0446.py | kmarcini/Project-Euler-Python | d644e8e1ec4fac70a9ab407ad5e1f0a75547c8d3 | [
"BSD-3-Clause"
] | null | null | null | problem0446.py | kmarcini/Project-Euler-Python | d644e8e1ec4fac70a9ab407ad5e1f0a75547c8d3 | [
"BSD-3-Clause"
] | null | null | null | ###########################
#
# #446 Retractions B - Project Euler
# https://projecteuler.net/problem=446
#
# Code by Kevin Marciniak
#
###########################
| 18.222222 | 38 | 0.469512 | 14 | 164 | 5.5 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041379 | 0.115854 | 164 | 8 | 39 | 20.5 | 0.489655 | 0.573171 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
5eafec53efed67b9537127df538ca10b1c0dcfb2 | 69 | py | Python | URI1930.py | rashidulhasanhridoy/URI-Online-Judge-Problem-Solve-with-Python-3 | c7db434e2e6e40c2ca3bd56db0d04cf79f69de12 | [
"Apache-2.0"
] | 2 | 2020-07-21T18:01:37.000Z | 2021-11-29T01:08:14.000Z | URI1930.py | rashidulhasanhridoy/URI-Online-Judge-Problem-Solve-with-Python-3 | c7db434e2e6e40c2ca3bd56db0d04cf79f69de12 | [
"Apache-2.0"
] | null | null | null | URI1930.py | rashidulhasanhridoy/URI-Online-Judge-Problem-Solve-with-Python-3 | c7db434e2e6e40c2ca3bd56db0d04cf79f69de12 | [
"Apache-2.0"
] | null | null | null | A, B, C, D = map(int, input().split())
X = A + B + C + D - 3
print(X) | 23 | 38 | 0.463768 | 16 | 69 | 2 | 0.6875 | 0.125 | 0.1875 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.26087 | 69 | 3 | 39 | 23 | 0.607843 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
5eb0110b97290031eb0adf3f3668cb169852e5aa | 124,036 | py | Python | abcpy/inferences.py | shoshijak/abcpy | ad12808782fa72c0428122fc659fd3ff22d3e854 | [
"BSD-3-Clause-Clear"
] | null | null | null | abcpy/inferences.py | shoshijak/abcpy | ad12808782fa72c0428122fc659fd3ff22d3e854 | [
"BSD-3-Clause-Clear"
] | null | null | null | abcpy/inferences.py | shoshijak/abcpy | ad12808782fa72c0428122fc659fd3ff22d3e854 | [
"BSD-3-Clause-Clear"
] | null | null | null | from abc import ABCMeta, abstractmethod, abstractproperty
from abcpy.graphtools import GraphTools
from abcpy.probabilisticmodels import *
from abcpy.acceptedparametersmanager import *
from abcpy.perturbationkernel import DefaultKernel
from abcpy.jointdistances import LinearCombination
from abcpy.jointapprox_lhd import ProductCombination
import copy
import numpy as np
from abcpy.output import Journal
from scipy import optimize
class InferenceMethod(GraphTools, metaclass = ABCMeta):
"""
This abstract base class represents an inference method.
"""
def __getstate__(self):
"""Cloudpickle is used with the MPIBackend. This function ensures that the backend itself
is not pickled
"""
state = self.__dict__.copy()
del state['backend']
return state
@abstractmethod
def sample(self):
"""To be overwritten by any sub-class:
Samples from the posterior distribution of the model parameter given the observed
data observations.
"""
raise NotImplementedError
@abstractproperty
def model(self):
"""To be overwritten by any sub-class: an attribute specifying the model to be used
"""
raise NotImplementedError
@abstractproperty
def rng(self):
"""To be overwritten by any sub-class: an attribute specifying the random number generator to be used
"""
raise NotImplementedError
@abstractproperty
def backend(self):
"""To be overwritten by any sub-class: an attribute specifying the backend to be used."""
raise NotImplementedError
@abstractproperty
def n_samples(self):
"""To be overwritten by any sub-class: an attribute specifying the number of samples to be generated
"""
raise NotImplementedError
@abstractproperty
def n_samples_per_param(self):
"""To be overwritten by any sub-class: an attribute specifying the number of data points in each simulated data set."""
raise NotImplementedError
class BaseMethodsWithKernel(metaclass = ABCMeta):
"""
This abstract base class represents inference methods that have a kernel.
"""
@abstractproperty
def kernel(self):
"""To be overwritten by any sub-class: an attribute specifying the transition or perturbation kernel."""
raise NotImplementedError
def perturb(self, column_index, epochs = 10, rng=np.random.RandomState()):
"""
Perturbs all free parameters, given the current weights.
Commonly used during inference.
Parameters
----------
column_index: integer
The index of the column in the accepted_parameters_bds that should be used for perturbation
epochs: integer
The number of times perturbation should happen before the algorithm is terminated
Returns
-------
boolean
Whether it was possible to set new parameter values for all probabilistic models
"""
current_epoch = 0
while current_epoch < epochs:
# Get new parameters of the graph
new_parameters = self.kernel.update(self.accepted_parameters_manager, column_index, rng=rng)
self._reset_flags()
# Order the parameters provided by the kernel in depth-first search order
correctly_ordered_parameters = self.get_correct_ordering(new_parameters)
# Try to set new parameters
accepted, last_index = self.set_parameters(correctly_ordered_parameters, 0)
if accepted:
break
current_epoch+=1
        if current_epoch == epochs:
return [False]
return [True, correctly_ordered_parameters]
class BaseLikelihood(InferenceMethod, BaseMethodsWithKernel, metaclass = ABCMeta):
"""
This abstract base class represents inference methods that use the likelihood.
"""
@abstractproperty
def likfun(self):
"""To be overwritten by any sub-class: an attribute specifying the likelihood function to be used."""
raise NotImplementedError
class BaseDiscrepancy(InferenceMethod, BaseMethodsWithKernel, metaclass = ABCMeta):
"""
    This abstract base class represents inference methods using discrepancy.
"""
@abstractproperty
def distance(self):
"""To be overwritten by any sub-class: an attribute specifying the distance function."""
raise NotImplementedError
class RejectionABC(InferenceMethod):
"""This base class implements the rejection algorithm based inference scheme [1] for
Approximate Bayesian Computation.
[1] Tavaré, S., Balding, D., Griffith, R., Donnelly, P.: Inferring coalescence
times from DNA sequence data. Genetics 145(2), 505–518 (1997).
Parameters
----------
model: list
A list of the Probabilistic models corresponding to the observed datasets
distance: abcpy.distances.Distance
Distance object defining the distance measure to compare simulated and observed data sets.
backend: abcpy.backends.Backend
Backend object defining the backend to be used.
seed: integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
# TODO: defining attributes as class attributes is not correct, move to init
model = None
distance = None
rng = None
n_samples = None
n_samples_per_param = None
epsilon = None
backend = None
def __init__(self, root_models, distances, backend, seed=None):
self.model = root_models
# We define the joint Linear combination distance using all the distances for each individual models
self.distance = LinearCombination(root_models, distances)
self.backend = backend
self.rng = np.random.RandomState(seed)
# An object managing the bds objects
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
# counts the number of simulate calls
self.simulation_counter = 0
def sample(self, observations, n_samples, n_samples_per_param, epsilon, full_output=0):
"""
Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations: list
A list, containing lists describing the observed data sets
n_samples: integer
Number of samples to generate
n_samples_per_param: integer
Number of data points in each simulated data set.
epsilon: float
Value of threshold
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
Returns
-------
abcpy.output.Journal
a journal containing simulation results, metadata and optionally intermediate results.
"""
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.n_samples = n_samples
self.n_samples_per_param = n_samples_per_param
self.epsilon = epsilon
journal = Journal(full_output)
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["epsilon"] = self.epsilon
accepted_parameters = None
# main Rejection ABC algorithm
seed_arr = self.rng.randint(1, n_samples * n_samples, size=n_samples, dtype=np.int32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
rng_pds = self.backend.parallelize(rng_arr)
accepted_parameters_and_counter_pds = self.backend.map(self._sample_parameter, rng_pds)
accepted_parameters_and_counter = self.backend.collect(accepted_parameters_and_counter_pds)
accepted_parameters, counter = [list(t) for t in zip(*accepted_parameters_and_counter)]
for count in counter:
self.simulation_counter+=count
accepted_parameters = np.array(accepted_parameters)
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
journal.add_parameters(accepted_parameters)
journal.add_weights(np.ones((n_samples, 1)))
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
return journal
def _sample_parameter(self, rng):
"""
Samples a single model parameter and simulates from it until
distance between simulated outcome and the observation is
smaller than epsilon.
Parameters
----------
rng: random number generator
The random number generator to be used.
Returns
-------
tuple
The accepted parameter and the number of simulate calls used.
"""
distance = self.distance.dist_max()
counter = 0
while distance > self.epsilon:
# Accept new parameter value if the distance is less than epsilon
self.sample_from_prior(rng=rng)
theta = np.array(self.get_parameters(self.model)).reshape(-1,)
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
if(y_sim is not None):
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
else:
distance = self.distance.dist_max()
return (theta, counter)
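The accept/reject loop in `_sample_parameter` can be illustrated with a minimal, self-contained sketch: a toy Gaussian-mean model stands in for the probabilistic model, and the absolute difference of sample means stands in for the distance object (all names below are hypothetical, not abcpy's API):

```python
import numpy as np

def rejection_abc(observation, n_samples, epsilon, rng):
    """Draw parameters from a Uniform(-5, 5) prior and keep those whose
    simulated sample mean lies within epsilon of the observed sample mean."""
    accepted, counter = [], 0
    while len(accepted) < n_samples:
        theta = rng.uniform(-5, 5)               # stands in for sample_from_prior
        y_sim = rng.normal(theta, 1.0, size=10)  # stands in for simulate
        counter += 1
        distance = abs(y_sim.mean() - observation.mean())
        if distance <= epsilon:                  # accept when close enough
            accepted.append(theta)
    return np.array(accepted), counter

rng = np.random.RandomState(42)
obs = rng.normal(1.0, 1.0, size=10)
samples, n_sim = rejection_abc(obs, n_samples=50, epsilon=0.5, rng=rng)
```

The parallel version above runs one such accept/reject loop per seeded RNG and collects the per-task simulation counters afterwards.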
class PMCABC(BaseDiscrepancy, InferenceMethod):
"""
This base class implements a modified version of the Population Monte Carlo based inference scheme for Approximate
Bayesian computation of Beaumont [1]. Here the threshold value at the `t`-th generation is adaptively chosen as
the maximum between the epsilon_percentile-th percentile of the discrepancies of the parameters accepted at the
`(t-1)`-th generation and the threshold value provided for this generation by the user. If the value of
epsilon_percentile is taken to be zero (the default), this method becomes the inference scheme described in [1],
where the threshold values used at each generation are the ones provided by the user.
[1] M. A. Beaumont. Approximate Bayesian computation in evolution and ecology. Annual Review of Ecology,
Evolution, and Systematics, 41(1):379–406, Nov. 2010.
Parameters
----------
model : list
A list of the Probabilistic models corresponding to the observed datasets
distance : abcpy.distances.Distance
Distance object defining the distance measure to compare simulated and observed data sets.
kernel : abcpy.distributions.Distribution
Distribution object defining the perturbation kernel needed for the sampling.
backend : abcpy.backends.Backend
Backend object defining the backend to be used.
seed : integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
model = None
distance = None
kernel = None
rng = None
#default value, set so that testing works
n_samples = 2
n_samples_per_param = None
backend = None
def __init__(self, root_models, distances, backend, kernel=None,seed=None):
self.model = root_models
# We define the joint Linear combination distance using all the distances for each individual models
self.distance = LinearCombination(root_models, distances)
if(kernel is None):
mapping, garbage_index = self._get_mapping()
models = []
for mdl, mdl_index in mapping:
models.append(mdl)
kernel = DefaultKernel(models)
self.kernel = kernel
self.backend = backend
self.rng = np.random.RandomState(seed)
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
self.simulation_counter=0
def sample(self, observations, steps, epsilon_init, n_samples = 10000, n_samples_per_param = 1, epsilon_percentile = 0, covFactor = 2, full_output=0, journal_file = None):
"""Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations : list
A list, containing lists describing the observed data sets
steps : integer
Number of iterations in the sequential algorithm ("generations")
epsilon_init : numpy.ndarray
An array of proposed values of epsilon to be used at each step. Can be supplied either as a single value,
used as the threshold in step 1, or as a `steps`-dimensional array of values to be used as the threshold
at every step.
n_samples : integer, optional
Number of samples to generate. The default value is 10000.
n_samples_per_param : integer, optional
Number of data points in each simulated data set. The default value is 1.
epsilon_percentile : float, optional
A value between [0, 100]. The default value is 0, meaning the threshold value provided by the user is used.
covFactor : float, optional
scaling parameter of the covariance matrix. The default value is 2 as considered in [1].
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
journal_file: str, optional
Filename of a journal file containing an already saved journal, from which the inference continues.
The default value is None.
Returns
-------
abcpy.output.Journal
A journal containing simulation results, metadata and optionally intermediate results.
"""
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.n_samples = n_samples
self.n_samples_per_param=n_samples_per_param
if(journal_file is None):
journal = Journal(full_output)
journal.configuration["type_model"] = [type(model).__name__ for model in self.model]
journal.configuration["type_dist_func"] = type(self.distance).__name__
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["steps"] = steps
journal.configuration["epsilon_percentile"] = epsilon_percentile
else:
journal = Journal.fromFile(journal_file)
accepted_parameters = None
accepted_weights = None
accepted_cov_mats = None
# Define epsilon_arr
if len(epsilon_init) == steps:
epsilon_arr = epsilon_init
else:
if len(epsilon_init) == 1:
epsilon_arr = [None] * steps
epsilon_arr[0] = epsilon_init
else:
raise ValueError("The length of epsilon_init can only be equal to 1 or steps.")
# main PMCABC algorithm
# print("INFO: Starting PMCABC iterations.")
for aStep in range(0, steps):
if(aStep==0 and journal_file is not None):
accepted_parameters = journal.parameters[-1]
accepted_weights = journal.weights[-1]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=accepted_weights)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
# 3: calculate covariance
# print("INFO: Calculating covariance matrix.")
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
# Since each entry of new_cov_mats is a numpy array, we can multiply like this
accepted_cov_mats = [covFactor * new_cov_mat for new_cov_mat in new_cov_mats]
# print("DEBUG: Iteration " + str(aStep) + " of PMCABC algorithm.")
seed_arr = self.rng.randint(0, np.iinfo(np.uint32).max, size=n_samples, dtype=np.uint32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
rng_pds = self.backend.parallelize(rng_arr)
# 0: update remotely required variables
# print("INFO: Broadcasting parameters.")
self.epsilon = epsilon_arr[aStep]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters, accepted_weights, accepted_cov_mats)
# 1: calculate resample parameters
# print("INFO: Resampling parameters")
params_and_dists_and_ysim_and_counter_pds = self.backend.map(self._resample_parameter, rng_pds)
params_and_dists_and_ysim_and_counter = self.backend.collect(params_and_dists_and_ysim_and_counter_pds)
new_parameters, distances, counter = [list(t) for t in zip(*params_and_dists_and_ysim_and_counter)]
new_parameters = np.array(new_parameters)
#print(new_parameters)
for count in counter:
self.simulation_counter+=count
# Compute epsilon for next step
# print("INFO: Calculating acceptance threshold (epsilon).")
if aStep < steps - 1:
if epsilon_arr[aStep + 1] is None:
epsilon_arr[aStep + 1] = np.percentile(distances, epsilon_percentile)
else:
epsilon_arr[aStep + 1] = np.max(
[np.percentile(distances, epsilon_percentile), epsilon_arr[aStep + 1]])
# 2: calculate weights for new parameters
# print("INFO: Calculating weights.")
new_parameters_pds = self.backend.parallelize(new_parameters)
new_weights_pds = self.backend.map(self._calculate_weight, new_parameters_pds)
new_weights = np.array(self.backend.collect(new_weights_pds)).reshape(-1, 1)
sum_of_weights = 0.0
for w in new_weights:
sum_of_weights += w
new_weights = new_weights / sum_of_weights
# The calculation of cov_mats needs the new weights and new parameters
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters = new_parameters, accepted_weights=new_weights)
# The parameters relevant to each kernel have to be used to calculate n_sample times. It is therefore more efficient to broadcast these parameters once, instead of collecting them at each kernel in each step
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
# 3: calculate covariance
# print("INFO: Calculating covariance matrix.")
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
# Since each entry of new_cov_mats is a numpy array, we can multiply like this
new_cov_mats = [covFactor*new_cov_mat for new_cov_mat in new_cov_mats]
# 4: Update the newly computed values
accepted_parameters = new_parameters
accepted_weights = new_weights
accepted_cov_mats = new_cov_mats
# print("INFO: Saving configuration to output journal.")
if (full_output == 1 and aStep <= steps - 1) or (full_output == 0 and aStep == steps - 1):
journal.add_parameters(accepted_parameters)
journal.add_weights(accepted_weights)
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
# Add epsilon_arr to the journal
journal.configuration["epsilon_arr"] = epsilon_arr
return journal
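The adaptive threshold computation above ("Compute epsilon for next step") amounts to the following rule, sketched here stand-alone (the function name is illustrative): the next epsilon is the epsilon_percentile-th percentile of the current distances, replaced by the user-supplied value for that generation whenever the latter is larger.

```python
import numpy as np

def next_epsilon(distances, epsilon_percentile, user_epsilon=None):
    """Adaptive PMCABC threshold: the epsilon_percentile-th percentile of the
    current distances, or the user-supplied value when that is larger."""
    adaptive = np.percentile(distances, epsilon_percentile)
    if user_epsilon is None:
        return adaptive
    return max(adaptive, user_epsilon)

distances = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
eps_adaptive = next_epsilon(distances, 50)        # median of the distances
eps_floored = next_epsilon(distances, 50, 0.45)   # user value dominates here
```

With epsilon_percentile left at 0, the percentile is the minimum distance, so the user-supplied schedule effectively determines the thresholds, as noted in the class docstring.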
# define helper functions for map step
def _resample_parameter(self, rng):
"""
Samples a single model parameter and simulates from it until the
distance between the simulated outcome and the observation is
smaller than epsilon.
Parameters
----------
rng: random number generator
The random number generator to be used.
Returns
-------
tuple
The accepted parameter, its distance, and the number of simulate calls used.
"""
rng.seed(rng.randint(np.iinfo(np.uint32).max, dtype=np.uint32))
distance = self.distance.dist_max()
counter=0
while distance > self.epsilon:
#print( " distance: " + str(distance) + " epsilon: " + str(self.epsilon))
if self.accepted_parameters_manager.accepted_parameters_bds is None:
self.sample_from_prior(rng=rng)
theta = self.get_parameters()
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
else:
index = rng.choice(self.n_samples, size=1, p=self.accepted_parameters_manager.accepted_weights_bds.value().reshape(-1))
# truncate the normal to the bounds of parameter space of the model
# truncating the normal like this is fine: https://arxiv.org/pdf/0907.4010v1.pdf
while True:
perturbation_output = self.perturb(index[0], rng=rng)
if(perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1])!=0):
theta = perturbation_output[1]
break
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
if(y_sim is not None):
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(),y_sim)
else:
distance = self.distance.dist_max()
return (theta, distance, counter)
def _calculate_weight(self, theta):
"""
Calculates the weight for the given parameter using
accepted_parameters, accepted_cov_mat
Parameters
----------
theta: np.array
1xp matrix containing model parameter, where p is the number of parameters
Returns
-------
float
the new weight for theta
"""
if self.accepted_parameters_manager.kernel_parameters_bds is None:
return 1.0 / self.n_samples
else:
prior_prob = self.pdf_of_prior(self.model, theta, 0)
denominator = 0.0
# Get the mapping of the models to be used by the kernels
mapping_for_kernels, garbage_index = self.accepted_parameters_manager.get_mapping(self.accepted_parameters_manager.model)
for i in range(0, self.n_samples):
pdf_value = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, i, theta)
denominator += self.accepted_parameters_manager.accepted_weights_bds.value()[i, 0] * pdf_value
return 1.0 * prior_prob / denominator
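`_calculate_weight` computes the standard PMC importance weight, w(theta) proportional to p(theta) / sum_i w_i K(theta | theta_i). A self-contained one-dimensional sketch with a Gaussian perturbation kernel (the prior and kernel here are hypothetical stand-ins for the abcpy objects):

```python
import numpy as np

def gaussian_pdf(x, mean, sigma):
    """Density of N(mean, sigma^2) at x; broadcasts over an array of means."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def pmc_weight(theta, prev_thetas, prev_weights, sigma, prior_pdf):
    """Importance weight: prior(theta) divided by the mixture density of the
    previous particles perturbed by a Gaussian kernel of scale sigma."""
    denominator = np.sum(prev_weights * gaussian_pdf(theta, prev_thetas, sigma))
    return prior_pdf(theta) / denominator

prev_thetas = np.array([0.0, 1.0, 2.0])
prev_weights = np.array([0.2, 0.5, 0.3])               # normalized weights from step t-1
uniform_prior = lambda t: 1.0 / 10.0 if -5 <= t <= 5 else 0.0
w = pmc_weight(1.5, prev_thetas, prev_weights, sigma=1.0, prior_pdf=uniform_prior)
```

In the class above, the kernel density and the accepted weights come from the broadcast `accepted_parameters_manager`, so each task can evaluate the denominator without collecting the particle table again.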
class PMC(BaseLikelihood, InferenceMethod):
"""
Population Monte Carlo based inference scheme of Cappé et al. [1].
This algorithm assumes a likelihood function is available and can be evaluated
at any parameter value given the observed dataset. In the absence of the
likelihood function, or when it cannot be evaluated at a reasonable
computational expense, we use the approximate likelihood functions in the
abcpy.approx_lhd module; the consistency argument for the resulting inference
scheme is based on Andrieu and Roberts [2].
[1] Cappé, O., Guillin, A., Marin, J.-M., and Robert, C. P. (2004). Population Monte Carlo.
Journal of Computational and Graphical Statistics, 13(4), 907–929.
[2] C. Andrieu and G. O. Roberts. The pseudo-marginal approach for efficient Monte Carlo computations.
Annals of Statistics, 37(2):697–725, 04 2009.
Parameters
----------
model : list
A list of the Probabilistic models corresponding to the observed datasets
likfun : abcpy.approx_lhd.Approx_likelihood
Approx_likelihood object defining the approximated likelihood to be used.
kernel : abcpy.distributions.Distribution
Distribution object defining the perturbation kernel needed for the sampling.
backend : abcpy.backends.Backend
Backend object defining the backend to be used.
seed : integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
model = None
likfun = None
kernel = None
rng = None
n_samples = None
n_samples_per_param = None
backend = None
def __init__(self, root_models, likfuns, backend, kernel=None, seed=None):
self.model = root_models
# We define the joint Product of likelihood functions using all the likelihoods for each individual models
self.likfun = ProductCombination(root_models, likfuns)
if(kernel is None):
mapping, garbage_index = self._get_mapping()
models = []
for mdl, mdl_index in mapping:
models.append(mdl)
kernel = DefaultKernel(models)
self.kernel = kernel
self.backend = backend
self.rng = np.random.RandomState(seed)
# these are usually big tables, so we broadcast them to have them once
# per executor instead of once per task
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
self.simulation_counter = 0
def sample(self, observations, steps, n_samples = 10000, n_samples_per_param = 100, covFactors = None, iniPoints = None, full_output=0, journal_file = None):
"""Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations : list
A list, containing lists describing the observed data sets
steps : integer
number of iterations in the sequential algorithm ("generations")
n_samples : integer, optional
number of samples to generate. The default value is 10000.
n_samples_per_param : integer, optional
number of data points in each simulated data set. The default value is 100.
covFactors : list of float, optional
scaling parameters of the covariance matrices, one per kernel. The default is an array of ones.
iniPoints : numpy.ndarray, optional
parameter values from which the sampling starts. By default sampled from the prior.
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
journal_file: str, optional
Filename of a journal file containing an already saved journal, from which the inference continues.
The default value is None.
Returns
-------
abcpy.output.Journal
A journal containing simulation results, metadata and optionally intermediate results.
"""
self.sample_from_prior(rng=self.rng)
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.n_samples = n_samples
self.n_samples_per_param = n_samples_per_param
if(journal_file is None):
journal = Journal(full_output)
journal.configuration["type_model"] = [type(model).__name__ for model in self.model]
journal.configuration["type_lhd_func"] = type(self.likfun).__name__
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["steps"] = steps
journal.configuration["covFactor"] = covFactors
journal.configuration["iniPoints"] = iniPoints
else:
journal = Journal.fromFile(journal_file)
accepted_parameters = None
accepted_weights = None
accepted_cov_mats = None
new_theta = None
dim = len(self.get_parameters())
# Initialize particles: When not supplied, randomly draw them from prior distribution
# Weights of particles: Assign equal weights for each of the particles
if iniPoints is None:
accepted_parameters = np.zeros(shape=(n_samples, dim))
for ind in range(0, n_samples):
self.sample_from_prior(rng=self.rng)
accepted_parameters[ind, :] = self.get_parameters()
accepted_weights = np.ones((n_samples, 1), dtype=float) / n_samples
else:
accepted_parameters = iniPoints
accepted_weights = np.ones((iniPoints.shape[0], 1), dtype=float) / iniPoints.shape[0]
if covFactors is None:
covFactors = np.ones(shape=(len(self.kernel.kernels),))
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=accepted_weights)
# The parameters relevant to each kernel have to be used to calculate n_sample times. It is therefore more efficient to broadcast these parameters once, instead of collecting them at each kernel in each step
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
# 3: calculate covariance
# print("INFO: Calculating covariance matrix.")
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
# Since each entry of new_cov_mats is a numpy array, we can multiply like this
accepted_cov_mats = [covFactor * new_cov_mat for covFactor, new_cov_mat in zip(covFactors,new_cov_mats)]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
# main SMC algorithm
# print("INFO: Starting PMC iterations.")
for aStep in range(0, steps):
if(aStep==0 and journal_file is not None):
accepted_parameters = journal.parameters[-1]
accepted_weights = journal.weights[-1]
approx_likelihood_new_parameters = journal.opt_values[-1]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=accepted_weights)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
# 3: calculate covariance
# print("INFO: Calculating covariance matrix.")
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
# Since each entry of new_cov_mats is a numpy array, we can multiply like this
accepted_cov_mats = [covFactor * new_cov_mat for covFactor, new_cov_mat in zip(covFactors, new_cov_mats)]
# print("DEBUG: Iteration " + str(aStep) + " of PMC algorithm.")
# 0: update remotely required variables
# print("INFO: Broadcasting parameters.")
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=accepted_weights, accepted_cov_mats=accepted_cov_mats)
# 1: calculate resample parameters
# print("INFO: Resample parameters.")
index = self.rng.choice(accepted_parameters.shape[0], size=n_samples, p=accepted_weights.reshape(-1))
# Choose a new particle using the resampled particle (make the boundary proper)
# Initialize new_parameters
new_parameters = np.zeros((n_samples, dim), dtype=float)
for ind in range(0, self.n_samples):
while True:
perturbation_output = self.perturb(index[ind], rng=self.rng)
if perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1])!= 0:
new_parameters[ind, :] = perturbation_output[1]
break
# 2: calculate approximate likelihood for new parameters
# print("INFO: Calculate approximate likelihood.")
new_parameters_pds = self.backend.parallelize(new_parameters)
approx_likelihood_new_parameters_and_counter_pds = self.backend.map(self._approx_lik_calc, new_parameters_pds)
# print("DEBUG: Collect approximate likelihood from pds.")
approx_likelihood_new_parameters_and_counter = self.backend.collect(approx_likelihood_new_parameters_and_counter_pds)
approx_likelihood_new_parameters, counter = [list(t) for t in zip(*approx_likelihood_new_parameters_and_counter)]
approx_likelihood_new_parameters = np.array(approx_likelihood_new_parameters).reshape(-1,1)
for count in counter:
self.simulation_counter+=count
# 3: calculate new weights for new parameters
# print("INFO: Calculating weights.")
new_weights_pds = self.backend.map(self._calculate_weight, new_parameters_pds)
new_weights = np.array(self.backend.collect(new_weights_pds)).reshape(-1, 1)
sum_of_weights = 0.0
for i in range(0, self.n_samples):
new_weights[i] = new_weights[i] * approx_likelihood_new_parameters[i]
sum_of_weights += new_weights[i]
new_weights = new_weights / sum_of_weights
accepted_parameters = new_parameters
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=new_weights)
# 4: calculate covariance
# print("INFO: Calculating covariance matrix.")
# The parameters relevant to each kernel have to be used to calculate n_sample times. It is therefore more efficient to broadcast these parameters once, instead of collecting them at each kernel in each step
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
# 3: calculate covariance
# print("INFO: Calculating covariance matrix.")
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
# Since each entry of new_cov_mats is a numpy array, we can multiply like this
new_cov_mats = [covFactor * new_cov_mat for covFactor, new_cov_mat in zip(covFactors, new_cov_mats)]
# 5: Update the newly computed values
accepted_parameters = new_parameters
accepted_weights = new_weights
accepted_cov_mats = new_cov_mats
# print("INFO: Saving configuration to output journal.")
if (full_output == 1 and aStep <= steps - 1) or (full_output == 0 and aStep == steps - 1):
journal.add_parameters(accepted_parameters)
journal.add_weights(accepted_weights)
journal.add_opt_values(approx_likelihood_new_parameters)
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
return journal
# define helper functions for map step
def _approx_lik_calc(self, theta):
"""
Compute likelihood for new parameters using approximate likelihood function
Parameters
----------
theta: numpy.ndarray
1xp matrix containing the model parameters, where p is the number of parameters
Returns
-------
float
The approximated likelihood function
"""
# Simulate the fake data from the model given the parameter value theta
# print("DEBUG: Simulate model for parameter " + str(theta))
y_sim = self.simulate(self.n_samples_per_param, self.rng)
# print("DEBUG: Extracting observation.")
obs = self.accepted_parameters_manager.observations_bds.value()
# print("DEBUG: Computing likelihood...")
total_pdf_at_theta = 1.
lhd = self.likfun.likelihood(obs, y_sim)
# print("DEBUG: Likelihood is :" + str(lhd))
pdf_at_theta = self.pdf_of_prior(self.model, theta)
total_pdf_at_theta*=(pdf_at_theta*lhd)
# print("DEBUG: prior pdf evaluated at theta is :" + str(pdf_at_theta))
return (total_pdf_at_theta, 1)
def _calculate_weight(self, theta):
"""
Calculates the weight for the given parameter using
accepted_parameters, accepted_cov_mat
Parameters
----------
theta: np.ndarray
1xp matrix containing the model parameters, where p is the number of parameters
Returns
-------
float
The new weight for theta
"""
if self.accepted_parameters_manager.accepted_weights_bds is None:
return 1.0 / self.n_samples
else:
prior_prob = self.pdf_of_prior(self.model, theta)
denominator = 0.0
mapping_for_kernels, garbage_index = self.accepted_parameters_manager.get_mapping(
self.accepted_parameters_manager.model)
for i in range(0, self.n_samples):
pdf_value = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, i, theta)
denominator+=self.accepted_parameters_manager.accepted_weights_bds.value()[i,0]*pdf_value
return 1.0 * prior_prob / denominator
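Taken together, `_approx_lik_calc` and `_calculate_weight` feed the reweighting in steps 2 and 3 of `sample` above: each new particle's kernel importance weight is multiplied by its approximate likelihood, and the result is normalized to sum to one. A stand-alone sketch of that step (the function name is illustrative):

```python
import numpy as np

def reweight(kernel_weights, approx_likelihoods):
    """PMC reweighting: multiply each particle's importance weight by its
    approximate likelihood, then normalize the weights to sum to one."""
    w = np.asarray(kernel_weights, dtype=float) * np.asarray(approx_likelihoods, dtype=float)
    return w / w.sum()

# four equally weighted particles, unequal approximate likelihoods
w = reweight([0.25, 0.25, 0.25, 0.25], [2.0, 1.0, 1.0, 4.0])
# normalized to [0.25, 0.125, 0.125, 0.5]
```

Normalizing after the multiplication is what lets `_approx_lik_calc` return an unnormalized product of prior and likelihood: any constant factor cancels in the division by the sum of weights.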
class SABC(BaseDiscrepancy, InferenceMethod):
"""
This base class implements a modified version of Simulated Annealing Approximate Bayesian Computation (SABC) of [1] when the prior is non-informative.
[1] C. Albert, H. R. Kuensch and A. Scheidegger. A Simulated Annealing Approach to
Approximate Bayes Computations. Statistics and Computing, (2014).
Parameters
----------
model : list
A list of the Probabilistic models corresponding to the observed datasets
distance : abcpy.distances.Distance
Distance object defining the distance measure used to compare simulated and observed data sets.
kernel : abcpy.distributions.Distribution
Distribution object defining the perturbation kernel needed for the sampling.
backend : abcpy.backends.Backend
Backend object defining the backend to be used.
seed : integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
model = None
distance = None
kernel = None
rng = None
n_samples = None
n_samples_per_param = None
epsilon = None
smooth_distances_bds = None
all_distances_bds = None
backend = None
def __init__(self, root_models, distances, backend, kernel=None, seed=None):
self.model = root_models
# We define the joint Linear combination distance using all the distances for each individual models
self.distance = LinearCombination(root_models, distances)
if (kernel is None):
mapping, garbage_index = self._get_mapping()
models = []
for mdl, mdl_index in mapping:
models.append(mdl)
kernel = DefaultKernel(models)
self.kernel = kernel
self.backend = backend
self.rng = np.random.RandomState(seed)
# these are usually big tables, so we broadcast them to have them once
# per executor instead of once per task
self.smooth_distances_bds = None
self.all_distances_bds = None
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
self.simulation_counter = 0
def sample(self, observations, steps, epsilon, n_samples = 10000, n_samples_per_param = 1, beta = 2, delta = 0.2, v = 0.3, ar_cutoff = 0.5, resample = None, n_update = None, adaptcov = 1, full_output=0, journal_file = None):
"""Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations : list
A list, containing lists describing the observed data sets
steps : integer
Maximum number of iterations in the sequential algorithm ("generations")
epsilon : numpy.float
A proposed value of threshold to start with.
n_samples : integer, optional
Number of samples to generate. The default value is 10000.
n_samples_per_param : integer, optional
Number of data points in each simulated data set. The default value is 1.
beta : numpy.float
Tuning parameter of SABC.
delta : numpy.float
Tuning parameter of SABC.
v : numpy.float, optional
Tuning parameter of SABC. The default value is 0.3.
ar_cutoff : numpy.float
Acceptance ratio cutoff. The default value is 0.5.
resample: int, optional
Resample after this many acceptances. The default value is n_samples.
n_update: int, optional
Number of perturbed parameters at each step. The default value is n_samples.
adaptcov : boolean, optional
Whether to adapt the covariance matrix during the iterations. The default value is 1 (True).
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
journal_file: str, optional
Filename of a journal file containing an already saved journal, from which the inference continues.
The default value is None.
Returns
-------
abcpy.output.Journal
A journal containing simulation results, metadata and optionally intermediate results.
"""
global broken_preemptively
self.sample_from_prior(rng=self.rng)
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.epsilon = epsilon
self.n_samples = n_samples
self.n_samples_per_param = n_samples_per_param
if(journal_file is None):
journal = Journal(full_output)
journal.configuration["type_model"] = [type(model).__name__ for model in self.model]
journal.configuration["type_dist_func"] = type(self.distance).__name__
journal.configuration["type_kernel_func"] = type(self.kernel)
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["beta"] = beta
journal.configuration["delta"] = delta
journal.configuration["v"] = v
journal.configuration["ar_cutoff"] = ar_cutoff
journal.configuration["resample"] = resample
journal.configuration["n_update"] = n_update
journal.configuration["adaptcov"] = adaptcov
journal.configuration["full_output"] = full_output
else:
journal = Journal.fromFile(journal_file)
accepted_parameters = np.zeros(shape=(n_samples, len(self.get_parameters(self.model))))
distances = np.zeros(shape=(n_samples,))
smooth_distances = np.zeros(shape=(n_samples,))
accepted_weights = np.ones(shape=(n_samples, 1))
all_distances = None
accepted_cov_mat = None
if resample is None:
resample = n_samples
if n_update is None:
n_update = n_samples
sample_array = np.ones(shape=(steps,))
sample_array[0] = n_samples
sample_array[1:] = n_update
## Acceptance counter to determine the resampling step
accept = 0
samples_until = 0
## Counter whether broken preemptively
broken_preemptively = False
for aStep in range(0, steps):
print(aStep)
if(aStep==0 and journal_file is not None):
accepted_parameters=journal.parameters[-1]
accepted_weights=journal.weights[-1]
# Broadcast accepted parameters and accepted weights
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=accepted_weights)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
#Broadcast Accepted Kernel parameters
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
if accepted_parameters.shape[1] > 1:
accepted_cov_mats = [beta * new_cov_mat + 0.0001 * np.trace(new_cov_mat) * np.eye(len(new_cov_mat)) for
new_cov_mat in new_cov_mats]
else:
accepted_cov_mats = [beta*new_cov_mat + 0.0001*(new_cov_mat)*np.eye(accepted_parameters.shape[1]) for new_cov_mat in new_cov_mats]
# Broadcast Accepted Covariance Matrix
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
# main SABC algorithm
# print("INFO: Initialization of SABC")
seed_arr = self.rng.randint(0, np.iinfo(np.uint32).max, size=int(sample_array[aStep]), dtype=np.uint32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
index_arr = self.rng.randint(0, self.n_samples, size=int(sample_array[aStep]), dtype=np.uint32)
data_arr = []
for i in range(len(rng_arr)):
data_arr.append([rng_arr[i], index_arr[i]])
data_pds = self.backend.parallelize(data_arr)
# 0: update remotely required variables
# print("INFO: Broadcasting parameters.")
self.epsilon = epsilon
self._update_broadcasts(smooth_distances, all_distances)
# 1: Calculate parameters
# print("INFO: Initial accepted parameter parameters")
params_and_dists_pds = self.backend.map(self._accept_parameter, data_pds)
params_and_dists = self.backend.collect(params_and_dists_pds)
new_parameters, new_distances, new_all_parameters, new_all_distances, index, acceptance, counter = [list(t) for t in
zip(
*params_and_dists)]
# Keeping counter of number of simulations
for count in counter:
self.simulation_counter+=count
new_parameters = np.array(new_parameters)
new_distances = np.array(new_distances)
new_all_distances = np.concatenate(new_all_distances)
index = np.array(index)
acceptance = np.array(acceptance)
# Reading all_distances at Initial step
if aStep == 0:
index = np.arange(n_samples)
accept = 0
all_distances = new_all_distances
# Initialize/Update the accepted parameters and their corresponding distances
accepted_parameters[index[acceptance == 1], :] = new_parameters[acceptance == 1, :]
distances[index[acceptance == 1]] = new_distances[acceptance == 1]
# 2: Smoothing of the distances
smooth_distances[index[acceptance == 1]] = self._smoother_distance(distances[index[acceptance == 1]],
all_distances)
# 3: Initialize/Update U, epsilon and covariance of perturbation kernel
if aStep == 0:
U = self._average_redefined_distance(self._smoother_distance(all_distances, all_distances), epsilon)
else:
U = np.mean(smooth_distances)
epsilon = self._schedule(U, v)
# 4: Show progress and if acceptance rate smaller than a value break the iteration
if aStep > 0:
accept = accept + np.sum(acceptance)
samples_until = samples_until + sample_array[aStep]
acceptance_rate = accept / samples_until
print(
'updates: ', np.sum(sample_array[1:aStep + 1]) / np.sum(sample_array[1:]) * 100, ' epsilon: ', epsilon,
'u.mean: ', U, 'acceptance rate: ', acceptance_rate)
if acceptance_rate < ar_cutoff:
broken_preemptively = True
break
# 5: Resampling if number of accepted particles greater than resample
if accept >= resample and U > 1e-100:
## Weighted resampling:
weight = np.exp(-smooth_distances * delta / U)
weight = weight / sum(weight)
index_resampled = self.rng.choice(np.arange(n_samples), n_samples, replace=True, p=weight)
accepted_parameters = accepted_parameters[index_resampled, :]
smooth_distances = smooth_distances[index_resampled]
## Update U and epsilon:
epsilon = epsilon * (1 - delta)
U = np.mean(smooth_distances)
epsilon = self._schedule(U, v)
## Print effective sampling size
print('Resampling: Effective sampling size: ', 1 / sum(pow(weight / sum(weight), 2)))
accept = 0
samples_until = 0
## Compute and broadcast accepted parameters, accepted kernel parameters and accepted Covariance matrix
# Broadcast Accepted parameters and add to journal
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
# Compute accepted kernel parameters and broadcast them
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
# Compute Kernel Covariance Matrix and broadcast it
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
if accepted_parameters.shape[1] > 1:
accepted_cov_mats = [beta * new_cov_mat + 0.0001 * np.trace(new_cov_mat) * np.eye(len(new_cov_mat))
for new_cov_mat in new_cov_mats]
else:
accepted_cov_mats = [
beta * new_cov_mat + 0.0001 * (new_cov_mat) * np.eye(accepted_parameters.shape[1])
for new_cov_mat in new_cov_mats]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
if full_output == 1 and aStep <= steps - 1:
## Saving intermediate configuration to output journal.
print('Saving after resampling')
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(copy.deepcopy(accepted_weights))
journal.add_distances(copy.deepcopy(distances))
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
else:
## Compute and broadcast accepted parameters, accepted kernel parameters and accepted Covariance matrix
# Broadcast Accepted parameters
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
# Compute accepted kernel parameters and broadcast them
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
# Compute Kernel Covariance Matrix and broadcast it
new_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
if accepted_parameters.shape[1] > 1:
accepted_cov_mats = [beta * new_cov_mat + 0.0001 * np.trace(new_cov_mat) * np.eye(len(new_cov_mat))
for new_cov_mat in new_cov_mats]
else:
accepted_cov_mats = [
beta * new_cov_mat + 0.0001 * (new_cov_mat) * np.eye(accepted_parameters.shape[1])
for new_cov_mat in new_cov_mats]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
if full_output == 1 and aStep <= steps - 1:
## Saving intermediate configuration to output journal.
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(copy.deepcopy(accepted_weights))
journal.add_distances(copy.deepcopy(distances))
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
# Add epsilon_arr, number of final steps and final output to the journal
# print("INFO: Saving final configuration to output journal.")
if (full_output == 0) or (full_output == 1 and broken_preemptively and aStep <= steps - 1):
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(copy.deepcopy(accepted_weights))
journal.add_distances(copy.deepcopy(distances))
self.accepted_parameters_manager.update_broadcast(self.backend,accepted_parameters=accepted_parameters,accepted_weights=accepted_weights)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
journal.configuration["steps"] = aStep + 1
journal.configuration["epsilon"] = epsilon
return journal
def _smoother_distance(self, distance, old_distance):
"""Smooths the distance using the Equation 14 of [1].
[1] C. Albert, H. R. Kuensch and A. Scheidegger. A Simulated Annealing Approach to
Approximate Bayes Computations. Statistics and Computing 0960-3174 (2014).
Parameters
----------
distance: numpy.ndarray
Current distance between the simulated and observed data
old_distance: numpy.ndarray
Last distance between the simulated and observed data
Returns
-------
numpy.ndarray
Smoothed distance
"""
smoothed_distance = np.zeros(shape=(len(distance),))
for ind in range(0, len(distance)):
if distance[ind] < np.min(old_distance):
smoothed_distance[ind] = (distance[ind] / np.min(old_distance)) / len(old_distance)
else:
smoothed_distance[ind] = np.mean(np.array(old_distance) < distance[ind])
return smoothed_distance
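The smoothing rule above (Equation 14 of Albert et al.) can be exercised standalone; a minimal sketch with hypothetical input values, reimplemented outside the class:

```python
import numpy as np

def smoother_distance(distance, old_distance):
    # A new distance below the previous minimum is rescaled towards zero;
    # otherwise it becomes the empirical CDF value of the old distances
    # evaluated at that point.
    smoothed = np.zeros(len(distance))
    for i, d in enumerate(distance):
        if d < np.min(old_distance):
            smoothed[i] = (d / np.min(old_distance)) / len(old_distance)
        else:
            smoothed[i] = np.mean(np.array(old_distance) < d)
    return smoothed

print(smoother_distance(np.array([0.5, 2.5]), np.array([1.0, 2.0, 3.0])))
# -> [0.16666667 0.66666667]
```

Note that the smoothed distances always lie in [0, 1], which is what makes the annealing schedule below well behaved.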
def _average_redefined_distance(self, distance, epsilon):
"""
Function to calculate the weighted average of the distance
Parameters
----------
distance: numpy.ndarray
Distance between simulated and observed data set
epsilon: float
threshold
Returns
-------
numpy.ndarray
Weighted average of the distance
"""
if epsilon == 0:
U = 0
else:
U = np.average(distance, weights=np.exp(-distance / epsilon))
return U
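The weighted average above gives exponentially more weight to small distances; a standalone sketch with hypothetical values:

```python
import numpy as np

def average_redefined_distance(distance, epsilon):
    # Weighted mean of distances; weight exp(-d / epsilon) favours small d
    if epsilon == 0:
        return 0
    return np.average(distance, weights=np.exp(-distance / epsilon))

# With epsilon = 1, the average of [1.0, 2.0] is pulled towards 1.0
U = average_redefined_distance(np.array([1.0, 2.0]), 1.0)
print(U)  # ~1.2689, below the unweighted mean of 1.5
```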
def _schedule(self, rho, v):
if rho < 1e-100:
epsilon = 0
else:
fun = lambda epsilon: pow(epsilon, 2) + v * pow(epsilon, 3 / 2) - pow(rho, 2)
epsilon = optimize.fsolve(fun, rho / 2)
return epsilon
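`_schedule` picks the next threshold by numerically solving epsilon**2 + v * epsilon**1.5 = rho**2 with `scipy.optimize.fsolve`; a standalone sketch:

```python
from scipy import optimize

def schedule(rho, v):
    # Solve epsilon**2 + v * epsilon**1.5 = rho**2 for the next threshold
    if rho < 1e-100:
        return 0.0
    fun = lambda eps: eps ** 2 + v * eps ** 1.5 - rho ** 2
    return optimize.fsolve(fun, rho / 2)[0]

print(schedule(1.0, 0.0))  # v = 0 reduces the equation to epsilon = rho, i.e. 1.0
print(schedule(1.0, 1.0))  # a positive velocity v shrinks the threshold (~0.67)
```

The starting point `rho / 2` keeps the iterates positive, so the fractional power stays real.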
def _update_broadcasts(self, smooth_distances, all_distances):
def destroy(bc):
if bc is not None:
bc.unpersist()
# bc.destroy()
if smooth_distances is not None:
self.smooth_distances_bds = self.backend.broadcast(smooth_distances)
if all_distances is not None:
self.all_distances_bds = self.backend.broadcast(all_distances)
# define helper functions for map step
def _accept_parameter(self, data):
"""
Samples a single model parameter and simulates from it until
accepted with probability exp[-rho(x,y)/epsilon].
Parameters
----------
data: list
A two-element list [rng, index]: a random number generator and an index into the accepted parameters.
Returns
-------
numpy.ndarray
accepted parameter
"""
if isinstance(data, np.ndarray):
data = data.tolist()
rng = data[0]
index = data[1]
rng.seed(rng.randint(np.iinfo(np.uint32).max, dtype=np.uint32))
all_parameters = []
all_distances = []
acceptance = 0
counter = 0
if self.accepted_parameters_manager.accepted_cov_mats_bds is None:
while acceptance == 0:
self.sample_from_prior(rng=rng)
new_theta = np.array(self.get_parameters()).reshape(-1,)
all_parameters.append(new_theta)
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
all_distances.append(distance)
acceptance = rng.binomial(1, np.exp(-distance / self.epsilon), 1)
acceptance = 1  # at initialization, every sampled parameter is accepted
else:
## Select one arbitrary particle:
index = rng.choice(self.n_samples, size=1)[0]
## Sample proposal parameter and calculate new distance:
theta = self.accepted_parameters_manager.accepted_parameters_bds.value()[index,:]
while True:
perturbation_output = self.perturb(index, rng=rng)
if perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1]) != 0:
new_theta = np.array(perturbation_output[1]).reshape(-1,)
break
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
smooth_distance = self._smoother_distance([distance], self.all_distances_bds.value())
## Calculate acceptance probability:
ratio_prior_prob = self.pdf_of_prior(self.model, perturbation_output[1]) / self.pdf_of_prior(self.model,
self.accepted_parameters_manager.accepted_parameters_bds.value()[index, :])
ratio_likelihood_prob = np.exp((self.smooth_distances_bds.value()[index] - smooth_distance) / self.epsilon)
acceptance_prob = ratio_prior_prob * ratio_likelihood_prob
## If accepted
if rng.rand(1) < acceptance_prob:
acceptance = 1
else:
distance = np.inf
return (new_theta, distance, all_parameters, all_distances, index, acceptance, counter)
class ABCsubsim(BaseDiscrepancy, InferenceMethod):
"""This base class implements Approximate Bayesian Computation by subset simulation (ABCsubsim) algorithm of [1].
[1] M. Chiachio, J. L. Beck, J. Chiachio, and G. Rus., Approximate Bayesian computation by subset
simulation. SIAM J. Sci. Comput., 36(3):A1339–A1358, 2014/10/03 2014.
Parameters
----------
model : list
A list of the Probabilistic models corresponding to the observed datasets
distance : abcpy.distances.Distance
Distance object defining the distance used to compare the simulated and observed data sets.
kernel : abcpy.distributions.Distribution
Distribution object defining the perturbation kernel needed for the sampling.
backend : abcpy.backends.Backend
Backend object defining the backend to be used.
seed : integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
model = None
distance = None
kernel = None
rng = None
anneal_parameter = None
n_samples = None
n_samples_per_param = None
chain_length = None
backend = None
def __init__(self, root_models, distances, backend, kernel=None,seed=None):
self.model = root_models
# We define the joint Linear combination distance using all the distances for each individual models
self.distance = LinearCombination(root_models, distances)
if kernel is None:
mapping, garbage_index = self._get_mapping()
models = []
for mdl, mdl_index in mapping:
models.append(mdl)
kernel = DefaultKernel(models)
self.kernel = kernel
self.backend = backend
self.rng = np.random.RandomState(seed)
self.anneal_parameter = None
# these are usually big tables, so we broadcast them to have them once
# per executor instead of once per task
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
self.simulation_counter = 0
def sample(self, observations, steps, n_samples = 10000, n_samples_per_param = 1, chain_length = 10, ap_change_cutoff = 10, full_output=0, journal_file = None):
"""Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations : list
A list, containing lists describing the observed data sets
steps : integer
Number of iterations in the sequential algorithm ("generations")
ap_change_cutoff : float, optional
The cutoff value for the percentage change in the anneal parameter. If the change is less than
ap_change_cutoff the iterations are stopped. The default value is 10.
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
Returns
-------
abcpy.output.Journal
A journal containing simulation results, metadata and optionally intermediate results.
"""
self.sample_from_prior(rng=self.rng)
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.chain_length = chain_length
self.n_samples = n_samples
self.n_samples_per_param = n_samples_per_param
if journal_file is None:
journal = Journal(full_output)
journal.configuration["type_model"] = [type(model).__name__ for model in self.model]
journal.configuration["type_dist_func"] = type(self.distance).__name__
journal.configuration["type_kernel_func"] = type(self.kernel)
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["chain_length"] = self.chain_length
journal.configuration["ap_change_cutoff"] = ap_change_cutoff
journal.configuration["full_output"] = full_output
else:
journal = Journal.fromFile(journal_file)
accepted_parameters = None
accepted_weights = np.ones(shape=(n_samples, 1))
accepted_cov_mat = None
anneal_parameter = 0
anneal_parameter_old = 0
temp_chain_length = 1
for aStep in range(0, steps):
if aStep == 0 and journal_file is not None:
accepted_parameters = journal.parameters[-1]
accepted_weights = journal.weights[-1]
accepted_cov_mats = journal.opt_values[-1]
# main ABCsubsim algorithm
# print("INFO: Initialization of ABCsubsim")
seed_arr = self.rng.randint(0, np.iinfo(np.uint32).max, size=int(n_samples / temp_chain_length),
dtype=np.uint32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
index_arr = np.arange(int(n_samples / temp_chain_length))
rng_and_index_arr = np.column_stack((rng_arr, index_arr))
rng_and_index_pds = self.backend.parallelize(rng_and_index_arr)
# 0: update remotely required variables
# print("INFO: Broadcasting parameters.")
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
# 1: Calculate parameters
# print("INFO: Initial accepted parameter parameters")
params_and_dists_pds = self.backend.map(self._accept_parameter, rng_and_index_pds)
params_and_dists = self.backend.collect(params_and_dists_pds)
new_parameters, new_distances, counter = [list(t) for t in zip(*params_and_dists)]
for count in counter:
self.simulation_counter+=count
accepted_parameters = np.concatenate(new_parameters)
distances = np.concatenate(new_distances)
# 2: Sort and renumber samples
sort_index = np.argsort(distances)
distances = distances[sort_index]
accepted_parameters = accepted_parameters[sort_index, :]
# 3: Calculate and broadcast annealing parameters
temp_chain_length = chain_length
if aStep > 0:
anneal_parameter_old = anneal_parameter
anneal_parameter = 0.5 * (
distances[int(n_samples / temp_chain_length)] + distances[int(n_samples / temp_chain_length) + 1])
self.anneal_parameter = anneal_parameter
# 4: Update proposal covariance matrix (Parallelized)
if aStep == 0:
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
accepted_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
else:
accepted_cov_mats = [pow(2, 1) * cov_mat for cov_mat in accepted_cov_mats]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
seed_arr = self.rng.randint(0, np.iinfo(np.uint32).max, size=10, dtype=np.uint32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
index_arr = np.arange(10)
rng_and_index_arr = np.column_stack((rng_arr, index_arr))
rng_and_index_pds = self.backend.parallelize(rng_and_index_arr)
cov_mats_index_pds = self.backend.map(self._update_cov_mat, rng_and_index_pds)
cov_mats_index = self.backend.collect(cov_mats_index_pds)
cov_mats, T, accept_index, counter = [list(t) for t in zip(*cov_mats_index)]
for count in counter:
self.simulation_counter+=count
for ind in range(10):
if accept_index[ind] == 1:
accepted_cov_mats = cov_mats[ind]
break
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
# print("INFO: Saving intermediate configuration to output journal.")
if full_output == 1:
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(copy.deepcopy(accepted_weights))
journal.add_opt_values(accepted_cov_mats)
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
# Show progress
anneal_parameter_change_percentage = 100 * abs(anneal_parameter_old - anneal_parameter) / abs(anneal_parameter)
print('Steps: ', aStep, 'annealing parameter: ', anneal_parameter, 'change (%) in annealing parameter: ',
anneal_parameter_change_percentage)
if anneal_parameter_change_percentage < ap_change_cutoff:
break
# Add anneal_parameter, number of final steps and final output to the journal
# print("INFO: Saving final configuration to output journal.")
if full_output == 0:
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(copy.deepcopy(accepted_weights))
journal.add_opt_values(accepted_cov_mats)
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
journal.configuration["steps"] = aStep + 1
journal.configuration["anneal_parameter"] = anneal_parameter
return journal
# define helper functions for map step
def _accept_parameter(self, rng_and_index):
"""
Samples a single model parameter and simulates from it until the
distance between the simulated outcome and the observation is
smaller than epsilon.
Parameters
----------
rng_and_index: numpy.ndarray
2 dimensional array. The first entry is a random number generator;
the second entry defines the index in the data set.
Returns
-------
numpy.ndarray
accepted parameter
"""
rng = rng_and_index[0]
index = rng_and_index[1]
rng.seed(rng.randint(np.iinfo(np.uint32).max, dtype=np.uint32))
mapping_for_kernels, garbage_index = self.accepted_parameters_manager.get_mapping(
self.accepted_parameters_manager.model)
result_theta = []
result_distance = []
counter = 0
if self.accepted_parameters_manager.accepted_parameters_bds is None:
self.sample_from_prior(rng=rng)
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
result_theta.append(self.get_parameters())
result_distance.append(distance)
else:
theta = np.array(self.accepted_parameters_manager.accepted_parameters_bds.value()[index]).reshape(-1,)
self.set_parameters(theta)
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
result_theta.append(theta)
result_distance.append(distance)
for ind in range(0, self.chain_length - 1):
while True:
perturbation_output = self.perturb(index, rng=rng)
if perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1])!= 0:
break
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
new_distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
## Calculate acceptance probability:
ratio_prior_prob = self.pdf_of_prior(self.model, perturbation_output[1]) / self.pdf_of_prior(self.model, theta)
kernel_numerator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager,index, theta)
kernel_denominator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, index, perturbation_output[1])
ratio_likelihood_prob = kernel_numerator / kernel_denominator
acceptance_prob = min(1, ratio_prior_prob * ratio_likelihood_prob) * (
new_distance < self.anneal_parameter)
## If accepted
if rng.binomial(1, acceptance_prob) == 1:
result_theta.append(perturbation_output[1])
result_distance.append(new_distance)
theta = perturbation_output[1]
distance = new_distance
else:
result_theta.append(theta)
result_distance.append(distance)
return (result_theta, result_distance, counter)
def _update_cov_mat(self, rng_t):
"""
Updates the covariance matrix.
Parameters
----------
rng_t: numpy.ndarray
2 dimensional array. The first entry is a random number generator.
The second entry defines the way in which the accepted covariance matrix is transformed.
Returns
-------
numpy.ndarray
accepted covariance matrix
"""
rng = rng_t[0]
t = rng_t[1]
rng.seed(rng.randint(np.iinfo(np.uint32).max, dtype=np.uint32))
acceptance = 0
accepted_cov_mats_transformed = [cov_mat*pow(2.0, -2.0 * t) for cov_mat in self.accepted_parameters_manager.accepted_cov_mats_bds.value()]
theta = np.array(self.accepted_parameters_manager.accepted_parameters_bds.value()[0]).reshape(-1,)
mapping_for_kernels, garbage_index = self.accepted_parameters_manager.get_mapping(
self.accepted_parameters_manager.model)
counter = 0
for ind in range(0, self.chain_length):
while True:
perturbation_output = self.perturb(0, rng=rng)
if perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1]) != 0:
break
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
new_distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
## Calculate acceptance probability:
ratio_prior_prob = self.pdf_of_prior(self.model, perturbation_output[1]) / self.pdf_of_prior(self.model, theta)
kernel_numerator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager,0 , theta)
kernel_denominator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager,0 , perturbation_output[1])
ratio_likelihood_prob = kernel_numerator / kernel_denominator
acceptance_prob = min(1, ratio_prior_prob * ratio_likelihood_prob) * (new_distance < self.anneal_parameter)
## If accepted
if rng.binomial(1, acceptance_prob) == 1:
theta = perturbation_output[1]
acceptance = acceptance + 1
if 0.3 <= acceptance / 10 <= 0.5:
return (accepted_cov_mats_transformed, t, 1, counter)
else:
return (accepted_cov_mats_transformed, t, 0, counter)
class RSMCABC(BaseDiscrepancy, InferenceMethod):
"""This base class implements Replenishment Sequential Monte Carlo Approximate Bayesian computation of
Drovandi and Pettitt [1].
[1] C. C. Drovandi and A. N. Pettitt, Estimation of parameters for macroparasite population evolution using
approximate Bayesian computation. Biometrics 67(1):225–233, 2011.
Parameters
----------
model : list
A list of the Probabilistic models corresponding to the observed datasets
distance : abcpy.distances.Distance
Distance object defining the distance measure used to compare simulated and observed data sets.
kernel : abcpy.distributions.Distribution
Distribution object defining the perturbation kernel needed for the sampling.
backend : abcpy.backends.Backend
Backend object defining the backend to be used.
seed : integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
model = None
distance = None
kernel = None
R = None
rng = None
n_samples = None
n_samples_per_param = None
alpha = None
accepted_dist_bds = None
backend = None
def __init__(self, root_models, distances, backend, kernel=None,seed=None):
self.model = root_models
# We define the joint Linear combination distance using all the distances for each individual models
self.distance = LinearCombination(root_models, distances)
if kernel is None:
mapping, garbage_index = self._get_mapping()
models = []
for mdl, mdl_index in mapping:
models.append(mdl)
kernel = DefaultKernel(models)
self.kernel = kernel
self.backend = backend
self.R=None
self.rng = np.random.RandomState(seed)
# these are usually big tables, so we broadcast them to have them once
# per executor instead of once per task
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
self.accepted_dist_bds = None
self.simulation_counter = 0
def sample(self, observations, steps, n_samples = 10000, n_samples_per_param = 1, alpha = 0.1, epsilon_init = 100, epsilon_final = 0.1, const = 0.01, covFactor = 2.0, full_output=0, journal_file = None):
"""Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations : list
A list, containing lists describing the observed data sets
steps : integer
Number of iterations in the sequential algorithm ("generations")
n_samples : integer, optional
Number of samples to generate. The default value is 10000.
n_samples_per_param : integer, optional
Number of data points in each simulated data set. The default value is 1.
alpha : float, optional
A parameter taking values between [0,1], the default value is 0.1.
epsilon_init : float, optional
Initial value of threshold, the default is 100
epsilon_final : float, optional
Terminal value of threshold, the default is 0.1
const : float, optional
A constant used to compute the acceptance probability. The default value is 0.01.
covFactor : float, optional
scaling parameter of the covariance matrix. The default value is 2.
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
Returns
-------
abcpy.output.Journal
A journal containing simulation results, metadata and optionally intermediate results.
"""
self.sample_from_prior(rng=self.rng)
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.alpha = alpha
self.n_samples = n_samples
self.n_samples_per_param = n_samples_per_param
if journal_file is None:
journal = Journal(full_output)
journal.configuration["type_model"] = [type(model).__name__ for model in self.model]
journal.configuration["type_dist_func"] = type(self.distance).__name__
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["steps"] = steps
else:
journal = Journal.fromFile(journal_file)
accepted_parameters = None
accepted_cov_mat = None
accepted_dist = None
# main RSMCABC algorithm
# print("INFO: Starting RSMCABC iterations.")
for aStep in range(steps):
if aStep == 0 and journal_file is not None:
accepted_parameters = journal.parameters[-1]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
accepted_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
accepted_cov_mats = [covFactor * cov_mat for cov_mat in accepted_cov_mats]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
# 0: Compute epsilon, compute new covariance matrix for Kernel,
# and finally draw new/perturbed samples using the prior or the MCMC kernel
# print("DEBUG: Iteration " + str(aStep) + " of RSMCABC algorithm.")
if aStep == 0:
n_replenish = n_samples
# Compute epsilon
epsilon = [epsilon_init]
R = 1
if journal_file is None:
accepted_cov_mats = None
else:
# Compute epsilon
epsilon.append(accepted_dist[-1])
# Calculate covariance
# print("INFO: Calculating covariance matrix.")
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
accepted_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
accepted_cov_mats = [covFactor*cov_mat for cov_mat in accepted_cov_mats]
if epsilon[-1] < epsilon_final:
break
seed_arr = self.rng.randint(0, np.iinfo(np.uint32).max, size=n_replenish, dtype=np.uint32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
rng_pds = self.backend.parallelize(rng_arr)
# update remotely required variables
# print("INFO: Broadcasting parameters.")
self.epsilon = epsilon
self.R = R
# Broadcast updated variable
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
self._update_broadcasts(accepted_dist)
# calculate resample parameters
# print("INFO: Resampling parameters")
params_and_dist_index_pds = self.backend.map(self._accept_parameter, rng_pds)
params_and_dist_index = self.backend.collect(params_and_dist_index_pds)
new_parameters, new_dist, new_index, counter = [list(t) for t in zip(*params_and_dist_index)]
new_parameters = np.array(new_parameters)
new_dist = np.array(new_dist)
new_index = np.array(new_index)
for count in counter:
self.simulation_counter+=count
# 1: Update all parameters, compute acceptance probability, compute epsilon
if len(new_dist) == self.n_samples:
accepted_parameters = new_parameters
accepted_dist = new_dist
else:
accepted_parameters = np.concatenate((accepted_parameters, new_parameters))
accepted_dist = np.concatenate((accepted_dist, new_dist))
# print("INFO: Saving configuration to output journal.")
if (full_output == 1 and aStep <= steps - 1) or (full_output == 0 and aStep == steps - 1):
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(np.ones(shape=(len(accepted_parameters), 1)) * (1 / len(accepted_parameters)))
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
# 2: Compute acceptance probability and set R
# print(aStep)
# print(new_index)
prob_acceptance = sum(new_index) / (R * n_replenish)
if prob_acceptance == 1 or prob_acceptance == 0:
R = 1
else:
R = int(np.log(const) / np.log(1 - prob_acceptance))
n_replenish = round(n_samples * alpha)
accepted_params_and_dist = zip(accepted_dist, accepted_parameters)
accepted_params_and_dist = sorted(accepted_params_and_dist, key = lambda x: x[0])
accepted_dist, accepted_parameters = [list(t) for t in zip(*accepted_params_and_dist)]
# Throw away N_alpha particles with largest dist
accepted_parameters = np.delete(accepted_parameters, np.arange(round(n_samples * alpha)) + (
self.n_samples - round(n_samples * alpha)), 0)
accepted_dist = np.delete(accepted_dist,
np.arange(round(n_samples * alpha)) + (n_samples - round(n_samples * alpha)),
0)
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
# Add epsilon_arr to the journal
journal.configuration["epsilon_arr"] = epsilon
return journal
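The pruning step above sorts the particles by distance and discards the n_alpha particles with the largest distances. A stand-alone sketch of that step (the helper name below is illustrative, not part of the abcpy API):

```python
import numpy as np

# Illustrative sketch of the pruning step: keep the particles with the
# smallest distances and discard the n_alpha worst ones.
def prune_particles(parameters, distances, n_alpha):
    order = np.argsort(distances)              # indices from best to worst
    keep = order[:len(distances) - n_alpha]    # smallest distances survive
    return parameters[keep], distances[keep]

params = np.array([[1.0], [2.0], [3.0], [4.0]])
dists = np.array([0.3, 0.1, 0.9, 0.5])
kept_params, kept_dists = prune_particles(params, dists, n_alpha=2)
# kept_dists is [0.1, 0.3]: the two worst particles (0.9 and 0.5) are dropped
```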
def _update_broadcasts(self, accepted_dist):
        def destroy(bc):
            if bc is not None:
                bc.unpersist()
                # bc.destroy()
        if accepted_dist is not None:
            self.accepted_dist_bds = self.backend.broadcast(accepted_dist)
# define helper functions for map step
def _accept_parameter(self, rng):
"""
Samples a single model parameter and simulate from it until
distance between simulated outcome and the observation is
smaller than epsilon.
Parameters
----------
seed: integer
Initial seed for the random number generator.
Returns
-------
numpy.ndarray
accepted parameter
"""
rng.seed(rng.randint(np.iinfo(np.uint32).max, dtype=np.uint32))
distance = self.distance.dist_max()
mapping_for_kernels, garbage_index = self.accepted_parameters_manager.get_mapping(
self.accepted_parameters_manager.model)
counter = 0
        if self.accepted_parameters_manager.accepted_parameters_bds is None:
while distance > self.epsilon[-1]:
self.sample_from_prior(rng=rng)
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
index_accept = 1
else:
index = rng.choice(len(self.accepted_parameters_manager.accepted_parameters_bds.value()), size=1)
theta = np.array(self.accepted_parameters_manager.accepted_parameters_bds.value()[index[0]]).reshape(-1,)
index_accept = 0.0
for ind in range(self.R):
while True:
perturbation_output = self.perturb(index[0], rng=rng)
if perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1]) != 0:
break
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
distance = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
ratio_prior_prob = self.pdf_of_prior(self.model, perturbation_output[1]) / self.pdf_of_prior(self.model, theta)
kernel_numerator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, index[0], theta)
kernel_denominator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, index[0], perturbation_output[1])
ratio_kernel_prob = kernel_numerator / kernel_denominator
probability_acceptance = min(1, ratio_prior_prob * ratio_kernel_prob)
if distance < self.epsilon[-1] and rng.binomial(1, probability_acceptance) == 1:
index_accept += 1
else:
self.set_parameters(theta)
distance = self.accepted_dist_bds.value()[index[0]]
return (self.get_parameters(self.model), distance, index_accept, counter)
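The adaptive repetition count R used above follows a simple geometric argument: if each MCMC trial succeeds with the observed acceptance probability p, then all R trials fail with probability (1 - p)**R, and R is chosen so that this is roughly `const`. A stand-alone sketch of that formula (illustrative, not abcpy API):

```python
import numpy as np

# Solving (1 - p)**R = const for R gives R = log(const) / log(1 - p).
def compute_repetitions(prob_acceptance, const=0.01):
    if prob_acceptance in (0.0, 1.0):
        return 1                         # degenerate cases: a single trial
    return int(np.log(const) / np.log(1.0 - prob_acceptance))

low = compute_repetitions(0.1)   # low acceptance -> many repetitions
high = compute_repetitions(0.9)  # high acceptance -> very few repetitions
```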
class APMCABC(BaseDiscrepancy, InferenceMethod):
"""This base class implements Adaptive Population Monte Carlo Approximate Bayesian computation of
M. Lenormand et al. [1].
[1] M. Lenormand, F. Jabot and G. Deffuant, Adaptive approximate Bayesian computation
for complex models. Computational Statistics, 28:2777–2796, 2013.
Parameters
----------
model : list
A list of the Probabilistic models corresponding to the observed datasets
distance : abcpy.distances.Distance
Distance object defining the distance measure used to compare simulated and observed data sets.
kernel : abcpy.distributions.Distribution
Distribution object defining the perturbation kernel needed for the sampling.
backend : abcpy.backends.Backend
Backend object defining the backend to be used.
seed : integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
model = None
distance = None
kernel = None
epsilon = None
rng = None
n_samples = None
n_samples_per_param = None
alpha = None
accepted_dist = None
backend = None
def __init__(self, root_models, distances, backend, kernel = None,seed=None):
self.model = root_models
# We define the joint Linear combination distance using all the distances for each individual models
self.distance = LinearCombination(root_models, distances)
if (kernel is None):
mapping, garbage_index = self._get_mapping()
models = []
for mdl, mdl_index in mapping:
models.append(mdl)
kernel = DefaultKernel(models)
self.kernel = kernel
self.backend = backend
self.epsilon= None
self.rng = np.random.RandomState(seed)
# these are usually big tables, so we broadcast them to have them once
# per executor instead of once per task
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
self.accepted_dist_bds = None
self.simulation_counter = 0
def sample(self, observations, steps, n_samples = 10000, n_samples_per_param = 1, alpha = 0.9, acceptance_cutoff = 0.03, covFactor = 2.0, full_output=0, journal_file = None):
"""Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations : list
A list, containing lists describing the observed data sets
        steps : integer
            Number of iterations in the sequential algorithm ("generations")
n_samples : integer, optional
Number of samples to generate. The default value is 10000.
n_samples_per_param : integer, optional
Number of data points in each simulated data set. The default value is 1.
        alpha : float, optional
            A parameter taking values between [0,1]; the default value is 0.9.
        acceptance_cutoff : float, optional
            Acceptance ratio cutoff; it should be chosen between 0.01 and 0.05. The default value is 0.03.
        covFactor : float, optional
            Scaling parameter of the covariance matrix. The default value is 2.0.
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
Returns
-------
abcpy.output.Journal
A journal containing simulation results, metadata and optionally intermediate results.
"""
self.sample_from_prior(rng=self.rng)
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.alpha = alpha
self.n_samples = n_samples
self.n_samples_per_param = n_samples_per_param
if(journal_file is None):
journal = Journal(full_output)
journal.configuration["type_model"] = [type(model).__name__ for model in self.model]
journal.configuration["type_dist_func"] = type(self.distance).__name__
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["steps"] = steps
else:
journal = Journal.fromFile(journal_file)
accepted_parameters = None
accepted_weights = None
accepted_cov_mats = None
accepted_dist = None
alpha_accepted_parameters = None
alpha_accepted_weights = None
alpha_accepted_dist = None
# main APMCABC algorithm
# print("INFO: Starting APMCABC iterations.")
for aStep in range(steps):
if(aStep==0 and journal_file is not None):
accepted_parameters=journal.parameters[-1]
accepted_weights=journal.weights[-1]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=accepted_weights)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
accepted_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
accepted_cov_mats = [covFactor * cov_mat for cov_mat in accepted_cov_mats]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters, accepted_weights=accepted_weights)
alpha_accepted_parameters=accepted_parameters
alpha_accepted_weights=accepted_weights
            # 0: Drawing new/perturbed samples using prior or MCMC Kernel
# print("DEBUG: Iteration " + str(aStep) + " of APMCABC algorithm.")
if aStep > 0:
n_additional_samples = n_samples - round(n_samples * alpha)
else:
n_additional_samples = n_samples
seed_arr = self.rng.randint(0, np.iinfo(np.uint32).max, size=n_additional_samples, dtype=np.uint32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
rng_pds = self.backend.parallelize(rng_arr)
# update remotely required variables
# print("INFO: Broadcasting parameters.")
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=alpha_accepted_parameters, accepted_weights=alpha_accepted_weights, accepted_cov_mats=accepted_cov_mats)
self._update_broadcasts(alpha_accepted_dist)
# calculate resample parameters
# print("INFO: Resampling parameters")
params_and_dist_weights_pds = self.backend.map(self._accept_parameter, rng_pds)
params_and_dist_weights = self.backend.collect(params_and_dist_weights_pds)
new_parameters, new_dist, new_weights, counter = [list(t) for t in zip(*params_and_dist_weights)]
new_parameters = np.array(new_parameters)
new_dist = np.array(new_dist)
new_weights = np.array(new_weights).reshape(n_additional_samples, 1)
for count in counter:
self.simulation_counter+=count
# 1: Update all parameters, compute acceptance probability, compute epsilon
if len(new_weights) == n_samples:
accepted_parameters = new_parameters
accepted_dist = new_dist
accepted_weights = new_weights
# Compute acceptance probability
prob_acceptance = 1
# Compute epsilon
epsilon = [np.percentile(accepted_dist, alpha * 100)]
else:
accepted_parameters = np.concatenate((alpha_accepted_parameters, new_parameters))
accepted_dist = np.concatenate((alpha_accepted_dist, new_dist))
accepted_weights = np.concatenate((alpha_accepted_weights, new_weights))
# Compute acceptance probability
prob_acceptance = sum(new_dist < epsilon[-1]) / len(new_dist)
# Compute epsilon
epsilon.append(np.percentile(accepted_dist, alpha * 100))
# 2: Update alpha_parameters, alpha_dist and alpha_weights
index_alpha = accepted_dist < epsilon[-1]
alpha_accepted_parameters = accepted_parameters[index_alpha, :]
alpha_accepted_weights = accepted_weights[index_alpha] / sum(accepted_weights[index_alpha])
alpha_accepted_dist = accepted_dist[index_alpha]
# 3: calculate covariance
# print("INFO: Calculating covariance matrix.")
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=alpha_accepted_parameters, accepted_weights=alpha_accepted_weights)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
accepted_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
accepted_cov_mats = [covFactor*cov_mat for cov_mat in accepted_cov_mats]
# print("INFO: Saving configuration to output journal.")
if (full_output == 1 and aStep <= steps - 1) or (full_output == 0 and aStep == steps - 1):
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(copy.deepcopy(accepted_weights))
self.accepted_parameters_manager.update_broadcast(self.backend,
accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights)
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
# 4: Check probability of acceptance lower than acceptance_cutoff
if prob_acceptance < acceptance_cutoff:
break
# Add epsilon_arr to the journal
journal.configuration["epsilon_arr"] = epsilon
return journal
def _update_broadcasts(self, accepted_dist):
        def destroy(bc):
            if bc is not None:
                bc.unpersist()
                # bc.destroy()
self.accepted_dist_bds = self.backend.broadcast(accepted_dist)
# define helper functions for map step
def _accept_parameter(self, rng):
"""
Samples a single model parameter and simulate from it until
distance between simulated outcome and the observation is
smaller than epsilon.
Parameters
----------
seed: integer
Initial seed for the random number generator.
Returns
-------
numpy.ndarray
accepted parameter
"""
rng.seed(rng.randint(np.iinfo(np.uint32).max, dtype=np.uint32))
mapping_for_kernels, garbage_index = self.accepted_parameters_manager.get_mapping(
self.accepted_parameters_manager.model)
counter = 0
        if self.accepted_parameters_manager.accepted_parameters_bds is None:
self.sample_from_prior(rng=rng)
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
dist = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
weight = 1.0
else:
index = rng.choice(len(self.accepted_parameters_manager.accepted_weights_bds.value()), size=1,
p=self.accepted_parameters_manager.accepted_weights_bds.value().reshape(-1))
            # truncate the normal to the bounds of the parameter space of the model
            # truncating the normal like this is fine: https://arxiv.org/pdf/0907.4010v1.pdf
while True:
perturbation_output = self.perturb(index[0], rng=rng)
if perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1]) != 0:
break
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
dist = self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), y_sim)
prior_prob = self.pdf_of_prior(self.model, perturbation_output[1])
denominator = 0.0
for i in range(0, len(self.accepted_parameters_manager.accepted_weights_bds.value())):
pdf_value = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, index[0], perturbation_output[1])
denominator += self.accepted_parameters_manager.accepted_weights_bds.value()[i, 0] * pdf_value
weight = 1.0 * prior_prob / denominator
return (self.get_parameters(self.model), dist, weight, counter)
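The importance weight computed at the end of `_accept_parameter` is the prior density of the perturbed parameter divided by the kernel mixture over the previous weighted population. A minimal one-dimensional sketch with a Gaussian kernel (all names below are illustrative, not abcpy API):

```python
import numpy as np

def gaussian_pdf(x, mean, std):
    # density of N(mean, std**2) evaluated at x
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def importance_weight(theta_new, prior_pdf, old_thetas, old_weights, kernel_std):
    # weight = prior(theta_new) / sum_j w_j * K(theta_new | theta_j)
    denominator = sum(w * gaussian_pdf(theta_new, t, kernel_std)
                      for t, w in zip(old_thetas, old_weights))
    return prior_pdf(theta_new) / denominator

old_thetas = np.array([0.0, 0.5, 1.0])
old_weights = np.array([0.2, 0.5, 0.3])                      # normalized
uniform_prior = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0    # U[0, 1] prior
w = importance_weight(0.4, uniform_prior, old_thetas, old_weights, kernel_std=0.2)
```

Parameters outside the prior support get weight zero, and parameters close to heavily weighted particles are down-weighted by the larger kernel mixture in the denominator.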
class SMCABC(BaseDiscrepancy, InferenceMethod):
"""This base class implements Adaptive Population Monte Carlo Approximate Bayesian computation of
Del Moral et al. [1].
[1] P. Del Moral, A. Doucet, A. Jasra, An adaptive sequential Monte Carlo method for approximate
Bayesian computation. Statistics and Computing, 22(5):1009–1020, 2012.
Parameters
----------
model : list
A list of the Probabilistic models corresponding to the observed datasets
distance : abcpy.distances.Distance
Distance object defining the distance measure used to compare simulated and observed data sets.
kernel : abcpy.distributions.Distribution
Distribution object defining the perturbation kernel needed for the sampling.
backend : abcpy.backends.Backend
Backend object defining the backend to be used.
seed : integer, optional
Optional initial seed for the random number generator. The default value is generated randomly.
"""
model = None
distance = None
kernel = None
epsilon = None
rng = None
n_samples = None
n_samples_per_param = None
accepted_y_sim_bds = None
backend = None
def __init__(self, root_models, distances, backend, kernel = None,seed=None):
self.model = root_models
# We define the joint Linear combination distance using all the distances for each individual models
self.distance = LinearCombination(root_models, distances)
if (kernel is None):
mapping, garbage_index = self._get_mapping()
models = []
for mdl, mdl_index in mapping:
models.append(mdl)
kernel = DefaultKernel(models)
self.kernel = kernel
self.backend = backend
self.epsilon = None
self.rng = np.random.RandomState(seed)
# these are usually big tables, so we broadcast them to have them once
        # per executor instead of once per task
self.accepted_parameters_manager = AcceptedParametersManager(self.model)
self.accepted_y_sim_bds = None
self.simulation_counter = 0
def sample(self, observations, steps, n_samples = 10000, n_samples_per_param = 1, epsilon_final = 0.1, alpha = 0.95,
covFactor = 2, resample = None, full_output=0, journal_file=None):
"""Samples from the posterior distribution of the model parameter given the observed
data observations.
Parameters
----------
observations : list
A list, containing lists describing the observed data sets
        steps : integer
            Number of iterations in the sequential algorithm ("generations")
epsilon_final : float, optional
The final threshold value of epsilon to be reached. The default value is 0.1.
n_samples : integer, optional
Number of samples to generate. The default value is 10000.
n_samples_per_param : integer, optional
Number of data points in each simulated data set. The default value is 1.
        alpha : float, optional
            A parameter taking values between [0,1], determining the rate of change of the threshold epsilon. The
            default value is 0.95.
        covFactor : float, optional
            Scaling parameter of the covariance matrix. The default value is 2.
        resample : float, optional
            Threshold on the effective sample size below which the population is resampled.
            The default value is None, in which case n_samples * 0.5 is used.
full_output: integer, optional
If full_output==1, intermediate results are included in output journal.
The default value is 0, meaning the intermediate results are not saved.
Returns
-------
abcpy.output.Journal
A journal containing simulation results, metadata and optionally intermediate results.
"""
self.sample_from_prior(rng=self.rng)
self.accepted_parameters_manager.broadcast(self.backend, observations)
self.n_samples = n_samples
self.n_samples_per_param = n_samples_per_param
if(journal_file is None):
journal = Journal(full_output)
journal.configuration["type_model"] = [type(model).__name__ for model in self.model]
journal.configuration["type_dist_func"] = type(self.distance).__name__
journal.configuration["n_samples"] = self.n_samples
journal.configuration["n_samples_per_param"] = self.n_samples_per_param
journal.configuration["steps"] = steps
else:
journal = Journal.fromFile(journal_file)
accepted_parameters = None
accepted_weights = None
accepted_cov_mats = None
accepted_y_sim = None
        # Define the resample parameter
        if resample is None:
            resample = n_samples * 0.5
# Define epsilon_init
epsilon = [10000]
# main SMC ABC algorithm
# print("INFO: Starting SMCABC iterations.")
for aStep in range(0, steps):
if(aStep==0 and journal_file is not None):
accepted_parameters=journal.parameters[-1]
accepted_weights=journal.weights[-1]
accepted_y_sim = journal.opt_values[-1]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights)
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
accepted_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
accepted_cov_mats = [covFactor * cov_mat for cov_mat in accepted_cov_mats]
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_cov_mats=accepted_cov_mats)
# Break if epsilon in previous step is less than epsilon_final
if epsilon[-1] <= epsilon_final:
break
# 0: Compute the Epsilon
            if accepted_y_sim is not None:
# Compute epsilon for next step
                fun = lambda epsilon_var: self._compute_epsilon(epsilon_var, epsilon, observations, accepted_y_sim,
                                                                accepted_weights, n_samples, n_samples_per_param, alpha)
epsilon_new = self._bisection(fun, epsilon_final, epsilon[-1], 0.001)
if epsilon_new < epsilon_final:
epsilon_new = epsilon_final
epsilon.append(epsilon_new)
# 1: calculate weights for new parameters
# print("INFO: Calculating weights.")
            if accepted_y_sim is not None:
new_weights = np.zeros(shape=(n_samples), )
for ind1 in range(n_samples):
numerator = 0.0
denominator = 0.0
for ind2 in range(n_samples_per_param):
numerator += (self.distance.distance(observations, [[accepted_y_sim[ind1][0][ind2]]]) < epsilon[-1])
denominator += (
self.distance.distance(observations, [[accepted_y_sim[ind1][0][ind2]]]) < epsilon[-2])
if denominator != 0.0:
new_weights[ind1] = accepted_weights[ind1] * (numerator / denominator)
else:
new_weights[ind1] = 0
new_weights = new_weights / sum(new_weights)
else:
new_weights = np.ones(shape=(n_samples), ) * (1.0 / n_samples)
# 2: Resample
            if accepted_y_sim is not None and pow(sum(pow(new_weights, 2)), -1) < resample:
                print('Resampling')
                # Weighted resampling:
                index_resampled = self.rng.choice(np.arange(n_samples), n_samples, replace=True, p=new_weights)
accepted_parameters = accepted_parameters[index_resampled, :]
new_weights = np.ones(shape=(n_samples), ) * (1.0 / n_samples)
# Update the weights
accepted_weights = new_weights.reshape(len(new_weights), 1)
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights)
if(accepted_y_sim is not None):
kernel_parameters = []
for kernel in self.kernel.kernels:
kernel_parameters.append(
self.accepted_parameters_manager.get_accepted_parameters_bds_values(kernel.models))
self.accepted_parameters_manager.update_kernel_values(self.backend, kernel_parameters=kernel_parameters)
accepted_cov_mats = self.kernel.calculate_cov(self.accepted_parameters_manager)
accepted_cov_mats = [covFactor * cov_mat for cov_mat in accepted_cov_mats]
# 3: Drawing new perturbed samples using MCMC Kernel
# print("DEBUG: Iteration " + str(aStep) + " of SMCABC algorithm.")
seed_arr = self.rng.randint(0, np.iinfo(np.uint32).max, size=n_samples, dtype=np.uint32)
rng_arr = np.array([np.random.RandomState(seed) for seed in seed_arr])
index_arr = np.arange(n_samples)
rng_and_index_arr = np.column_stack((rng_arr, index_arr))
rng_and_index_pds = self.backend.parallelize(rng_and_index_arr)
# print("INFO: Broadcasting parameters.")
self.epsilon = epsilon
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters,
accepted_weights=accepted_weights, accepted_cov_mats=accepted_cov_mats)
self._update_broadcasts(accepted_y_sim)
# calculate resample parameters
# print("INFO: Resampling parameters")
params_and_ysim_pds = self.backend.map(self._accept_parameter, rng_and_index_pds)
params_and_ysim = self.backend.collect(params_and_ysim_pds)
new_parameters, new_y_sim, counter = [list(t) for t in zip(*params_and_ysim)]
new_parameters = np.array(new_parameters)
for count in counter:
self.simulation_counter+=count
# Update the parameters
accepted_parameters = new_parameters
accepted_y_sim = new_y_sim
# print("INFO: Saving configuration to output journal.")
if (full_output == 1 and aStep <= steps - 1) or (full_output == 0 and aStep == steps - 1):
self.accepted_parameters_manager.update_broadcast(self.backend, accepted_parameters=accepted_parameters)
journal.add_parameters(copy.deepcopy(accepted_parameters))
journal.add_weights(copy.deepcopy(accepted_weights))
journal.add_opt_values(copy.deepcopy(accepted_y_sim))
names_and_parameters = self._get_names_and_parameters()
journal.add_user_parameters(names_and_parameters)
journal.number_of_simulations.append(self.simulation_counter)
# Add epsilon_arr to the journal
journal.configuration["epsilon_arr"] = epsilon
return journal
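Step 1 above rescales each particle's weight by the fraction of its simulated data points inside the new threshold relative to the fraction inside the previous one, then renormalizes. A self-contained sketch with illustrative names:

```python
import numpy as np

def update_weights(old_weights, per_particle_distances, eps_new, eps_old):
    # w_i <- w_i * #{d < eps_new} / #{d < eps_old}, then renormalize
    new_weights = np.zeros(len(old_weights))
    for i, dists in enumerate(per_particle_distances):
        num = np.sum(np.asarray(dists) < eps_new)
        den = np.sum(np.asarray(dists) < eps_old)
        new_weights[i] = old_weights[i] * num / den if den > 0 else 0.0
    total = new_weights.sum()
    return new_weights / total if total > 0 else new_weights

w = update_weights(np.array([0.5, 0.5]), [[0.1, 0.4], [0.2, 0.9]], 0.3, 1.0)
```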
def _compute_epsilon(self, epsilon_new, epsilon, observations, accepted_y_sim, accepted_weights, n_samples,
n_samples_per_param, alpha):
"""
Parameters
----------
epsilon_new: float
New value for epsilon.
epsilon: float
Current threshold.
observations: numpy.ndarray
Observed data.
accepted_y_sim: numpy.ndarray
Accepted simulated data.
accepted_weights: numpy.ndarray
Accepted weights.
n_samples: integer
Number of samples to generate.
n_samples_per_param: integer
Number of data points in each simulated data set.
        alpha: float
            Parameter determining the rate of change of the threshold epsilon.
Returns
-------
float
Newly computed value for threshold.
"""
RHS = alpha * pow(sum(pow(accepted_weights, 2)), -1)
LHS = np.zeros(shape=(n_samples), )
for ind1 in range(n_samples):
numerator = 0.0
denominator = 0.0
for ind2 in range(n_samples_per_param):
numerator += (self.distance.distance(observations, [[accepted_y_sim[ind1][0][ind2]]]) < epsilon_new)
denominator += (self.distance.distance(observations, [[accepted_y_sim[ind1][0][ind2]]]) < epsilon[-1])
if(denominator==0):
LHS[ind1]=0
else:
LHS[ind1] = accepted_weights[ind1] * (numerator / denominator)
if sum(LHS) == 0:
result = RHS
else:
LHS = LHS / sum(LHS)
LHS = pow(sum(pow(LHS, 2)), -1)
result = RHS - LHS
return (result)
def _bisection(self, func, low, high, tol):
midpoint = (low + high) / 2.0
while (high - low) / 2.0 > tol:
if func(midpoint) == 0:
return midpoint
elif func(low) * func(midpoint) < 0:
high = midpoint
else:
low = midpoint
midpoint = (low + high) / 2.0
return midpoint
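A quick stand-alone check of the `_bisection` helper above (copied here without the class wrapper so it runs on its own); it assumes `func` changes sign between `low` and `high`:

```python
def bisection(func, low, high, tol):
    # same logic as the _bisection method above, without the class wrapper
    midpoint = (low + high) / 2.0
    while (high - low) / 2.0 > tol:
        if func(midpoint) == 0:
            return midpoint
        elif func(low) * func(midpoint) < 0:
            high = midpoint
        else:
            low = midpoint
        midpoint = (low + high) / 2.0
    return midpoint

root = bisection(lambda x: x ** 2 - 2.0, 0.0, 2.0, 1e-6)  # approximates sqrt(2)
```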
def _update_broadcasts(self, accepted_y_sim):
        def destroy(bc):
            if bc is not None:
                bc.unpersist()
                # bc.destroy()
        if accepted_y_sim is not None:
self.accepted_y_sim_bds = self.backend.broadcast(accepted_y_sim)
# define helper functions for map step
def _accept_parameter(self, rng_and_index):
"""
Samples a single model parameter and simulate from it until
distance between simulated outcome and the observation is
smaller than epsilon.
Parameters
----------
seed_and_index: numpy.ndarray
2 dimensional array. The first entry specifies the initial seed for the random number generator.
The second entry defines the index in the data set.
Returns
-------
Tuple
The first entry of the tuple is the accepted parameters. The second entry is the simulated data set.
"""
rng = rng_and_index[0]
index = rng_and_index[1]
rng.seed(rng.randint(np.iinfo(np.uint32).max, dtype=np.uint32))
mapping_for_kernels, garbage_index = self.accepted_parameters_manager.get_mapping(
self.accepted_parameters_manager.model)
counter=0
# print("on seed " + str(seed) + " distance: " + str(distance) + " epsilon: " + str(self.epsilon))
        if self.accepted_parameters_manager.accepted_parameters_bds is None:
self.sample_from_prior(rng=rng)
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
else:
if self.accepted_parameters_manager.accepted_weights_bds.value()[index] > 0:
theta = np.array(self.accepted_parameters_manager.accepted_parameters_bds.value()[index]).reshape(-1,)
while True:
perturbation_output = self.perturb(index, rng=rng)
if perturbation_output[0] and self.pdf_of_prior(self.model, perturbation_output[1]) != 0:
break
y_sim = self.simulate(self.n_samples_per_param, rng=rng)
counter+=1
y_sim_old = self.accepted_y_sim_bds.value()[index]
## Calculate acceptance probability:
numerator = 0.0
denominator = 0.0
for ind in range(self.n_samples_per_param):
numerator += (self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), [[y_sim[0][ind]]]) < self.epsilon[-1])
denominator += (self.distance.distance(self.accepted_parameters_manager.observations_bds.value(), [[y_sim_old[0][ind]]]) < self.epsilon[-1])
if denominator == 0:
ratio_data_epsilon = 1
else:
ratio_data_epsilon = numerator / denominator
ratio_prior_prob = self.pdf_of_prior(self.model, perturbation_output[1]) / self.pdf_of_prior(self.model, theta)
kernel_numerator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, index, theta)
kernel_denominator = self.kernel.pdf(mapping_for_kernels, self.accepted_parameters_manager, index, perturbation_output[1])
ratio_likelihood_prob = kernel_numerator / kernel_denominator
acceptance_prob = min(1, ratio_data_epsilon * ratio_prior_prob * ratio_likelihood_prob)
if rng.binomial(1, acceptance_prob) == 1:
self.set_parameters(perturbation_output[1])
else:
self.set_parameters(theta)
y_sim = self.accepted_y_sim_bds.value()[index]
else:
self.set_parameters(self.accepted_parameters_manager.accepted_parameters_bds.value()[index])
y_sim = self.accepted_y_sim_bds.value()[index]
return (self.get_parameters(), y_sim, counter)
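The resampling condition in `sample` compares `pow(sum(pow(new_weights, 2)), -1)` against the `resample` threshold; that quantity is the effective sample size of the weighted population. A small stand-alone sketch (illustrative helper, not abcpy API):

```python
import numpy as np

def effective_sample_size(weights):
    # ESS = 1 / sum(w_i^2) for normalized weights; it equals n when the
    # weights are uniform and degrades towards 1 as they concentrate.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

uniform = effective_sample_size([0.25, 0.25, 0.25, 0.25])   # == 4.0
degenerate = effective_sample_size([1.0, 0.0, 0.0, 0.0])    # == 1.0
```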
0da1b5d831facb027fac2902eaf43d708161e40b | 103 | py | Python | budget_rest_app/apps.py | joyliao07/budget_tool | a20974f47d5bfa8ef2ef285f57c7e1aafde42f29 | [
"MIT"
] | null | null | null | budget_rest_app/apps.py | joyliao07/budget_tool | a20974f47d5bfa8ef2ef285f57c7e1aafde42f29 | [
"MIT"
] | 6 | 2019-01-22T03:54:53.000Z | 2019-01-25T04:49:18.000Z | budget_rest_app/apps.py | joyliao07/budget_tool | a20974f47d5bfa8ef2ef285f57c7e1aafde42f29 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class BudgetRestAppConfig(AppConfig):
    name = 'budget_rest_app'
| 17.166667 | 37 | 0.786408 | 12 | 103 | 6.583333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145631 | 103 | 5 | 38 | 20.6 | 0.897727 | 0 | 0 | 0 | 0 | 0 | 0.145631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
0da89674e31c5ae6130187878cec4367e2c011f7 | 975 | py | Python | test_python_toolbox/test_introspection_tools/test_get_default_args_dict.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 119 | 2015-02-05T17:59:47.000Z | 2022-02-21T22:43:40.000Z | test_python_toolbox/test_introspection_tools/test_get_default_args_dict.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 4 | 2019-04-24T14:01:14.000Z | 2020-05-21T12:03:29.000Z | test_python_toolbox/test_introspection_tools/test_get_default_args_dict.py | hboshnak/python_toolbox | cb9ef64b48f1d03275484d707dc5079b6701ad0c | [
"MIT"
] | 14 | 2015-03-30T06:30:42.000Z | 2021-12-24T23:45:11.000Z | # Copyright 2009-2017 Ram Rachum.
# This program is distributed under the MIT license.
'''Testing for `python_toolbox.introspection_tools.get_default_args_dict`.'''
from __future__ import generator_stop
from python_toolbox.introspection_tools import get_default_args_dict
from python_toolbox.nifty_collections import OrderedDict
def test():
    '''Test the basic workings of `get_default_args_dict`.'''
    def f(a, b, c=3, d=4):
        pass
    assert get_default_args_dict(f) == \
           OrderedDict((('c', 3), ('d', 4)))

def test_generator():
    '''Test `get_default_args_dict` on a generator function.'''
    def f(a, meow='frr', d={}):
        yield None
    assert get_default_args_dict(f) == \
           OrderedDict((('meow', 'frr'), ('d', {})))

def test_empty():
    '''Test `get_default_args_dict` on a function with no defaultful args.'''
    def f(a, b, c, *args, **kwargs):
        pass
    assert get_default_args_dict(f) == \
           OrderedDict()
| 26.351351 | 77 | 0.665641 | 136 | 975 | 4.5 | 0.411765 | 0.130719 | 0.183007 | 0.235294 | 0.366013 | 0.271242 | 0.271242 | 0.130719 | 0 | 0 | 0 | 0.015326 | 0.196923 | 975 | 36 | 78 | 27.083333 | 0.766284 | 0.337436 | 0 | 0.277778 | 0 | 0 | 0.020833 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.333333 | false | 0.111111 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 4 |
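The helper under test is imported from `python_toolbox.introspection_tools`; a rough standard-library equivalent — an assumption about its behavior inferred only from the assertions above, not the project's actual implementation — can be built on `inspect.signature`:

```python
import inspect
from collections import OrderedDict

def get_default_args_dict(function):
    """Map each argument name to its default value, in declaration order."""
    defaults = OrderedDict()
    for name, param in inspect.signature(function).parameters.items():
        if param.default is not inspect.Parameter.empty:
            defaults[name] = param.default
    return defaults

def f(a, b, c=3, d=4):
    pass

print(get_default_args_dict(f))  # the defaults of c and d, in order
```

`*args` and `**kwargs` parameters report `Parameter.empty` as their default, so they are skipped automatically.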
0db1ba52c345e050e50e475cd98cced39704b9d1 | 1,227 | py | Python | tests/run/pep563_annotations.py | johannes-mueller/cython | b75af38ce5c309cd84c1835220932e53e9a9adb6 | [
"Apache-2.0"
] | 6,663 | 2015-01-02T06:06:43.000Z | 2022-03-31T10:35:02.000Z | tests/run/pep563_annotations.py | johannes-mueller/cython | b75af38ce5c309cd84c1835220932e53e9a9adb6 | [
"Apache-2.0"
] | 3,094 | 2015-01-01T15:44:13.000Z | 2022-03-31T19:49:57.000Z | tests/run/pep563_annotations.py | scoder/cython | ddaaa7b8bfe9885b7bed432cd0a5ab8191d112cd | [
"Apache-2.0"
] | 1,425 | 2015-01-12T07:21:27.000Z | 2022-03-30T14:10:40.000Z | # mode: run
# tag: pep563, pure3.7
from __future__ import annotations
def f(a: 1+2==3, b: list, c: this_cant_evaluate, d: "Hello from inside a string") -> "Return me!":
    """
    The absolute exact strings aren't reproducible according to the PEP,
    so be careful to avoid being too specific

    >>> stypes = (type(""), type(u""))  # Python 2 is a bit awkward here
    >>> eval(f.__annotations__['a'])
    True
    >>> isinstance(f.__annotations__['a'], stypes)
    True
    >>> print(f.__annotations__['b'])
    list
    >>> print(f.__annotations__['c'])
    this_cant_evaluate
    >>> isinstance(eval(f.__annotations__['d']), stypes)
    True
    >>> print(f.__annotations__['return'][1:-1])  # First and last could be either " or '
    Return me!
    >>> f.__annotations__['return'][0] == f.__annotations__['return'][-1]
    True
    """
    pass

def empty_decorator(cls):
    return cls

@empty_decorator
class DecoratedStarship(object):
    """
    >>> sorted(DecoratedStarship.__annotations__.items())
    [('captain', 'str'), ('damage', 'cython.int')]
    """
    captain: str = 'Picard'  # instance variable with default
    damage: cython.int  # instance variable without default
| 29.926829 | 98 | 0.621027 | 149 | 1,227 | 4.805369 | 0.563758 | 0.134078 | 0.071229 | 0.047486 | 0.075419 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013655 | 0.224124 | 1,227 | 40 | 99 | 30.675 | 0.738445 | 0.641402 | 0 | 0 | 0 | 0 | 0.12426 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.111111 | 0.111111 | 0.111111 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 4 |
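The doctests above rely on PEP 563 behavior: with `from __future__ import annotations`, every annotation is stored as an unevaluated string, so even an expression that could not execute (like `this_cant_evaluate`) is legal. A minimal standalone demonstration, independent of the Cython test harness:

```python
from __future__ import annotations

def f(a: 1 + 2 == 3, b: list) -> "Return me!":
    pass

# Under PEP 563 the annotations are plain strings, not evaluated objects.
print(type(f.__annotations__['a']))
print(eval(f.__annotations__['a']))
print(f.__annotations__['b'])
```

Evaluation is deferred until someone explicitly calls `eval` (or `typing.get_type_hints`) on the stored string.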
0dc275c9d1df095e74886382656bd50994cd7580 | 94 | py | Python | boards/admin.py | onerbs/treux | 3ec3a80a49de2860efcc0b1806e9063975c35023 | [
"MIT"
] | null | null | null | boards/admin.py | onerbs/treux | 3ec3a80a49de2860efcc0b1806e9063975c35023 | [
"MIT"
] | null | null | null | boards/admin.py | onerbs/treux | 3ec3a80a49de2860efcc0b1806e9063975c35023 | [
"MIT"
] | null | null | null | from django.contrib import admin
from boards.models import Board
admin.site.register(Board)
| 15.666667 | 32 | 0.819149 | 14 | 94 | 5.5 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117021 | 94 | 5 | 33 | 18.8 | 0.927711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
0df7ab68e99ba0eea962229e2499d43451043c7b | 57 | py | Python | __init__.py | MCTVR/ePyHTML | e1ebcfbafe2c0f1ae8f8d89a891104fe9a65ea2b | [
"MIT"
] | 3 | 2021-02-08T05:15:30.000Z | 2022-01-27T01:09:20.000Z | __init__.py | MCTVR/ePyHTML | e1ebcfbafe2c0f1ae8f8d89a891104fe9a65ea2b | [
"MIT"
] | null | null | null | __init__.py | MCTVR/ePyHTML | e1ebcfbafe2c0f1ae8f8d89a891104fe9a65ea2b | [
"MIT"
] | null | null | null | """
__init__.py for pretending it as a proper library
""" | 19 | 49 | 0.719298 | 9 | 57 | 4.111111 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 57 | 3 | 50 | 19 | 0.770833 | 0.859649 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
21700d5f06578d62c8ef03cdc223e7d1c9ba5dc2 | 85 | py | Python | async_rpc_demo_rq/task_queue.py | ak64th/async_rpc_demo | f55feb66956644160b4478ff2f237e4a237cf05e | [
"MIT"
] | null | null | null | async_rpc_demo_rq/task_queue.py | ak64th/async_rpc_demo | f55feb66956644160b4478ff2f237e4a237cf05e | [
"MIT"
] | null | null | null | async_rpc_demo_rq/task_queue.py | ak64th/async_rpc_demo | f55feb66956644160b4478ff2f237e4a237cf05e | [
"MIT"
] | null | null | null | from redis import Redis
from rq import Queue
task_queue = Queue(connection=Redis())
| 17 | 38 | 0.788235 | 13 | 85 | 5.076923 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141176 | 85 | 4 | 39 | 21.25 | 0.90411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
21dc0c2ead351f02bfb688ef1c5122e6b0c98276 | 28 | py | Python | homeassistant/components/onewire/__init__.py | erogleva/core | 994ae09f69afe772150a698953c0d7386a745de2 | [
"Apache-2.0"
] | 3 | 2017-09-16T23:34:59.000Z | 2021-12-20T11:11:27.000Z | homeassistant/components/onewire/__init__.py | erogleva/core | 994ae09f69afe772150a698953c0d7386a745de2 | [
"Apache-2.0"
] | 52 | 2020-07-14T14:12:26.000Z | 2022-03-31T06:24:02.000Z | homeassistant/components/onewire/__init__.py | erogleva/core | 994ae09f69afe772150a698953c0d7386a745de2 | [
"Apache-2.0"
] | 2 | 2019-08-04T13:39:43.000Z | 2020-02-07T23:01:23.000Z | """The 1-Wire component."""
| 14 | 27 | 0.607143 | 4 | 28 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.107143 | 28 | 1 | 28 | 28 | 0.64 | 0.75 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
df08ca4f0d421c07d7707dcf965760c5b1a475f6 | 2,266 | py | Python | push/migrations/0005_auto_20161003_2350.py | nnsnodnb/djabaas | 788cea2c26e7e2afc9b7ceb6ddc4934560201c7a | [
"Apache-2.0"
] | 3 | 2017-12-27T09:04:33.000Z | 2019-08-29T13:44:53.000Z | push/migrations/0005_auto_20161003_2350.py | nnsnodnb/djabaas | 788cea2c26e7e2afc9b7ceb6ddc4934560201c7a | [
"Apache-2.0"
] | 1 | 2018-07-30T04:42:24.000Z | 2018-07-30T04:42:24.000Z | push/migrations/0005_auto_20161003_2350.py | nnsnodnb/djabaas | 788cea2c26e7e2afc9b7ceb6ddc4934560201c7a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.1 on 2016-10-03 14:50
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('push', '0004_auto_20161003_2346'),
    ]

    operations = [
        migrations.AlterField(
            model_name='developfilemodel',
            name='development_file_name',
            field=models.CharField(blank=True, max_length=100),
        ),
        migrations.AlterField(
            model_name='developfilemodel',
            name='upload_username',
            field=models.CharField(blank=True, max_length=50),
        ),
        migrations.AlterField(
            model_name='devicetokenmodel',
            name='device_token',
            field=models.CharField(blank=True, max_length=100),
        ),
        migrations.AlterField(
            model_name='notificationmodel',
            name='badge',
            field=models.IntegerField(blank=True),
        ),
        migrations.AlterField(
            model_name='notificationmodel',
            name='json',
            field=models.CharField(blank=True, max_length=150),
        ),
        migrations.AlterField(
            model_name='notificationmodel',
            name='message',
            field=models.CharField(blank=True, max_length=500),
        ),
        migrations.AlterField(
            model_name='notificationmodel',
            name='sound',
            field=models.CharField(blank=True, max_length=30),
        ),
        migrations.AlterField(
            model_name='notificationmodel',
            name='title',
            field=models.CharField(blank=True, max_length=200),
        ),
        migrations.AlterField(
            model_name='notificationmodel',
            name='url',
            field=models.CharField(blank=True, max_length=200),
        ),
        migrations.AlterField(
            model_name='productfilemodel',
            name='production_file_name',
            field=models.CharField(blank=True, max_length=100),
        ),
        migrations.AlterField(
            model_name='productfilemodel',
            name='upload_username',
            field=models.CharField(blank=True, max_length=50),
        ),
    ]
| 31.915493 | 63 | 0.579435 | 206 | 2,266 | 6.199029 | 0.305825 | 0.172279 | 0.215348 | 0.249804 | 0.735317 | 0.735317 | 0.466719 | 0.377447 | 0.377447 | 0.377447 | 0 | 0.037676 | 0.308914 | 2,266 | 70 | 64 | 32.371429 | 0.777778 | 0.029568 | 0 | 0.650794 | 1 | 0 | 0.146175 | 0.020036 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.031746 | 0 | 0.079365 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
df0bd7ccb1a51a66607fdce37e0ebe2ae466df63 | 1,784 | py | Python | django-rest/api/models.py | baseplate-admin/django-react | a3a7c90a49d77e4654eee2dff254fc0c3188cf54 | [
"MIT"
] | null | null | null | django-rest/api/models.py | baseplate-admin/django-react | a3a7c90a49d77e4654eee2dff254fc0c3188cf54 | [
"MIT"
] | 1 | 2021-02-09T19:10:05.000Z | 2022-02-09T13:26:16.000Z | django-rest/api/models.py | baseplate-admin/django-react | a3a7c90a49d77e4654eee2dff254fc0c3188cf54 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
# Create your models here.
class Url(models.Model):
    long = models.CharField(max_length=100)
    short = models.CharField(unique=True, max_length=25)
    combinations = models.IntegerField(default=100000)
    time = models.CharField(max_length=25)

    def __str__(self):
        # __str__ must return a string, so the integer pk is converted
        return str(self.id)

class YoutubeDownloader(models.Model):
    title = models.CharField(max_length=200)
    url = models.URLField()
    file_location = models.CharField(max_length=200)
    time = models.CharField(max_length=100)
    short_url = models.CharField(max_length=10)

    def __str__(self):
        return str(self.id)

class Bitrate(models.Model):
    hour = models.CharField(max_length=100)
    minute = models.CharField(max_length=100)
    seconds = models.CharField(max_length=100)
    size = models.CharField(max_length=6)
    episode = models.CharField(max_length=100)
    time = models.CharField(max_length=200, unique=True, default="-")
    bitrate = models.CharField(max_length=100)

    def __str__(self):
        return str(self.id)

class Poll(models.Model):
    question = models.CharField(max_length=200)
    option_1 = models.CharField(max_length=100)
    option_2 = models.CharField(max_length=100)
    option_3 = models.CharField(max_length=100)
    option_4 = models.CharField(max_length=100)
    option_1_count = models.IntegerField(default=0)
    option_2_count = models.IntegerField(default=0)
    option_3_count = models.IntegerField(default=0)
    option_4_count = models.IntegerField(default=0)
    time = models.CharField(max_length=28)

    def __str__(self):
        return self.question

# class IpTable(models.Model):
#     entry_id = models.IntegerField()
#     ip = models.CharField(max_length=12)
| 29.733333 | 69 | 0.723655 | 235 | 1,784 | 5.251064 | 0.259574 | 0.255267 | 0.291734 | 0.388979 | 0.604538 | 0.314425 | 0.06564 | 0 | 0 | 0 | 0 | 0.05 | 0.170404 | 1,784 | 59 | 70 | 30.237288 | 0.783784 | 0.07343 | 0 | 0.175 | 0 | 0 | 0.000607 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.05 | 0.1 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
df1e9f3b461042dff4c402ac73db64d51ddaca56 | 200 | py | Python | test/unit/test-cases/operations/op-output-name.py | JSTransformationBenchmarks/deepforge | 422f47c9440112a3f1a02745ac30646e1b0e681b | [
"Apache-2.0"
] | 726 | 2016-12-06T04:32:45.000Z | 2022-02-22T04:30:17.000Z | test/unit/test-cases/operations/op-output-name.py | JSTransformationBenchmarks/deepforge | 422f47c9440112a3f1a02745ac30646e1b0e681b | [
"Apache-2.0"
] | 685 | 2016-12-06T20:44:00.000Z | 2022-01-26T18:41:31.000Z | test/unit/test-cases/operations/op-output-name.py | JSTransformationBenchmarks/deepforge | 422f47c9440112a3f1a02745ac30646e1b0e681b | [
"Apache-2.0"
] | 72 | 2017-01-13T03:20:44.000Z | 2021-04-12T17:51:22.000Z | from operations import Operation
from typing import Tuple
class ExampleOperation(Operation):
    # `self` was missing in the original signature, yet the body stores
    # `self.myOutput`, so the method must be an instance method
    def execute(self, hello, world, count):
        self.myOutput = hello + world
        return self.myOutput
| 22.222222 | 37 | 0.725 | 23 | 200 | 6.304348 | 0.695652 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215 | 200 | 8 | 38 | 25 | 0.923567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
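Since `Operation` is imported from the project's own `operations` module, here is a hedged usage sketch with a minimal stand-in base class; it illustrates why `execute` must take `self` so the result can be stored on the instance:

```python
# Minimal stand-in for the real base class (the real one comes from `operations`).
class Operation:
    pass

class ExampleOperation(Operation):
    def execute(self, hello, world, count):
        # the result is kept on the instance and also returned
        self.myOutput = hello + world
        return self.myOutput

op = ExampleOperation()
print(op.execute("hello ", "world", 1))  # hello world
```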
df5e536f9ae7cec91d27bf8d451bc508fe51aa1a | 255 | py | Python | src/marvinbot/messages/parsers.py | osullivryan/marvin-the-discord-bot | ca07f9bf7229ba1576a9ba6b3b1d2393ab20c90d | [
"MIT"
] | null | null | null | src/marvinbot/messages/parsers.py | osullivryan/marvin-the-discord-bot | ca07f9bf7229ba1576a9ba6b3b1d2393ab20c90d | [
"MIT"
] | null | null | null | src/marvinbot/messages/parsers.py | osullivryan/marvin-the-discord-bot | ca07f9bf7229ba1576a9ba6b3b1d2393ab20c90d | [
"MIT"
] | null | null | null | from typing import Dict, Callable
from discord import Message
from marvinbot.messages.message_parser import flip_a_coin, roll_dice
# TODO: Type this Callable fully.
PARSERS: Dict[str, Callable] = {
"flip a coin": flip_a_coin,
"roll": roll_dice,
} | 28.333333 | 68 | 0.756863 | 38 | 255 | 4.894737 | 0.552632 | 0.080645 | 0.145161 | 0.139785 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 255 | 9 | 69 | 28.333333 | 0.865116 | 0.121569 | 0 | 0 | 0 | 0 | 0.067265 | 0 | 0 | 0 | 0 | 0.111111 | 0 | 1 | 0 | true | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
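A dispatch table like `PARSERS` is typically consulted by scanning an incoming message for a registered phrase. A self-contained sketch of that pattern — the handler signatures are simplified to zero-argument callables here; the real handlers in `message_parser` may take a discord `Message`:

```python
import random
from typing import Callable, Dict, Optional

def flip_a_coin() -> str:
    return random.choice(["heads", "tails"])

def roll_dice() -> str:
    return str(random.randint(1, 6))

PARSERS: Dict[str, Callable[[], str]] = {
    "flip a coin": flip_a_coin,
    "roll": roll_dice,
}

def dispatch(message: str) -> Optional[str]:
    # the first registered phrase found in the message wins
    for phrase, handler in PARSERS.items():
        if phrase in message.lower():
            return handler()
    return None

print(dispatch("please flip a coin"))
```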
df6b442dd0165de606777ece1515ad97053dc03a | 66 | py | Python | tests/integration/__init__.py | dwayne314/ace-scaffold | 312dab194653f5122181746c252fd9712d5058a2 | [
"MIT"
] | null | null | null | tests/integration/__init__.py | dwayne314/ace-scaffold | 312dab194653f5122181746c252fd9712d5058a2 | [
"MIT"
] | null | null | null | tests/integration/__init__.py | dwayne314/ace-scaffold | 312dab194653f5122181746c252fd9712d5058a2 | [
"MIT"
] | null | null | null | """This module contains integration tests for the application."""
| 33 | 65 | 0.772727 | 8 | 66 | 6.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 66 | 1 | 66 | 66 | 0.87931 | 0.893939 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
df73412834079a28ec5e4db30c5b8cdca97f77e2 | 174 | py | Python | openslides/config/exceptions.py | DerPate/OpenSlides | 2733a47d315fec9b8f3cb746fd5f3739be225d65 | [
"MIT"
] | 1 | 2015-03-22T02:07:23.000Z | 2015-03-22T02:07:23.000Z | openslides/config/exceptions.py | frauenknecht/OpenSlides | 6521d6b095bca33dc0c5f09f59067551800ea1e3 | [
"MIT"
] | null | null | null | openslides/config/exceptions.py | frauenknecht/OpenSlides | 6521d6b095bca33dc0c5f09f59067551800ea1e3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from openslides.utils.exceptions import OpenSlidesError
class ConfigError(OpenSlidesError):
    pass
class ConfigNotFound(ConfigError):
    pass
| 14.5 | 55 | 0.741379 | 17 | 174 | 7.588235 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006849 | 0.16092 | 174 | 11 | 56 | 15.818182 | 0.876712 | 0.12069 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.4 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 4 |
df8680e46c69c06d2fd022c5ac2397694b4a0916 | 56 | py | Python | skompiler/toskast/__init__.py | darleybarreto/SKompiler | 9a6c0d1f7134cb98126adc7b4528a4dc08ddd064 | [
"MIT"
] | 112 | 2018-12-12T03:54:28.000Z | 2022-01-14T14:18:42.000Z | skompiler/toskast/__init__.py | darleybarreto/SKompiler | 9a6c0d1f7134cb98126adc7b4528a4dc08ddd064 | [
"MIT"
] | 10 | 2018-12-20T17:21:09.000Z | 2022-03-24T19:31:55.000Z | skompiler/toskast/__init__.py | darleybarreto/SKompiler | 9a6c0d1f7134cb98126adc7b4528a4dc08ddd064 | [
"MIT"
] | 7 | 2019-02-05T05:20:05.000Z | 2021-03-21T16:31:38.000Z | """
Converters from other representations TO SKAST.
"""
| 14 | 47 | 0.732143 | 6 | 56 | 6.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 3 | 48 | 18.666667 | 0.854167 | 0.839286 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
df921382a18265829910ba2098ecb05167232e00 | 205 | py | Python | py/cidoc_crm_types/properties/p113i_was_removed_by.py | minorg/cidoc-crm-types | 9018bdbf0658e4d28a87bc94543e467be45d8aa5 | [
"Apache-2.0"
] | null | null | null | py/cidoc_crm_types/properties/p113i_was_removed_by.py | minorg/cidoc-crm-types | 9018bdbf0658e4d28a87bc94543e467be45d8aa5 | [
"Apache-2.0"
] | null | null | null | py/cidoc_crm_types/properties/p113i_was_removed_by.py | minorg/cidoc-crm-types | 9018bdbf0658e4d28a87bc94543e467be45d8aa5 | [
"Apache-2.0"
] | null | null | null | from .p12i_was_present_at import P12iWasPresentAt
from dataclasses import dataclass
@dataclass
class P113iWasRemovedBy(P12iWasPresentAt):
URI = "http://erlangen-crm.org/current/P113i_was_removed_by"
| 25.625 | 64 | 0.829268 | 25 | 205 | 6.56 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064865 | 0.097561 | 205 | 7 | 65 | 29.285714 | 0.821622 | 0 | 0 | 0 | 0 | 0 | 0.253659 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
10c1e9d169bc06b85ae1624f90f7a2ea2f78d376 | 74 | py | Python | problem/01000~09999/01009/1009.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 1 | 2019-04-19T16:37:44.000Z | 2019-04-19T16:37:44.000Z | problem/01000~09999/01009/1009.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 1 | 2019-04-20T11:42:44.000Z | 2019-04-20T11:42:44.000Z | problem/01000~09999/01009/1009.py3.py | njw1204/BOJ-AC | 1de41685725ae4657a7ff94e413febd97a888567 | [
"MIT"
] | 3 | 2019-04-19T16:37:47.000Z | 2021-10-25T00:45:00.000Z | for _ in range(int(input())):print(pow(*map(int,input().split()),10)or 10) | 74 | 74 | 0.662162 | 14 | 74 | 3.428571 | 0.785714 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.054054 | 74 | 1 | 74 | 74 | 0.628571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
10de1012d053db04b4c07f1317b75c15092d03ee | 1,808 | py | Python | app/exchanges/tests/test_models.py | iyanuashiri/exchange-api | 86f7a4e9fb17f71888e6854510618876d1010c19 | [
"MIT"
] | null | null | null | app/exchanges/tests/test_models.py | iyanuashiri/exchange-api | 86f7a4e9fb17f71888e6854510618876d1010c19 | [
"MIT"
] | null | null | null | app/exchanges/tests/test_models.py | iyanuashiri/exchange-api | 86f7a4e9fb17f71888e6854510618876d1010c19 | [
"MIT"
] | null | null | null | import pytest
@pytest.mark.django_db
def test_exchange_model(exchange):
    assert exchange.from_currency_code == 'BTC'
    assert exchange.from_currency_name == 'Bitcoin'
    assert exchange.to_currency_code == 'USD'
    assert exchange.to_currency_name == 'United States Dollar'
    assert exchange.exchange_rate == '35894.79000000'
    assert exchange.last_refreshed == '2021-06-12T13:28:01Z'
    assert exchange.timezone == 'UTC'
    assert exchange.bid_price == '35894.79000000'
    assert exchange.ask_price == '35894.80000000'
@pytest.mark.django_db
def test_exchange_field_label(exchange):
    assert exchange._meta.get_field('from_currency_code').verbose_name == 'from currency code'
    assert exchange._meta.get_field('from_currency_name').verbose_name == 'from currency name'
    assert exchange._meta.get_field('to_currency_code').verbose_name == 'to currency code'
    assert exchange._meta.get_field('to_currency_name').verbose_name == 'to currency name'
    assert exchange._meta.get_field('exchange_rate').verbose_name == 'exchange rate'
    assert exchange._meta.get_field('last_refreshed').verbose_name == 'last refreshed'
    assert exchange._meta.get_field('timezone').verbose_name == 'timezone'
    assert exchange._meta.get_field('bid_price').verbose_name == 'bid price'
    assert exchange._meta.get_field('ask_price').verbose_name == 'ask price'
@pytest.mark.django_db
def test_exchange_field_attributes(exchange):
    assert exchange._meta.get_field('from_currency_code').max_length == 10
    assert exchange._meta.get_field('from_currency_name').max_length == 100
    assert exchange._meta.get_field('to_currency_code').max_length == 10
    assert exchange._meta.get_field('to_currency_name').max_length == 100
    assert exchange._meta.get_field('timezone').max_length == 10 | 50.222222 | 94 | 0.763274 | 246 | 1,808 | 5.264228 | 0.191057 | 0.248649 | 0.194595 | 0.227027 | 0.555212 | 0.494981 | 0.462548 | 0.379923 | 0.220849 | 0.152896 | 0 | 0.040778 | 0.118363 | 1,808 | 36 | 95 | 50.222222 | 0.771644 | 0 | 0 | 0.1 | 0 | 0 | 0.229961 | 0 | 0 | 0 | 0 | 0 | 0.766667 | 1 | 0.1 | false | 0 | 0.033333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
10e3c6638501b6803f777e40b13c5a798cce29c4 | 989 | py | Python | openprocurement/auctions/core/tests/bidder.py | EBRD-ProzorroSale/openprocurement.auctions.core | 52bd59f193f25e4997612fca0f87291decf06966 | [
"Apache-2.0"
] | 2 | 2016-09-15T20:17:43.000Z | 2017-01-08T03:32:43.000Z | openprocurement/auctions/core/tests/bidder.py | EBRD-ProzorroSale/openprocurement.auctions.core | 52bd59f193f25e4997612fca0f87291decf06966 | [
"Apache-2.0"
] | 183 | 2017-12-21T11:04:37.000Z | 2019-03-27T08:14:34.000Z | openprocurement/auctions/core/tests/bidder.py | EBRD-ProzorroSale/openprocurement.auctions.core | 52bd59f193f25e4997612fca0f87291decf06966 | [
"Apache-2.0"
] | 12 | 2016-09-05T12:07:48.000Z | 2019-02-26T09:24:17.000Z | from openprocurement.auctions.core.tests.base import snitch
from openprocurement.auctions.core.tests.blanks.bidder_blanks import (
# AuctionBidderDocumentResourceTestMixin
not_found,
create_auction_bidder_document,
put_auction_bidder_document,
patch_auction_bidder_document,
# AuctionBidderDocumentWithDSResourceTest
create_auction_bidder_document_json,
put_auction_bidder_document_json
)
class AuctionBidderDocumentResourceTestMixin(object):
test_not_found = snitch(not_found)
test_create_auction_bidder_document = snitch(create_auction_bidder_document)
test_put_auction_bidder_document = snitch(put_auction_bidder_document)
test_patch_auction_bidder_document = snitch(patch_auction_bidder_document)
class AuctionBidderDocumentWithDSResourceTestMixin(object):
test_create_auction_bidder_document_json = snitch(create_auction_bidder_document_json)
test_put_auction_bidder_document_json = snitch(put_auction_bidder_document_json)
| 41.208333 | 90 | 0.85541 | 109 | 989 | 7.201835 | 0.220183 | 0.248408 | 0.401274 | 0.206369 | 0.498089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102123 | 989 | 23 | 91 | 43 | 0.884009 | 0.078868 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.588235 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
8018dd41ac3ab65cdfc99a8944d5b2cb0b108b3a | 1,734 | py | Python | SMM2/keytables.py | MarioPossamato/MariOver | 088adc0c0c9350b5a426093d2efbfce7edf28b24 | [
"MIT"
] | null | null | null | SMM2/keytables.py | MarioPossamato/MariOver | 088adc0c0c9350b5a426093d2efbfce7edf28b24 | [
"MIT"
] | null | null | null | SMM2/keytables.py | MarioPossamato/MariOver | 088adc0c0c9350b5a426093d2efbfce7edf28b24 | [
"MIT"
] | null | null | null | bcd: tuple = (
0x7ab1c9d2, 0xca750936, 0x3003e59c, 0xf261014b,
0x2e25160a, 0xed614811, 0xf1ac6240, 0xd59272cd,
0xf38549bf, 0x6cf5b327, 0xda4db82a, 0x820c435a,
0xc95609ba, 0x19be08b0, 0x738e2b81, 0xed3c349a,
0x045275d1, 0xe0a73635, 0x1debf4da, 0x9924b0de,
0x6a1fc367, 0x71970467, 0xfc55abeb, 0x368d7489,
0x0cc97d1d, 0x17cc441e, 0x3528d152, 0xd0129b53,
0xe12a69e9, 0x13d1bdb7, 0x32eaa9ed, 0x42f41d1b,
0xaea5f51f, 0x42c5d23c, 0x7cc742ed, 0x723ba5f9,
0xde5b99e3, 0x2c0055a4, 0xc38807b4, 0x4c099b61,
0xc4e4568e, 0x8c29c901, 0xe13b34ac, 0xe7c3f212,
0xb67ef941, 0x08038965, 0x8afd1e6a, 0x8e5341a3,
0xa4c61107, 0xfbaf1418, 0x9b05ef64, 0x3c91734e,
0x82ec6646, 0xfb19f33e, 0x3bde6fe2, 0x17a84cca,
0xccdf0ce9, 0x50e4135c, 0xff2658b2, 0x3780f156,
0x7d8f5d68, 0x517cbed1, 0x1fcddf0d, 0x77a58c94
)
btl: tuple = (
0x39b399d2, 0xfae40b38, 0x851bc213, 0x8cb4e3d9,
0x7ed1c46a, 0xe8050462, 0xd8d24f76, 0xb52886fc,
0x67890bf0, 0xf5329cb0, 0xd597fb28, 0x2b8ee0ea,
0x47574c51, 0x0f7569d9, 0xcf1163ae, 0xe4a153bf,
0xd1fae468, 0xd4c64738, 0x360106f5, 0xdd7eb113,
0xc296f3e2, 0x2c58f258, 0x79b554e1, 0x85df9d06,
0xaa307330, 0x01410f69, 0xb2f2c573, 0x82b93eb1,
0xf351a11c, 0x63098693, 0x885b5da5, 0x8872a8ed,
0xacd9cb13, 0xed7fbcad, 0xe6a41ec2, 0x5f44e79f,
0x8346f5b5, 0x389fe6ed, 0x507124b5, 0xe9b23eaa,
0x577113f0, 0xa95ed917, 0x2f62d158, 0x47843f86,
0xc65637d0, 0x2f272052, 0xba4a4cc4, 0xb5f146f6,
0x501b87a7, 0x51fc3a93, 0x6ede3f02, 0x3d265728,
0x9b809440, 0x75b89229, 0xf6a280cc, 0x8537fa68,
0x5b5ed19a, 0x6fc05bb6, 0xf4ef5261, 0xaa1b7d4f,
0xfcb26110, 0x00ad3d74, 0xc0e73a4b, 0xf132e7c7
)
| 45.631579 | 52 | 0.747405 | 132 | 1,734 | 9.818182 | 0.992424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.544056 | 0.175317 | 1,734 | 37 | 53 | 46.864865 | 0.362238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.754272 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
802814f97bc9e9a6aa8af76f6f3eb7198b432e9f | 662 | py | Python | cisco_firepower_management_center/setup.py | emartin-merrill-r7/insightconnect-plugins | a589745dbcc9f01d3e601431e77ab7221a84c117 | [
"MIT"
] | 1 | 2020-03-18T09:14:55.000Z | 2020-03-18T09:14:55.000Z | cisco_firepower_management_center/setup.py | OSSSP/insightconnect-plugins | 846758dab745170cf1a8c146211a8bea9592e8ff | [
"MIT"
] | null | null | null | cisco_firepower_management_center/setup.py | OSSSP/insightconnect-plugins | 846758dab745170cf1a8c146211a8bea9592e8ff | [
"MIT"
] | null | null | null | # GENERATED BY KOMAND SDK - DO NOT EDIT
from setuptools import setup, find_packages
setup(name='cisco_firepower_management_center-rapid7-plugin',
      version='1.0.1',
      description='This plugin utilizes Cisco Firepower Management Center to create a new block URL policy. Cisco Firepower Management Center is an administrative nerve center for managing critical Cisco network security solutions',
      author='rapid7',
      author_email='',
      url='',
      packages=find_packages(),
      install_requires=['komand'],  # Add third-party dependencies to requirements.txt, not here!
      scripts=['bin/icon_cisco_firepower_management_center']
)
| 44.133333 | 231 | 0.740181 | 83 | 662 | 5.771084 | 0.686747 | 0.11691 | 0.200418 | 0.250522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009191 | 0.178248 | 662 | 14 | 232 | 47.285714 | 0.871324 | 0.146526 | 0 | 0 | 1 | 0.090909 | 0.562278 | 0.158363 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
8051a9bbaed3357d74f84a1191717f9315aaa6d5 | 395 | py | Python | ui-server/model/rover.py | TomZurales/northernpike | f7d878ad7681456e95e3c480b31bfcb2358aa2d9 | [
"MIT"
] | null | null | null | ui-server/model/rover.py | TomZurales/northernpike | f7d878ad7681456e95e3c480b31bfcb2358aa2d9 | [
"MIT"
] | null | null | null | ui-server/model/rover.py | TomZurales/northernpike | f7d878ad7681456e95e3c480b31bfcb2358aa2d9 | [
"MIT"
] | null | null | null | import csv
class roverState:
    # sensor values shared by all instances (bare assignments in the original)
    x = 10
    y = 15
    z = 20
    d = 45

    def __init__(self):
        # the original passed the undefined name `testfile1.csv`;
        # open the file explicitly before handing it to csv.writer
        self.writer = csv.writer(open('testfile1.csv', 'w', newline=''), dialect='excel')

    def getRoverGyro(self):
        return "Gyro values x: %d y: %d z: %d" % (self.x, self.y, self.z)

    def getRoverCompass(self):
        return "Direction: %d" % (self.d)

    def readGyroData(self):
        return None

    def writedata(self):
        # writerow takes one iterable of values, not separate arguments
        self.writer.writerow([self.x, self.y, self.z, self.d])
| 14.107143 | 67 | 0.660759 | 64 | 395 | 4.015625 | 0.453125 | 0.116732 | 0.108949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02795 | 0.18481 | 395 | 27 | 68 | 14.62963 | 0.770186 | 0 | 0 | 0 | 0 | 0 | 0.119898 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0.0625 | 0.0625 | 0.1875 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 4 |
805979b5d95607a03363ea7a5c64d1b54a61c3d9 | 108 | py | Python | test.py | AlexandreOuellet/halite-bot | 3455f9b57d52aaee542ee0dad45b3b72314ba139 | [
"MIT"
] | 1 | 2017-10-26T20:13:01.000Z | 2017-10-26T20:13:01.000Z | test.py | AlexandreOuellet/halite-bot | 3455f9b57d52aaee542ee0dad45b3b72314ba139 | [
"MIT"
] | null | null | null | test.py | AlexandreOuellet/halite-bot | 3455f9b57d52aaee542ee0dad45b3b72314ba139 | [
"MIT"
] | null | null | null | import operator
x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0}
sorted_x = sorted(x.items(), key=operator.itemgetter(1))
| 27 | 56 | 0.62037 | 22 | 108 | 3 | 0.545455 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122222 | 0.166667 | 108 | 3 | 57 | 36 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
33c30d50857777710cdb06686172e8d30524a76c | 43 | py | Python | veils/_async_dummy.py | monomonedula/veil | 27615413f477b490580e9e22b2a8748a4b763696 | [
"MIT"
] | 2 | 2021-01-17T15:50:25.000Z | 2021-01-19T11:23:55.000Z | veils/_async_dummy.py | monomonedula/veils | 27615413f477b490580e9e22b2a8748a4b763696 | [
"MIT"
] | null | null | null | veils/_async_dummy.py | monomonedula/veils | 27615413f477b490580e9e22b2a8748a4b763696 | [
"MIT"
] | null | null | null | async def async_dummy(val):
    return val
| 14.333333 | 27 | 0.72093 | 7 | 43 | 4.285714 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209302 | 43 | 2 | 28 | 21.5 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
33dbaa64cb87a2c3d440e080d6a0289e1a3fecf6 | 214 | py | Python | test/mitmproxy/data/scripts/a.py | yatere/mitmproxy | 5c0161886ae03dcd3b4cfc726c7a53408cdb5d71 | [
"MIT"
] | 1 | 2021-01-10T15:48:40.000Z | 2021-01-10T15:48:40.000Z | test/mitmproxy/data/scripts/a.py | yatere/mitmproxy | 5c0161886ae03dcd3b4cfc726c7a53408cdb5d71 | [
"MIT"
] | null | null | null | test/mitmproxy/data/scripts/a.py | yatere/mitmproxy | 5c0161886ae03dcd3b4cfc726c7a53408cdb5d71 | [
"MIT"
] | null | null | null | import sys
from a_helper import parser
var = 0


def start(ctx):
    global var
    var = parser.parse_args(sys.argv[1:]).var


def here(ctx):
    global var
    var += 1
    return var


def errargs():
    pass
| 10.190476 | 45 | 0.621495 | 34 | 214 | 3.852941 | 0.588235 | 0.137405 | 0.183206 | 0.229008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019481 | 0.280374 | 214 | 20 | 46 | 10.7 | 0.831169 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.083333 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 4 |
33fddb72cf9725dc4fdf60ea6945293891929a0f | 103 | py | Python | CMXls.py | pengphei/cinemaman | f2de21e9034f7dc07f25980a653d8af82342136f | [
"Unlicense"
] | null | null | null | CMXls.py | pengphei/cinemaman | f2de21e9034f7dc07f25980a653d8af82342136f | [
"Unlicense"
] | null | null | null | CMXls.py | pengphei/cinemaman | f2de21e9034f7dc07f25980a653d8af82342136f | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
from xls import *


class CMXls():
    def __init__(self):
        pass
| 9.363636 | 23 | 0.504854 | 12 | 103 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014706 | 0.339806 | 103 | 10 | 24 | 10.3 | 0.691176 | 0.203884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 4 |
33fdf65c8ead9798d87bee30902209348283c9af | 77 | py | Python | dcinside_cleaner/__main__.py | exportfs/dcinside-cleaner | 2169d85fc08a29ee52c6174567dbd77629ef05b7 | [
"MIT"
] | 15 | 2020-11-30T01:26:39.000Z | 2022-03-26T15:11:01.000Z | dcinside_cleaner/__main__.py | exportfs/dcinside-cleaner | 2169d85fc08a29ee52c6174567dbd77629ef05b7 | [
"MIT"
] | null | null | null | dcinside_cleaner/__main__.py | exportfs/dcinside-cleaner | 2169d85fc08a29ee52c6174567dbd77629ef05b7 | [
"MIT"
] | 10 | 2021-01-26T12:32:23.000Z | 2022-03-05T15:54:12.000Z | from cleaner_console import Console
if __name__ == '__main__':
    Console() | 19.25 | 35 | 0.74026 | 9 | 77 | 5.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168831 | 77 | 4 | 36 | 19.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
1d463fadcbdab0b9f46778b6508cb758f53551e9 | 70 | py | Python | karton/config_extractor/__main__.py | kscieslinski/karton-config-extractor | c0eb0bddeed2b217abe517ca1b8a20e679506dba | [
"BSD-3-Clause"
] | 7 | 2020-12-31T00:53:18.000Z | 2021-12-02T20:36:53.000Z | karton/config_extractor/__main__.py | kscieslinski/karton-config-extractor | c0eb0bddeed2b217abe517ca1b8a20e679506dba | [
"BSD-3-Clause"
] | 11 | 2021-08-22T01:15:23.000Z | 2022-02-26T22:08:40.000Z | karton/config_extractor/__main__.py | kscieslinski/karton-config-extractor | c0eb0bddeed2b217abe517ca1b8a20e679506dba | [
"BSD-3-Clause"
] | 3 | 2021-04-02T09:50:48.000Z | 2021-06-14T11:46:53.000Z | from .config_extractor import ConfigExtractor
ConfigExtractor.main()
| 17.5 | 45 | 0.857143 | 7 | 70 | 8.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 70 | 3 | 46 | 23.333333 | 0.921875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
1d5652ad952fb887517475a52da77317fba78e69 | 184 | py | Python | modules/s3/pyvttbl/stats/stats_test.py | unimauro/eden | b739d334e6828d0db14b3790f2f5e2666fc83576 | [
"MIT"
] | 1 | 2019-08-20T16:32:33.000Z | 2019-08-20T16:32:33.000Z | modules/s3/pyvttbl/stats/stats_test.py | andygimma/eden | 716d5e11ec0030493b582fa67d6f1c35de0af50d | [
"MIT"
] | null | null | null | modules/s3/pyvttbl/stats/stats_test.py | andygimma/eden | 716d5e11ec0030493b582fa67d6f1c35de0af50d | [
"MIT"
] | null | null | null | from stats import ttest_ind, tinv
a = [62,96,26,121,106,59,50,122,114,89,55,36]
b = [109,117,73,80,113,156,24,73,121,125,37,69]
t,prob = ttest_ind(a,b,1)
print(tinv(.05,10))
| 20.444444 | 48 | 0.641304 | 43 | 184 | 2.697674 | 0.837209 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.401274 | 0.146739 | 184 | 8 | 49 | 23 | 0.33758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
1d71ad92afc1574bbcd13e891515a39fff327e2a | 126 | py | Python | tests/handlers.py | Tijani-Dia/yrouter-websockets | ea5ef8ed6a2143945c8f0736313197dbd6c77896 | [
"BSD-3-Clause"
] | 3 | 2022-01-15T23:36:43.000Z | 2022-01-18T09:06:18.000Z | tests/handlers.py | Tijani-Dia/yrouter-websockets | ea5ef8ed6a2143945c8f0736313197dbd6c77896 | [
"BSD-3-Clause"
] | null | null | null | tests/handlers.py | Tijani-Dia/yrouter-websockets | ea5ef8ed6a2143945c8f0736313197dbd6c77896 | [
"BSD-3-Clause"
] | null | null | null | async def home(ws):
    await ws.send("In home")


async def hello_user(ws, username):
    await ws.send(f"Hello {username}")
| 18 | 38 | 0.666667 | 21 | 126 | 3.952381 | 0.52381 | 0.192771 | 0.26506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18254 | 126 | 6 | 39 | 21 | 0.805825 | 0 | 0 | 0 | 0 | 0 | 0.18254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
d528d5015d2766a99b7ea62b581ac87fcbfc0dbb | 337 | py | Python | transmit.py | nesbit/BerryStone | 3c7b2e1f4789ad7590b0e208cb59df328c23256f | [
"MIT"
] | null | null | null | transmit.py | nesbit/BerryStone | 3c7b2e1f4789ad7590b0e208cb59df328c23256f | [
"MIT"
] | null | null | null | transmit.py | nesbit/BerryStone | 3c7b2e1f4789ad7590b0e208cb59df328c23256f | [
"MIT"
] | null | null | null | import os
message = "17 02 01 1a 03 03 aa fe 0f 16 aa fe 10 ed 03 64 65 6d 70 73 65 79 73 07 00 00 00 00 00 00 00 00"
#Stop advertising
os.system("sudo hcitool -i hci0 cmd 0x08 0x000a 00")
#Set message
os.system("sudo hcitool -i hci0 cmd 0x08 0x0008 " + message)
#Resume advertising
os.system("sudo hcitool -i hci0 cmd 0x08 0x000a 01")
| 37.444444 | 107 | 0.721068 | 71 | 337 | 3.422535 | 0.492958 | 0.115226 | 0.148148 | 0.164609 | 0.588477 | 0.588477 | 0.588477 | 0.522634 | 0.395062 | 0.395062 | 0 | 0.298507 | 0.204748 | 337 | 8 | 108 | 42.125 | 0.608209 | 0.133531 | 0 | 0 | 0 | 0.2 | 0.726644 | 0 | 0 | 0 | 0.103806 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
d53d91aacd33c04d14c29e96080b07bc24b57c33 | 96 | py | Python | src/config/settings.py | luscafter/bot-telegram | c936020b05923976d203fd33f26facaddddd5013 | [
"MIT"
] | null | null | null | src/config/settings.py | luscafter/bot-telegram | c936020b05923976d203fd33f26facaddddd5013 | [
"MIT"
] | null | null | null | src/config/settings.py | luscafter/bot-telegram | c936020b05923976d203fd33f26facaddddd5013 | [
"MIT"
] | null | null | null | import os
from dotenv import load_dotenv
load_dotenv()
TOKEN_BOT = os.getenv("TOKEN_BOT") | 16 | 34 | 0.75 | 15 | 96 | 4.533333 | 0.533333 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 96 | 6 | 34 | 16 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0.097826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
d546180a3e7e1fa7749957fea4c1fb1c275a7158 | 565 | py | Python | djaveAPI/currency_field.py | dasmith2/djaveAPI | 6cece89bb945a4c8ace1534cc007626a35af3c38 | [
"MIT"
] | null | null | null | djaveAPI/currency_field.py | dasmith2/djaveAPI | 6cece89bb945a4c8ace1534cc007626a35af3c38 | [
"MIT"
] | null | null | null | djaveAPI/currency_field.py | dasmith2/djaveAPI | 6cece89bb945a4c8ace1534cc007626a35af3c38 | [
"MIT"
] | null | null | null | """ Money is a little tricky. In Django models I store it as a single Money
field. However, in the database that's a decimal field for the amount and a
char field for the currency. In the API, it's a float field for the amount and
a text field for the currency. """
def corresponding_currency_value(field, request_data):
    currency_field_name = corresponding_currency_field_name(field)
    if currency_field_name in request_data:
        return request_data[currency_field_name]


def corresponding_currency_field_name(field):
    return '{}_currency'.format(field.name)
| 37.666667 | 78 | 0.79115 | 92 | 565 | 4.663043 | 0.402174 | 0.125874 | 0.198135 | 0.079254 | 0.391608 | 0.097902 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146903 | 565 | 14 | 79 | 40.357143 | 0.890041 | 0.454867 | 0 | 0 | 0 | 0 | 0.036667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 4 |
d59685f854e8ccd62e0fa790f91954d7178e71bf | 3,259 | py | Python | L1Trigger/L1THGCalUtilities/python/clustering2d.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | L1Trigger/L1THGCalUtilities/python/clustering2d.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | L1Trigger/L1THGCalUtilities/python/clustering2d.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from L1Trigger.L1THGCal.hgcalBackEndLayer1Producer_cfi import dummy_C2d_params, \
                                                              distance_C2d_params, \
                                                              topological_C2d_params, \
                                                              constrTopological_C2d_params
from L1Trigger.L1THGCal.customClustering import set_threshold_params


def create_distance(process, inputs,
                    distance=distance_C2d_params.dR_cluster,  # cm
                    seed_threshold=distance_C2d_params.seeding_threshold_silicon,  # MipT
                    cluster_threshold=distance_C2d_params.clustering_threshold_silicon  # MipT
                    ):
    producer = process.hgcalBackEndLayer1Producer.clone(
        InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
    )
    producer.ProcessorParameters.C2d_parameters = distance_C2d_params.clone(
        dR_cluster = distance
    )
    set_threshold_params(producer.ProcessorParameters.C2d_parameters, seed_threshold, cluster_threshold)
    return producer


def create_topological(process, inputs,
                       seed_threshold=topological_C2d_params.seeding_threshold_silicon,  # MipT
                       cluster_threshold=topological_C2d_params.clustering_threshold_silicon  # MipT
                       ):
    producer = process.hgcalBackEndLayer1Producer.clone(
        InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
    )
    producer.ProcessorParameters.C2d_parameters = topological_C2d_params.clone()
    set_threshold_params(producer.ProcessorParameters.C2d_parameters, seed_threshold, cluster_threshold)
    return producer


def create_constrainedtopological(process, inputs,
                                  distance=constrTopological_C2d_params.dR_cluster,  # cm
                                  seed_threshold=constrTopological_C2d_params.seeding_threshold_silicon,  # MipT
                                  cluster_threshold=constrTopological_C2d_params.clustering_threshold_silicon  # MipT
                                  ):
    producer = process.hgcalBackEndLayer1Producer.clone(
        InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
    )
    producer.ProcessorParameters.C2d_parameters = constrTopological_C2d_params.clone(
        dR_cluster = distance
    )
    set_threshold_params(producer.ProcessorParameters.C2d_parameters, seed_threshold, cluster_threshold)
    return producer


def create_dummy(process, inputs):
    producer = process.hgcalBackEndLayer1Producer.clone(
        InputTriggerCells = cms.InputTag('{}:HGCalConcentratorProcessorSelection'.format(inputs))
    )
    producer.ProcessorParameters.C2d_parameters = dummy_C2d_params.clone()
    return producer


def create_truth_dummy(process, inputs):
    producer = process.hgcalBackEndLayer1Producer.clone(
        InputTriggerCells = cms.InputTag('{}'.format(inputs))
    )
    producer.ProcessorParameters.C2d_parameters = dummy_C2d_params.clone()
    return producer
| 50.921875 | 117 | 0.673826 | 264 | 3,259 | 7.996212 | 0.162879 | 0.072478 | 0.11369 | 0.151587 | 0.759356 | 0.759356 | 0.759356 | 0.728091 | 0.654192 | 0.654192 | 0 | 0.01462 | 0.265419 | 3,259 | 63 | 118 | 51.730159 | 0.867168 | 0.010739 | 0 | 0.444444 | 0 | 0 | 0.0479 | 0.047278 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092593 | false | 0 | 0.055556 | 0 | 0.240741 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
633573cd077ee1c3b8a545036bc6265189e4e4a1 | 692 | py | Python | help.py | c1c1/SSH-Cisco-Config | 9c425ec3a6e5c5dcdfbca9c4bc20cf1e76f8d96a | [
"MIT"
] | 8 | 2017-02-07T15:56:28.000Z | 2021-11-02T17:33:11.000Z | help.py | c1c1/SSH-Cisco-Config | 9c425ec3a6e5c5dcdfbca9c4bc20cf1e76f8d96a | [
"MIT"
] | null | null | null | help.py | c1c1/SSH-Cisco-Config | 9c425ec3a6e5c5dcdfbca9c4bc20cf1e76f8d96a | [
"MIT"
] | 3 | 2020-06-14T19:15:33.000Z | 2021-11-02T17:33:16.000Z | import bcolors
import os
#######################################################################
# HELP MENU
def help():
    os.system("clear")
    print("#####################################################################")
    print("#                                                                   #")
    print("#" + bcolors.bcolors.FAIL + " Please use: CLI.py <hostsfile> <commandsfile>" + bcolors.bcolors.ENDC + "                     #")
    print("#                                                                   #")
    print("#                       hhugomarques@gmail.com                      #")
    print("#####################################################################")
| 49.428571 | 136 | 0.245665 | 32 | 692 | 5.3125 | 0.625 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.381503 | 692 | 13 | 137 | 53.230769 | 0.397196 | 0.013006 | 0 | 0.4 | 0 | 0 | 0.686885 | 0.262295 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
63431b0c484ebf2f08607739f42be57bdf676a84 | 99 | py | Python | src/universities/apps.py | Busaka/excellence | 1cd19770285584d61aeddd77d6c1dd83e2fd04ba | [
"MIT"
] | 3 | 2019-03-13T00:44:31.000Z | 2019-06-05T08:20:55.000Z | server/universities/apps.py | ShahriarDhruvo/HackTheVerse_SUST_NOOBs | e884e47e5e987eac45f86faacc78be7db6e588ac | [
"MIT"
] | 13 | 2019-03-17T16:53:02.000Z | 2022-03-11T23:42:13.000Z | server/universities/apps.py | ShahriarDhruvo/HackTheVerse_SUST_NOOBs | e884e47e5e987eac45f86faacc78be7db6e588ac | [
"MIT"
] | 4 | 2019-03-17T14:58:46.000Z | 2020-07-05T15:20:28.000Z | from django.apps import AppConfig


class UniversitiesConfig(AppConfig):
    name = 'universities'
| 16.5 | 36 | 0.777778 | 10 | 99 | 7.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 99 | 5 | 37 | 19.8 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
6355661d952d92e1c668b127da8b0bbe16b87d91 | 252 | py | Python | gehomesdk/erd/values/water_filter/erd_waterfilter_life.py | willhayslett/gehome | 7e407a1d31cede1453656eaef948332e808484ea | [
"MIT"
] | 17 | 2021-05-18T01:58:06.000Z | 2022-03-22T20:49:32.000Z | gehomesdk/erd/values/water_filter/erd_waterfilter_life.py | willhayslett/gehome | 7e407a1d31cede1453656eaef948332e808484ea | [
"MIT"
] | 29 | 2021-05-17T21:43:16.000Z | 2022-02-28T22:50:48.000Z | gehomesdk/erd/values/water_filter/erd_waterfilter_life.py | willhayslett/gehome | 7e407a1d31cede1453656eaef948332e808484ea | [
"MIT"
] | 9 | 2021-05-17T04:40:58.000Z | 2022-02-02T17:26:13.000Z | from datetime import timedelta
from typing import NamedTuple, Optional
import humanize


class ErdWaterFilterLifeRemaining(NamedTuple):
    life_remaining: int

    def stringify(self, **kwargs) -> Optional[str]:
        return self.life_remaining
| 22.909091 | 51 | 0.757937 | 27 | 252 | 7 | 0.703704 | 0.137566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 252 | 10 | 52 | 25.2 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0.142857 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 4 |
635d34cf8203a37be6ad99b0e72cf732e160246f | 97 | py | Python | project_code_helpers/code_helpers/apps.py | lorenzowind/CodeHelpers | 1f47477256e62d266800fbca6b9ff08d6c32f631 | [
"MIT"
] | null | null | null | project_code_helpers/code_helpers/apps.py | lorenzowind/CodeHelpers | 1f47477256e62d266800fbca6b9ff08d6c32f631 | [
"MIT"
] | 2 | 2021-03-30T13:57:30.000Z | 2021-04-08T21:23:20.000Z | project_code_helpers/code_helpers/apps.py | lorenzowind/CodeHelpers | 1f47477256e62d266800fbca6b9ff08d6c32f631 | [
"MIT"
] | 1 | 2022-03-23T14:37:22.000Z | 2022-03-23T14:37:22.000Z | from django.apps import AppConfig


class CodeHelpersConfig(AppConfig):
    name = 'code_helpers'
| 19.4 | 35 | 0.783505 | 11 | 97 | 6.818182 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14433 | 97 | 4 | 36 | 24.25 | 0.903614 | 0 | 0 | 0 | 0 | 0 | 0.123711 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
63737aef735b2cad93633d7ac01f63843bf9c86b | 109 | py | Python | other_tests/funtype.py | nuua-io/Nuua | d74bec22d09d25f2bc0ced8d7c9a154ff84a874d | [
"MIT"
] | 43 | 2018-11-17T02:08:09.000Z | 2022-03-03T14:50:02.000Z | other_tests/funtype.py | nuua-io/Nuua | d74bec22d09d25f2bc0ced8d7c9a154ff84a874d | [
"MIT"
] | 2 | 2019-08-07T03:16:51.000Z | 2021-05-17T03:05:08.000Z | other_tests/funtype.py | nuua-io/Nuua | d74bec22d09d25f2bc0ced8d7c9a154ff84a874d | [
"MIT"
] | 3 | 2019-01-07T18:43:35.000Z | 2021-07-21T12:12:23.000Z | def test():
    def test2():
        return 0
    test3 = lambda: 0
    print(test2)
    print(test3)
test()
| 13.625 | 21 | 0.53211 | 14 | 109 | 4.142857 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.33945 | 109 | 7 | 22 | 15.571429 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.142857 | 0.428571 | 0.285714 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 4 |
894cc16431515cb926163397ba4179e1fcdad934 | 1,267 | py | Python | variation/translators/__init__.py | GenomicMedLab/varlex | 9e53906a5f4e41afb4480487a4d03b0c218a1d57 | [
"MIT"
] | null | null | null | variation/translators/__init__.py | GenomicMedLab/varlex | 9e53906a5f4e41afb4480487a4d03b0c218a1d57 | [
"MIT"
] | 3 | 2020-06-26T15:19:31.000Z | 2021-02-04T21:14:37.000Z | variation/translators/__init__.py | GenomicMedLab/varlex | 9e53906a5f4e41afb4480487a4d03b0c218a1d57 | [
"MIT"
] | null | null | null | """Translator package import."""
from .translate import Translate # noqa: F401
from .translator import Translator # noqa: F401
from .amino_acid_substitution import AminoAcidSubstitution # noqa: F401
from .polypeptide_truncation import PolypeptideTruncation # noqa: F401
from .silent_mutation import SilentMutation # noqa: F401
from .coding_dna_substitution import CodingDNASubstitution # noqa: F401
from .genomic_substitution import GenomicSubstitution # noqa: F401
from .coding_dna_silent_mutation import CodingDNASilentMutation # noqa: F401
from .genomic_silent_mutation import GenomicSilentMutation # noqa: F401
from .amino_acid_delins import AminoAcidDelIns # noqa: F401
from .coding_dna_delins import CodingDNADelIns # noqa: F401
from .genomic_delins import GenomicDelIns # noqa: F401
from .amino_acid_deletion import AminoAcidDeletion # noqa: F401
from .coding_dna_deletion import CodingDNADeletion # noqa: F401
from .genomic_deletion import GenomicDeletion # noqa: F401
from .amino_acid_insertion import AminoAcidInsertion # noqa: F401
from .coding_dna_insertion import CodingDNAInsertion # noqa: F401
from .genomic_insertion import GenomicInsertion # noqa: F401
from .genomic_uncertain_deletion import GenomicUncertainDeletion # noqa: F401
| 60.333333 | 78 | 0.827151 | 146 | 1,267 | 6.979452 | 0.260274 | 0.149166 | 0.211973 | 0.111874 | 0.185476 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051259 | 0.122336 | 1,267 | 20 | 79 | 63.35 | 0.865108 | 0.186267 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
8962806da9dff786b00552e48563f6c73ab135de | 964 | py | Python | 001_Introduccion/005_listas.py | cobymotion/PythonCourse | 3dcf4ab8cd59210f3d806aa79142fbc94240bc9e | [
"Apache-2.0"
] | null | null | null | 001_Introduccion/005_listas.py | cobymotion/PythonCourse | 3dcf4ab8cd59210f3d806aa79142fbc94240bc9e | [
"Apache-2.0"
] | null | null | null | 001_Introduccion/005_listas.py | cobymotion/PythonCourse | 3dcf4ab8cd59210f3d806aa79142fbc94240bc9e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Jan 24 12:41:16 2019
@author: Luis Cobian
Practice 4: lists
"""
mi_lista = ["cadenas",16,65.2,True]
print(mi_lista)
# Add values to the list
mi_lista.append(7)
print(mi_lista)
# Insert a value into the list
mi_lista.insert(2,"Insertado")
print(mi_lista)
# Remove a value
mi_lista.remove(16)
print(mi_lista)
# Lists can work as stacks
valor = mi_lista.pop()
print(valor)
print (mi_lista)
# sorting lists
# mi_lista.sort()  # not possible since the elements are not all the same type
mi_lista_enteros = [5,9,10,3,5,4,3]
mi_lista_enteros.sort();
print(mi_lista_enteros);
# In reverse order
mi_lista_enteros.sort(reverse=True)
print(mi_lista_enteros);
# join two lists
mi_lista_dos = [4,3,2]
mi_lista_enteros = mi_lista_enteros + mi_lista_dos
print(mi_lista_enteros);
# append one list inside another
mi_lista_enteros.append(mi_lista_dos)
print(mi_lista_enteros)
print(mi_lista_enteros[10])
print(mi_lista_enteros[10][1])
| 22.952381 | 67 | 0.760373 | 172 | 964 | 4.023256 | 0.412791 | 0.263006 | 0.242775 | 0.16474 | 0.228324 | 0.083815 | 0.083815 | 0 | 0 | 0 | 0 | 0.045775 | 0.116183 | 964 | 41 | 68 | 23.512195 | 0.766432 | 0.354772 | 0 | 0.347826 | 0 | 0 | 0.02649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.521739 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
896cf87afcb2e77f62c4cfa1b8f019b14a1d662d | 203 | py | Python | Cards/serializers.py | vabene1111/LearningCards | 00539c8d5d3063eecc306dd68eb3eeeac89dba9f | [
"MIT"
] | 1 | 2020-03-18T15:10:42.000Z | 2020-03-18T15:10:42.000Z | Cards/serializers.py | vabene1111/LearningCards | 00539c8d5d3063eecc306dd68eb3eeeac89dba9f | [
"MIT"
] | 1 | 2020-02-22T20:03:02.000Z | 2020-02-23T16:31:56.000Z | Cards/serializers.py | vabene1111/LearningCards | 00539c8d5d3063eecc306dd68eb3eeeac89dba9f | [
"MIT"
] | null | null | null | from rest_framework import serializers
class SetPinSerializer(serializers.Serializer):
    pin = serializers.IntegerField()
    mode = serializers.IntegerField()
    state = serializers.IntegerField()
| 25.375 | 47 | 0.778325 | 18 | 203 | 8.722222 | 0.666667 | 0.43949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 203 | 7 | 48 | 29 | 0.902299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
899c050cb600bf6773fb636b5138dd3a4b04e0f8 | 71 | py | Python | imutils/ml/models/pl/__init__.py | JacobARose/image-utils | aa0e005c0b4df5198d188b074f4e21f8d8f97962 | [
"MIT"
] | null | null | null | imutils/ml/models/pl/__init__.py | JacobARose/image-utils | aa0e005c0b4df5198d188b074f4e21f8d8f97962 | [
"MIT"
] | null | null | null | imutils/ml/models/pl/__init__.py | JacobARose/image-utils | aa0e005c0b4df5198d188b074f4e21f8d8f97962 | [
"MIT"
] | null | null | null | """
imutils/ml/models/pl/__init__.py
"""
#from .classifier import * | 8.875 | 32 | 0.661972 | 9 | 71 | 4.777778 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140845 | 71 | 8 | 33 | 8.875 | 0.704918 | 0.816901 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
89bb0a8d5eabcf46c689d0f96d454ff321bb6b14 | 217 | py | Python | section4/video2/functions.py | PacktPublishing/Mastering-Python-3.x-3rd-Edition | addfb6b1ecbc788030be119318386e1261ba6f2a | [
"MIT"
] | 6 | 2019-04-10T17:27:30.000Z | 2021-11-08T13:10:37.000Z | section4/video2/functions.py | PacktPublishing/Mastering-Python-3.x | 526f2d02266fa6c0a5badf892f2db177b2f52f64 | [
"MIT"
] | 2 | 2021-06-01T23:45:41.000Z | 2021-06-02T00:07:56.000Z | section4/video2/functions.py | PacktPublishing/Mastering-Python-3.x | 526f2d02266fa6c0a5badf892f2db177b2f52f64 | [
"MIT"
] | 8 | 2019-05-02T20:56:37.000Z | 2021-09-02T08:55:06.000Z | def add(pair):
    return pair[0] + pair[1]


def even(a):
    return a % 2 == 0


def map(func, objects):
    return [func(x) for x in objects]


def filter(func, objects):
    return [x for x in objects if func(x)]
| 14.466667 | 42 | 0.603687 | 39 | 217 | 3.358974 | 0.435897 | 0.167939 | 0.259542 | 0.10687 | 0.21374 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024845 | 0.258065 | 217 | 14 | 43 | 15.5 | 0.78882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 4 |
98209252b2c42bf5929f2fffa7e7298613add98f | 716 | py | Python | kinopoisk_unofficial/client/reviews_api_client.py | masterWeber/kinopoisk-api-unofficial-client | 5c95e1ec6e43bd302399b63a1525ee7e61724155 | [
"MIT"
] | 2 | 2021-11-13T12:23:41.000Z | 2021-12-24T14:09:49.000Z | kinopoisk_unofficial/client/reviews_api_client.py | masterWeber/kinopoisk-api-unofficial-client | 5c95e1ec6e43bd302399b63a1525ee7e61724155 | [
"MIT"
] | 1 | 2022-03-29T19:13:24.000Z | 2022-03-30T18:57:23.000Z | kinopoisk_unofficial/client/reviews_api_client.py | masterWeber/kinopoisk-api-unofficial-client | 5c95e1ec6e43bd302399b63a1525ee7e61724155 | [
"MIT"
] | 1 | 2021-11-13T12:30:01.000Z | 2021-11-13T12:30:01.000Z | from kinopoisk_unofficial.client.api_client import ApiClient
from kinopoisk_unofficial.request.reviews.review_details_request import ReviewDetailsRequest
from kinopoisk_unofficial.request.reviews.reviews_request import ReviewsRequest
from kinopoisk_unofficial.response.reviews.review_details_response import ReviewDetailsResponse
from kinopoisk_unofficial.response.reviews.reviews_response import ReviewsResponse
class ReviewsApiClient(ApiClient):
    def send_reviews_request(self, request: ReviewsRequest) -> ReviewsResponse:
        return self._send_request(request)

    def send_review_details_request(self, request: ReviewDetailsRequest) -> ReviewDetailsResponse:
        return self._send_request(request)
| 51.142857 | 98 | 0.857542 | 76 | 716 | 7.802632 | 0.289474 | 0.109612 | 0.193929 | 0.10118 | 0.347386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090782 | 716 | 13 | 99 | 55.076923 | 0.910906 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.5 | 0.2 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 4 |
982e0ba26d302bcf0055ad821f10034e9954631a | 154 | py | Python | producer/app/config.py | gimmesomethinggood/apache-nifi-kafka | 5cdf58727a450dc2685412bd80c8c4e7379bc163 | [
"Apache-2.0"
] | 2 | 2020-07-07T15:28:05.000Z | 2020-12-23T03:42:05.000Z | producer/app/config.py | gimmesomethinggood/apache-nifi-kafka | 5cdf58727a450dc2685412bd80c8c4e7379bc163 | [
"Apache-2.0"
] | null | null | null | producer/app/config.py | gimmesomethinggood/apache-nifi-kafka | 5cdf58727a450dc2685412bd80c8c4e7379bc163 | [
"Apache-2.0"
] | 7 | 2020-10-14T14:22:07.000Z | 2022-03-27T02:53:05.000Z | import os
class Config(object):
bootstrap_server = os.environ['BOOTSTRAP_SERVER']
topic = os.environ['TOPIC']
covid_api = os.environ['API_COVID']
| 19.25 | 51 | 0.727273 | 21 | 154 | 5.142857 | 0.52381 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 154 | 7 | 52 | 22 | 0.81203 | 0 | 0 | 0 | 0 | 0 | 0.194805 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
983d6f24c6da46a0b6dd04d6203c5d4e34b05d13 | 83 | py | Python | runapp.py | Unixeno/PicMe | 376f486c8c7375d6a43eaed4139988090c679e53 | [
"MIT"
] | 1 | 2019-06-23T03:28:04.000Z | 2019-06-23T03:28:04.000Z | runapp.py | Unixeno/PicMe | 376f486c8c7375d6a43eaed4139988090c679e53 | [
"MIT"
] | 1 | 2019-06-23T03:29:22.000Z | 2019-06-23T03:29:22.000Z | runapp.py | Unixeno/PicMe | 376f486c8c7375d6a43eaed4139988090c679e53 | [
"MIT"
] | 1 | 2019-06-23T03:28:20.000Z | 2019-06-23T03:28:20.000Z | from app import instance
if __name__ == '__main__':
instance.run(debug=True)
| 13.833333 | 28 | 0.710843 | 11 | 83 | 4.636364 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180723 | 83 | 5 | 29 | 16.6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
98469c092125704e51d5dc39fa2ab001c8dd10a1 | 363 | py | Python | app/serializers.py | raptor419/privi | f92b70b98e5d02c553734e8c79969aba9d4158fa | [
"MIT"
] | null | null | null | app/serializers.py | raptor419/privi | f92b70b98e5d02c553734e8c79969aba9d4158fa | [
"MIT"
] | null | null | null | app/serializers.py | raptor419/privi | f92b70b98e5d02c553734e8c79969aba9d4158fa | [
"MIT"
] | null | null | null | from rest_framework import serializers
from .models import *
class QuestionSerializer(serializers.ModelSerializer):
class Meta:
model = Question
fields = ['content', 'options', 'max_time', 'correct_option']
class SnippetSerializer(serializers.ModelSerializer):
class Meta:
model = Question
fields = ['title', 'content']
| 25.928571 | 69 | 0.694215 | 34 | 363 | 7.323529 | 0.617647 | 0.208835 | 0.248996 | 0.281125 | 0.433735 | 0.433735 | 0.433735 | 0 | 0 | 0 | 0 | 0 | 0.206612 | 363 | 13 | 70 | 27.923077 | 0.864583 | 0 | 0 | 0.4 | 0 | 0 | 0.132597 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
9847eb7ca7345f0393c5f63273bee5d2ef4b63cf | 694 | py | Python | clients/python-experimental/generated/openapi_client/api/location_api.py | cliffano/pokeapi-clients | 92af296c68c3e94afac52642ae22057faaf071ee | [
"MIT"
] | null | null | null | clients/python-experimental/generated/openapi_client/api/location_api.py | cliffano/pokeapi-clients | 92af296c68c3e94afac52642ae22057faaf071ee | [
"MIT"
] | null | null | null | clients/python-experimental/generated/openapi_client/api/location_api.py | cliffano/pokeapi-clients | 92af296c68c3e94afac52642ae22057faaf071ee | [
"MIT"
] | null | null | null | # coding: utf-8
"""
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: 20220523
Generated by: https://openapi-generator.tech
"""
from openapi_client.api_client import ApiClient
from openapi_client.api.location_api_endpoints.location_list import LocationList
from openapi_client.api.location_api_endpoints.location_read import LocationRead
class LocationApi(
LocationList,
LocationRead,
ApiClient,
):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
pass
| 25.703704 | 124 | 0.756484 | 86 | 694 | 5.988372 | 0.523256 | 0.15534 | 0.099029 | 0.116505 | 0.186408 | 0.186408 | 0.186408 | 0.186408 | 0 | 0 | 0 | 0.020725 | 0.165706 | 694 | 26 | 125 | 26.692308 | 0.868739 | 0.507205 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.111111 | 0.333333 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 4 |
988d9cc0f6aeebdb7692d97235402a16fff98a23 | 99 | py | Python | functions.py | DanielAdeyemi/CS50_Web_Python_practice | 435e25c1967d8792c93db162878a7e80832cc32d | [
"MIT"
] | null | null | null | functions.py | DanielAdeyemi/CS50_Web_Python_practice | 435e25c1967d8792c93db162878a7e80832cc32d | [
"MIT"
] | null | null | null | functions.py | DanielAdeyemi/CS50_Web_Python_practice | 435e25c1967d8792c93db162878a7e80832cc32d | [
"MIT"
] | null | null | null | def square(x):
return x*x
for i in range(10):
print(f"Square of {i+1} is {square(i+1)}")
| 14.142857 | 46 | 0.575758 | 21 | 99 | 2.714286 | 0.666667 | 0.070175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0.232323 | 99 | 6 | 47 | 16.5 | 0.697368 | 0 | 0 | 0 | 0 | 0 | 0.323232 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 0.5 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 4 |
988f8cace1ba3b520995f676b9468eab6a0a5ab8 | 208 | py | Python | main_app/forms.py | safatalnur/bookApp | 416db6f268ad00fa992a6d50c0ce5d161b057ace | [
"MIT"
] | null | null | null | main_app/forms.py | safatalnur/bookApp | 416db6f268ad00fa992a6d50c0ce5d161b057ace | [
"MIT"
] | 7 | 2021-03-30T14:06:37.000Z | 2022-03-12T00:41:19.000Z | main_app/forms.py | safatalnur/bookApp | 416db6f268ad00fa992a6d50c0ce5d161b057ace | [
"MIT"
] | null | null | null | from django import forms
from . import models
class CreateBook(forms.ModelForm):
class Meta:
model = models.Book
fields = ['title', 'author', 'illustrated', 'age', 'bookImage', 'bookPdf'] | 29.714286 | 82 | 0.658654 | 23 | 208 | 5.956522 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206731 | 208 | 7 | 82 | 29.714286 | 0.830303 | 0 | 0 | 0 | 0 | 0 | 0.196172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
98b11dcfe7fb52cfcec24b693d650a283184c244 | 22 | py | Python | tests/__init__.py | Nachtfeuer/concept-py | 64e1f82de144f959cdf3c6dcf0f692bbc0ceb20f | [
"MIT"
] | 2 | 2019-03-02T18:50:24.000Z | 2019-12-19T14:15:42.000Z | tests/__init__.py | Nachtfeuer/concept-py | 64e1f82de144f959cdf3c6dcf0f692bbc0ceb20f | [
"MIT"
] | 10 | 2015-07-27T03:24:57.000Z | 2017-03-31T18:11:26.000Z | tests/__init__.py | Nachtfeuer/concept-py | 64e1f82de144f959cdf3c6dcf0f692bbc0ceb20f | [
"MIT"
] | null | null | null | """Package: tests."""
| 11 | 21 | 0.545455 | 2 | 22 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 22 | 1 | 22 | 22 | 0.6 | 0.681818 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
7f9534f6efe8b2cc66022720f3e2efef5e4398ca | 73 | py | Python | deepfry/__init__.py | skylarr1227/FlameCogs | f75afadaf5f73b97cf5925177597ffee06b81f6a | [
"MIT"
] | null | null | null | deepfry/__init__.py | skylarr1227/FlameCogs | f75afadaf5f73b97cf5925177597ffee06b81f6a | [
"MIT"
] | null | null | null | deepfry/__init__.py | skylarr1227/FlameCogs | f75afadaf5f73b97cf5925177597ffee06b81f6a | [
"MIT"
] | null | null | null | from .deepfry import Deepfry
def setup(bot):
bot.add_cog(Deepfry(bot))
| 14.6 | 28 | 0.753425 | 12 | 73 | 4.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123288 | 73 | 4 | 29 | 18.25 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
7f987c12d10531eb278e8a5b85bd2cff5197af3b | 222 | py | Python | gendiff/__init__.py | Zed-chi/python-project-lvl2 | b2c1c23170879ff3be3fb2edc1c41e282abb7405 | [
"MIT"
] | null | null | null | gendiff/__init__.py | Zed-chi/python-project-lvl2 | b2c1c23170879ff3be3fb2edc1c41e282abb7405 | [
"MIT"
] | null | null | null | gendiff/__init__.py | Zed-chi/python-project-lvl2 | b2c1c23170879ff3be3fb2edc1c41e282abb7405 | [
"MIT"
] | null | null | null | from .scripts.parsers import get_differ
from .scripts.utils import diff_to_str
def generate_diff(a, b, format="json"):
differ = get_differ(format)
diff_summary = differ(a, b)
return diff_to_str(diff_summary)
| 24.666667 | 39 | 0.747748 | 35 | 222 | 4.485714 | 0.514286 | 0.140127 | 0.11465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157658 | 222 | 8 | 40 | 27.75 | 0.839572 | 0 | 0 | 0 | 1 | 0 | 0.018018 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
7f9b5d1b700de003d475f14f137fc74bf64c6503 | 6,846 | py | Python | covigator/tests/unit_tests/test_precomputer.py | TRON-Bioinformatics/covigator | 59cd5012217cb043d97c77ce5273d8930e74390d | [
"MIT"
] | 7 | 2021-07-23T14:09:51.000Z | 2022-01-26T20:26:27.000Z | covigator/tests/unit_tests/test_precomputer.py | TRON-Bioinformatics/covigator | 59cd5012217cb043d97c77ce5273d8930e74390d | [
"MIT"
] | 2 | 2021-07-27T08:30:22.000Z | 2022-02-22T20:06:05.000Z | covigator/tests/unit_tests/test_precomputer.py | TRON-Bioinformatics/covigator | 59cd5012217cb043d97c77ce5273d8930e74390d | [
"MIT"
] | null | null | null | from sqlalchemy import and_, func
from covigator.database.model import PrecomputedSynonymousNonSynonymousCounts, RegionType, DataSource, \
PrecomputedOccurrence
from covigator.precomputations.load_ns_s_counts import NsSCountsLoader
from covigator.precomputations.load_top_occurrences import TopOccurrencesLoader
from covigator.precomputations.loader import PrecomputationsLoader
from covigator.tests.unit_tests.abstract_test import AbstractTest
from covigator.tests.unit_tests.mocked import mock_samples_and_variants, MOCKED_GENES, MOCKED_DOMAINS
class TestPrecomputer(AbstractTest):
def setUp(self) -> None:
mock_samples_and_variants(session=self.session, faker=self.faker, num_samples=100)
self.ns_counts_loader = NsSCountsLoader(session=self.session)
self.top_occurrences_loader = TopOccurrencesLoader(session=self.session)
self.precomputations_loader = PrecomputationsLoader(session=self.session)
def test_load_dn_ds(self):
self.ns_counts_loader.load()
for g in MOCKED_GENES:
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.GENE.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == g)).count(),
0)
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.GENE.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == g,
PrecomputedSynonymousNonSynonymousCounts.source == DataSource.ENA.name)).count(),
0)
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.GENE.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == g,
PrecomputedSynonymousNonSynonymousCounts.source == DataSource.GISAID.name)).count(),
0)
self.assertEqual(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type != RegionType.GENE.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == g)).count(),
0)
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.CODING_REGION.name)).count(), 0)
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.CODING_REGION.name,
PrecomputedSynonymousNonSynonymousCounts.source == DataSource.ENA.name)).count(), 0)
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.CODING_REGION.name,
PrecomputedSynonymousNonSynonymousCounts.source == DataSource.GISAID.name)).count(), 0)
s_genes = self.session.query(func.sum(PrecomputedSynonymousNonSynonymousCounts.s)).filter(
PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.GENE.name).scalar()
s_coding_region = self.session.query(func.sum(PrecomputedSynonymousNonSynonymousCounts.s)).filter(
PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.CODING_REGION.name).scalar()
self.assertEqual(s_genes, s_coding_region)
ns_genes = self.session.query(func.sum(PrecomputedSynonymousNonSynonymousCounts.ns)).filter(
PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.GENE.name).scalar()
ns_coding_region = self.session.query(func.sum(PrecomputedSynonymousNonSynonymousCounts.ns)).filter(
PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.CODING_REGION.name).scalar()
self.assertEqual(ns_genes, ns_coding_region)
for d in MOCKED_DOMAINS:
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.DOMAIN.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == d)).count(),
0)
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.DOMAIN.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == d,
PrecomputedSynonymousNonSynonymousCounts.source == DataSource.ENA.name)).count(),
0)
self.assertGreater(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type == RegionType.DOMAIN.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == d,
PrecomputedSynonymousNonSynonymousCounts.source == DataSource.GISAID.name)).count(),
0)
self.assertEqual(
self.session.query(PrecomputedSynonymousNonSynonymousCounts).filter(
and_(PrecomputedSynonymousNonSynonymousCounts.region_type != RegionType.DOMAIN.name,
PrecomputedSynonymousNonSynonymousCounts.region_name == d)).count(),
0)
def test_load_precomputed_occurrences(self):
self.assertEqual(self.session.query(PrecomputedOccurrence).count(), 0)
self.precomputations_loader.load_table_counts() # table counts precomputations are needed
self.top_occurrences_loader.load()
self.assertGreater(self.session.query(PrecomputedOccurrence).count(), 0)
for g in MOCKED_GENES:
occurrences = self.session.query(PrecomputedOccurrence).filter(PrecomputedOccurrence.gene_name == g).all()
self.assertGreater(len(occurrences), 0)
for o in occurrences:
self.assertGreater(o.total, 0)
self.assertGreater(o.frequency, 0.0)
self.assertIsNotNone(o.variant_id)
self.assertIsNotNone(o.gene_name)
self.assertIsNotNone(o.domain)
self.assertIsNotNone(o.annotation)
| 63.388889 | 120 | 0.69515 | 544 | 6,846 | 8.577206 | 0.147059 | 0.226747 | 0.061723 | 0.192885 | 0.729318 | 0.703601 | 0.685169 | 0.679597 | 0.668453 | 0.668453 | 0 | 0.003778 | 0.226702 | 6,846 | 107 | 121 | 63.981308 | 0.877597 | 0.005697 | 0 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22449 | 1 | 0.030612 | false | 0 | 0.071429 | 0 | 0.112245 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
7fb7fffb25d732801371b6ac097b20f32a48998c | 3,727 | py | Python | src/backend/tests/fixtures/post_login_page.py | sico/recordexpungPDX | c2f18322014add7c78b27736bde2d29d1d086aa8 | [
"MIT"
] | 38 | 2019-05-09T03:13:43.000Z | 2022-03-16T22:59:25.000Z | src/backend/tests/fixtures/post_login_page.py | sico/recordexpungPDX | c2f18322014add7c78b27736bde2d29d1d086aa8 | [
"MIT"
] | 938 | 2019-05-02T15:13:21.000Z | 2022-02-27T20:59:00.000Z | src/backend/tests/fixtures/post_login_page.py | kenichi/recordexpungPDX | 100d9249473a01953451b83a72ec1b74574acc43 | [
"MIT"
] | 65 | 2019-05-09T03:28:12.000Z | 2022-03-21T00:06:39.000Z | class PostLoginPage:
POST_LOGIN_PAGE = """
<html>
<head>
</head>
<body>
<table cellspacing="0" cellpadding="0" width="100%" height="100%" border="0" style="table-layout: fixed;"><tr><td style="height:83px"><table cellspacing="0" cellpadding="0" width="100%" border="0" style="table-layout: fixed; margin:0px; padding:0px;"><tr><td class="ssHeaderTitleBanner"></td></tr></table><table cellspacing="0" cellpadding="0" width="100%" border="0" style="table-layout: fixed; margin:0px; padding:0px;"><tr><td bgcolor="#000000" height="20px"><table cellspacing="0" cellpadding="0" width="100%" border="0"><tr><td align="left" style="padding-left: 5px"><font size="1"><a class="ssBlackNavBarHyperlink" href="#MainContent"></a> <a class="ssBlackNavBarHyperlink" href="logout.aspx">Logout</a> <a class="ssBlackNavBarHyperlink" href="MyAccount.aspx?ReturnURL=default.aspx"></a> </font></td><td align="center" class="ssBlackNavBarLocation"></td><td align="right" style="padding-right: 10px"><table cellspacing="0" cellpadding="0" border="0"><tr><td><font size="1"><a class="ssBlackNavBarHyperlink" target="_blank" href="http://www.courts.oregon.gov/services/online/Documents/OJCIN/OECI/PA_QRefG_OJIN.pdf"></a></font></td></tr></table></td></tr></table></td></tr></table></td></tr><tr height="*"><td><a id="MainContent" name="MainContent" tabindex="-1"></a><table cellspacing="0" cellpadding="0" height="300" width="100%" border="0" style="table-layout: fixed"><tr><td align="center"><img src="Images/ad_PA_ecourt.gif" alt="Welcome to Oregon eCourt Case Information"></img></td><td><div class="ssLaunchProductTitle" style="width: 200px">Case Records</div><label class="ssLogin" for="sbxControlID2"></label><br /><select id="sbxControlID2" onchange="LocationChange(this)"><option value="101100,102100,103100"></option><option value="555555"></option><option value="555555"></option></select><div> </div><a class="ssSearchHyperlink"></a><br /><a class="ssSearchHyperlink"></a><br /><a class="ssSearchHyperlink">r</a><br /><a class="ssSearchHyperlink" </a><br /><div 
id="divOption1"></div><div id="divOption2"></div><div id="divOption3"></div><div id="divOption4"></div><div id="divOption5"></div><div id="divOption6"></div><div id="divOption7"></div><div id="divOption8"></div><div id="divOption9"></div><div id="divOption10"></div><div id="divOption11"></div><div id="divOption12"></div><div id="divOption13"></div><div id="divOption14"></div><div id="divOption15"></div><div id="divOption16"></div><div id="divOption17"></div><div id="divOption18"></div><div id="divOption19"></div><div id="divOption20"></div><div id="divOption21"></div><div id="divOption22"></div><div id="divOption23"></div><div id="divOption24"></div><div id="divOption25"></div><div id="divOption26"></div><div id="divOption27"></div><div id="divOption28"></div><div id="divOption29"></div><div id="divOption30"></div><div id="divOption31"></div><div id="divOption32"></div><div id="divOption33"></div><div id="divOption34"></div><div id="divOption35"></div><div id="divOption36"></div><div id="divOption37"></div><div id="divOption38"></div><div id="divOption39"></div><div id="divOption40"></div><div id="divOption41"></div><div id="divOption42"></div><div id="divOption43"></div><div id="divOption44"></div><div id="divOption45"></div><div id="divOption46"></div><div id="divOption47"></div><div id="divOption48"></div><div id="divOption49"></div><div id="divOption50"></div><div id="divOption51"></div><div id="divOption52"></div><p></p></td></tr><tr><td class="ssMessageText" colspan="2"><BR/><BR/><BR/><a /><BR/><BR/><BR/><BR/></a>.</td></tr></table></td></tr><tr valign="bottom"><td></td></tr></table>
</body>
</html>
"""
| 310.583333 | 3,617 | 0.681245 | 531 | 3,727 | 4.768362 | 0.299435 | 0.123223 | 0.161137 | 0.066351 | 0.293049 | 0.249605 | 0.184439 | 0.158373 | 0.114534 | 0.06951 | 0 | 0.053687 | 0.050443 | 3,727 | 11 | 3,618 | 338.818182 | 0.661769 | 0 | 0 | 0 | 0 | 0.1 | 0.986316 | 0.656024 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
7fbc73c0e4b649a57a616a432a3e6afb9cadddb5 | 7,591 | py | Python | tests/test_integration_autoload.py | ColinKennedy/ways | 1eb44e4aa5e35fb839212cd8cb1c59c714ba10d3 | [
"MIT"
] | 2 | 2019-11-10T18:35:38.000Z | 2020-05-12T10:37:42.000Z | tests/test_integration_autoload.py | ColinKennedy/ways | 1eb44e4aa5e35fb839212cd8cb1c59c714ba10d3 | [
"MIT"
] | 5 | 2017-11-27T18:05:25.000Z | 2021-06-01T21:57:48.000Z | tests/test_integration_autoload.py | ColinKennedy/ways | 1eb44e4aa5e35fb839212cd8cb1c59c714ba10d3 | [
"MIT"
] | 1 | 2017-11-27T17:54:53.000Z | 2017-11-27T17:54:53.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
'''Ways uses a few techniques to automatically load its objects.
Plugin Sheets, Desctiptors, and Python plugin files all have different ways
of being added to the Ways cache so we'll test these methods, in this module.
'''
# IMPORT STANDARD LIBRARIES
import os
import tempfile
import textwrap
# IMPORT WAYS LIBRARIES
import ways.api
# IMPORT LOCAL LIBRARIES
# IMPORT 'LOCAL' LIBRARIES
from . import common_test
class AutoloadTestCase(common_test.ContextTestCase):
'''Test to that Plugins and Descriptors load in the HistoryCache.'''
def test_plugins_from_env_file(self):
'''Mimic a user adding plugins to a pathfinder environment variable.'''
plugin_file_contents = textwrap.dedent(
"""\
# IMPORT STANDARD LIBRARIES
import tempfile
import json
import os
# IMPORT THIRD-PARTY LIBRARIES
from ways.base import cache
import ways.api
def main():
class SomeNewAssetClass(object):
'''Some class that will take the place of our Asset.'''
def __init__(self, info):
'''Create the object.'''
super(SomeNewAssetClass, self).__init__()
self.context = context
def a_custom_init_function(info, context, *args, **kwargs):
'''Purposefully ignore the context that gets passed.'''
return SomeNewAssetClass(info, *args, **kwargs)
def make_plugin_folder_with_plugin_load(contents):
'''str: Make a folder and put a plugin inside of it.'''
folder = tempfile.mkdtemp()
plugin_file = os.path.join(folder, 'example_plugin' + '.json')
with open(plugin_file, 'w') as file_:
json.dump(contents, file_)
return plugin_file
contents = {
'globals': {},
'plugins': {
'a_parse_plugin': {
'mapping': '/jobs/{JOB}/some_kind/of/real_folders',
'mapping_details': {
'JOB': {
'parse': {
'regex': '.+',
},
'required': False,
},
},
'hierarchy': 'some/thing2/context',
},
},
}
path = make_plugin_folder_with_plugin_load(contents)
ways.api.add_search_path(path)
# Create a default Asset
some_path = '/jobs/some_job/some_kind/of/real_folders'
asset = ways.api.get_asset(some_path, context='some/thing2/context')
asset_is_default_asset_type = isinstance(asset, ways.api.Asset)
# Register a new class type for our Context
context = ways.api.get_context('some/thing2/context')
ways.api.register_asset_class(
SomeNewAssetClass, context, init=a_custom_init_function)
""")
temp_file = tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False)
with temp_file as file_:
file_.write(plugin_file_contents)
os.environ[ways.api.PLUGINS_ENV_VAR] = temp_file.name
# Note: This method normally runs on init but because of other tests
# instantiating the HistoryCache, we just re-add our plugins
#
ways.api.init_plugins()
path = '/jobs/some_job/some_kind/of/real_folders'
asset = ways.api.get_asset(info=path, context='some/thing2/context')
self.assertFalse(isinstance(asset, ways.api.Asset))
def test_plugins_from_env_folder(self):
'''Mimic a user adding plugin folders to a pathfinder env var.'''
temp_directory = tempfile.mkdtemp()
plugin_file_contents = textwrap.dedent(
"""\
# IMPORT STANDARD LIBRARIES
import tempfile
import json
import os
# IMPORT THIRD-PARTY LIBRARIES
import ways.api
def main():
class SomeNewAssetClass(object):
'''Some class that will take the place of our Asset.'''
def __init__(self, info):
'''Create the object.'''
super(SomeNewAssetClass, self).__init__()
self.context = context
def a_custom_init_function(info, context, *args, **kwargs):
'''Purposefully ignore the context that gets passed.'''
return SomeNewAssetClass(info, *args, **kwargs)
def make_plugin_folder_with_plugin_load(contents):
'''str: Make a folder and put a plugin inside of it.'''
folder = tempfile.mkdtemp()
plugin_file = os.path.join(folder, 'example_plugin' + '.json')
with open(plugin_file, 'w') as file_:
json.dump(contents, file_)
return plugin_file
contents = {
'globals': {},
'plugins': {
'a_parse_plugin': {
'mapping': '/jobs/{JOB}/some_kind/of/real_folders',
'mapping_details': {
'JOB': {
'parse': {
'regex': '.+',
},
'required': False,
},
},
'hierarchy': 'some/thing2/context',
},
},
}
plugin_file = make_plugin_folder_with_plugin_load(contents=contents)
folder = os.path.dirname(plugin_file)
ways.api.add_search_path(folder)
# Create a default Asset
some_path = '/jobs/some_job/some_kind/of/real_folders'
asset = ways.api.get_asset(some_path, context='some/thing2/context')
asset_is_default_asset_type = isinstance(asset, ways.api.Asset)
# Register a new class type for our Context
context = ways.api.get_context('some/thing2/context')
ways.api.register_asset_class(
SomeNewAssetClass, context, init=a_custom_init_function)
""")
temp_file = tempfile.NamedTemporaryFile(suffix='.py').name
with open(os.path.join(temp_directory, os.path.basename(temp_file)), 'w') as file_:
file_.write(plugin_file_contents)
# Add the path to our env var
os.environ[ways.api.PLUGINS_ENV_VAR] = temp_directory
# Note: This method normally runs on init but because of other tests
# instantiating the HistoryCache, we just re-add our plugins
#
ways.api.init_plugins()
path = '/jobs/some_job/some_kind/of/real_folders'
self.assertFalse(
isinstance(ways.api.get_asset(info=path, context='some/thing2/context'),
ways.api.Asset))
| 36.671498 | 91 | 0.514688 | 747 | 7,591 | 5.034806 | 0.212851 | 0.039085 | 0.036161 | 0.020739 | 0.780378 | 0.74076 | 0.738899 | 0.701143 | 0.683595 | 0.683595 | 0 | 0.001967 | 0.397313 | 7,591 | 206 | 92 | 36.849515 | 0.820109 | 0.11013 | 0 | 0.266667 | 0 | 0 | 0.085948 | 0.05457 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.166667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
f6bc90298f7da30b3fe0b818a6365514c81e4b1d | 95 | py | Python | dymoesco/estimation/__init__.py | samlaf/dymoesco | 1695333aab8171f7a26062eb8ad7b0be38493d3d | [
"MIT"
] | null | null | null | dymoesco/estimation/__init__.py | samlaf/dymoesco | 1695333aab8171f7a26062eb8ad7b0be38493d3d | [
"MIT"
] | null | null | null | dymoesco/estimation/__init__.py | samlaf/dymoesco | 1695333aab8171f7a26062eb8ad7b0be38493d3d | [
"MIT"
] | null | null | null | """Subpackage for state estimation.
So far only filtering algorithms have been implemented.""" | 31.666667 | 58 | 0.789474 | 12 | 95 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126316 | 95 | 3 | 58 | 31.666667 | 0.903614 | 0.936842 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
f6e6cfe0c85551352293303d95f755a33ef09b45 | 156 | py | Python | __main__.py | thaije/gym-super-mario-bros | 5316881097bf6951bf3f9cfa9707a23d459fa2e6 | [
"MIT"
] | null | null | null | __main__.py | thaije/gym-super-mario-bros | 5316881097bf6951bf3f9cfa9707a23d459fa2e6 | [
"MIT"
] | null | null | null | __main__.py | thaije/gym-super-mario-bros | 5316881097bf6951bf3f9cfa9707a23d459fa2e6 | [
"MIT"
] | null | null | null | """The main execution script for this package for testing."""
from gym_super_mario_bros._cli import main
# execute the main entry point of the CLI
main()
| 22.285714 | 61 | 0.769231 | 26 | 156 | 4.461538 | 0.730769 | 0.12069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160256 | 156 | 6 | 62 | 26 | 0.885496 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
1006ea8ffd8ac03d1e62b581280868c8a5b24da0 | 1,685 | py | Python | pypy/rpython/microbench/list.py | camillobruni/pygirl | ddbd442d53061d6ff4af831c1eab153bcc771b5a | [
"MIT"
] | 12 | 2016-01-06T07:10:28.000Z | 2021-05-13T23:02:02.000Z | pypy/rpython/microbench/list.py | woodrow/pyoac | b5dc59e6a38e7912db47f26fb23ffa4764a3c0e7 | [
"MIT"
] | null | null | null | pypy/rpython/microbench/list.py | woodrow/pyoac | b5dc59e6a38e7912db47f26fb23ffa4764a3c0e7 | [
"MIT"
] | 2 | 2016-07-29T07:09:50.000Z | 2016-10-16T08:50:26.000Z | from pypy.rpython.microbench.microbench import MetaBench
class list__append:
__metaclass__ = MetaBench
def init():
return []
args = ['obj', 'i']
def loop(obj, i):
obj.append(i)
class list__get_item:
__metaclass__ = MetaBench
LOOPS = 100000000
def init():
obj = []
for i in xrange(1000):
obj.append(i)
return obj
args = ['obj', 'i']
def loop(obj, i):
return obj[i%1000]
class list__set_item:
__metaclass__ = MetaBench
LOOPS = 100000000
def init():
obj = []
for i in xrange(1000):
obj.append(i)
return obj
args = ['obj', 'i']
def loop(obj, i):
obj[i%1000] = i
class fixed_list__get_item:
__metaclass__ = MetaBench
LOOPS = 100000000
def init():
return [0] * 1000
args = ['obj', 'i']
def loop(obj, i):
return obj[i%1000]
class fixed_list__set_item:
__metaclass__ = MetaBench
LOOPS = 100000000
def init():
return [0] * 1000
args = ['obj', 'i']
def loop(obj, i):
obj[i%1000] = i
class list__iteration__int:
__metaclass__ = MetaBench
LOOPS = 100000
def init():
obj = [0]*1000
obj[0] = 42
return obj
args = ['obj']
def loop(obj):
tot = 0
for item in obj:
tot += item
return tot
class list__iteration__string:
__metaclass__ = MetaBench
LOOPS = 100000
def init():
obj = ['foo']*1000
obj[0] = 'bar'
return obj
args = ['obj']
def loop(obj):
tot = 0
for item in obj:
tot += len(item)
return tot
| 21.0625 | 56 | 0.530564 | 206 | 1,685 | 4.087379 | 0.179612 | 0.066508 | 0.083135 | 0.065321 | 0.748219 | 0.748219 | 0.748219 | 0.655582 | 0.629454 | 0.5962 | 0 | 0.089401 | 0.356083 | 1,685 | 79 | 57 | 21.329114 | 0.686636 | 0 | 0 | 0.791667 | 0 | 0 | 0.018991 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.194444 | false | 0 | 0.013889 | 0.069444 | 0.736111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 4 |
1010d78200749f22d415c401ff780b2f3d269a0b | 18 | py | Python | shopify/version.py | traaan/shopify_python_api | 23516d058963bfd2b98e5295072e984909fdbdc1 | [
"MIT"
] | null | null | null | shopify/version.py | traaan/shopify_python_api | 23516d058963bfd2b98e5295072e984909fdbdc1 | [
"MIT"
] | null | null | null | shopify/version.py | traaan/shopify_python_api | 23516d058963bfd2b98e5295072e984909fdbdc1 | [
"MIT"
] | null | null | null | VERSION = '8.2.0'
| 9 | 17 | 0.555556 | 4 | 18 | 2.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.166667 | 18 | 1 | 18 | 18 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0.277778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
120b994ac55684097a9c81d5a31c20406b10db07 | 89 | py | Python | REST_API/config.py | Shafiq-Kyazze/Netflix-Data-Pipeline-and-Rest-API | 9f549a29b8c6a2e5d346235e4f57de6cca8e6dd0 | [
"MIT"
] | null | null | null | REST_API/config.py | Shafiq-Kyazze/Netflix-Data-Pipeline-and-Rest-API | 9f549a29b8c6a2e5d346235e4f57de6cca8e6dd0 | [
"MIT"
] | null | null | null | REST_API/config.py | Shafiq-Kyazze/Netflix-Data-Pipeline-and-Rest-API | 9f549a29b8c6a2e5d346235e4f57de6cca8e6dd0 | [
"MIT"
] | null | null | null | """Config file"""
DATABASE_URI = "postgresql://****:**-***@rogue.db.elephantsql.com/**"
| 22.25 | 69 | 0.595506 | 9 | 89 | 5.777778 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067416 | 89 | 3 | 70 | 29.666667 | 0.626506 | 0.123596 | 0 | 0 | 0 | 0 | 0.722222 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
12146e90f5656d9fa48c4470b9c678213fa9e226 | 16,520 | py | Python | test/test_sql_filter.py | mafrosis/jiracli | 4fbca877ab90a61c8785b7f815c0de59abafbce1 | [
"MIT"
] | 1 | 2019-12-16T14:42:27.000Z | 2019-12-16T14:42:27.000Z | test/test_sql_filter.py | mafrosis/jiracli | 4fbca877ab90a61c8785b7f815c0de59abafbce1 | [
"MIT"
] | 13 | 2020-03-16T04:59:49.000Z | 2020-04-20T22:27:29.000Z | test/test_sql_filter.py | mafrosis/jiracli | 4fbca877ab90a61c8785b7f815c0de59abafbce1 | [
"MIT"
] | null | null | null | from unittest import mock
import pytest
from fixtures import ISSUE_1
from jira_offline.exceptions import FilterQueryEscapingError, FilterQueryParseFailed
from jira_offline.models import CustomFields, Issue, ProjectMeta, Sprint
from jira_offline.sql_filter import IssueFilter
def test_parse__bad_query__double_escaping():
    '''
    Ensure that a double-escaped query string raises an error
    '''
    filt = IssueFilter()

    with pytest.raises(FilterQueryEscapingError):
        filt.set("'summary == An eggcellent summarisation'")


@pytest.mark.parametrize('operator,search_term,count', [
    ('==', "'eggcellent'", 1),
    ('==', 'eggcellent', 1),
    ('!=', 'eggcellent', 1),
    ('!=', 'missing', 2),
    ('==', "'This is the story summary'", 1),
])
def test_parse__primitive_str(mock_jira, project, operator, search_term, count):
    '''
    Test string field ==,!= value filter
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'summary': 'This is the story summary'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'summary': 'eggcellent', 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(f"summary {operator} {search_term}")

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


def test_parse__primitive_project_eq_str(mock_jira, project):
    '''
    Test special-case project field EQUALS string filter

    The underlying field name is "project_key"
    '''
    # Setup test fixtures to target in the filter query
    mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)

    project_2 = ProjectMeta.factory('http://example.com/EGG')
    with mock.patch.dict(ISSUE_1, {'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project_2)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(f'project == {project_2.key}')

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == 1
    assert df.iloc[0]['key'] == 'FILT-1'


@pytest.mark.parametrize('where', [
    "summary LIKE 'eggcellent'",
    "summary LIKE eggcellent",
])
def test_parse__primitive_like_str(mock_jira, project, where):
    '''
    Test string field LIKE value filter
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'summary': 'This is the story summary'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'summary': 'An eggcellent summarisation', 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(where)

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == 1
    assert df.iloc[0]['key'] == 'FILT-1'


@pytest.mark.parametrize('fixture,operator,count', [
    (1111, '==', 1),
    (1111, '!=', 1),
    (1230, '<', 1),
    (1230, '<=', 2),
    (1232, '>', 1),
    (1232, '>=', 2),
])
def test_parse__primitive_int(mock_jira, project, fixture, operator, count):
    '''
    Test field ==,!=,<,<=,>,>= integer filter
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'id': 1231}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'id': fixture, 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(f"id {operator} 1231")

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


@pytest.mark.parametrize('operator,fixture,count', [
    ('<', '2018-09-24T08:44:05', 1),
    ('<=', '2018-09-24T08:44:05', 2),
    ('>', '2018-09-24T08:44:07', 1),
    ('>=', '2018-09-24T08:44:07', 2),
])
@mock.patch('jira_offline.sql_filter.IssueFilter.tz', new_callable=mock.PropertyMock)
def test_parse__primitive_datetime(mock_tz, mock_jira, timezone_project, operator, fixture, count):
    '''
    Test field <,<=,>,>= datetime filter
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'created': '2018-09-24T08:44:06'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project=timezone_project)
    with mock.patch.dict(ISSUE_1, {'created': fixture, 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project=timezone_project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(f"created {operator} '2018-09-24T08:44:06'")

    # Set the timezone of the date in the passed query (default is local system time)
    mock_tz.return_value = timezone_project.timezone

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


@pytest.mark.parametrize('operator,search_terms,count', [
    ('in', 'EGG', 1),
    ('in', 'BACON', 1),
    ('in', 'EGG, BACON', 1),
    ('in', '0.1', 2),
    ('in', 'EGG, BACON, 0.1', 2),
    ('in', 'MISSING', 0),
    ('not in', 'EGG', 1),
    ('not in', 'BACON', 1),
    ('not in', 'EGG, BACON', 1),
    ('not in', '0.1', 0),
    ('not in', 'EGG, BACON, 0.1', 0),
    ('not in', 'MISSING', 2),
])
def test_parse__primitive_list__set(mock_jira, project, operator, search_terms, count):
    '''
    Test set field IN/NOT IN a list of values
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'fix_versions': ['0.1']}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'fix_versions': ['EGG', 'BACON', '0.1'], 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(f"fix_versions {operator} ({search_terms})")

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


@pytest.mark.parametrize('operator,search_terms,count', [
    ('in', '"Story Done", Egg', 2),
    ('in', 'Egg', 1),
    ('in', '"Story Done"', 1),
    ('in', 'Egg, Missing', 1),
    ('in', 'Missing', 0),
    ('not in', '"Story Done", Egg', 0),
    ('not in', 'Egg', 1),
    ('not in', '"Story Done"', 1),
    ('not in', 'Egg, Missing', 1),
    ('not in', 'Missing', 2),
])
def test_parse__primitive_list__string(mock_jira, project, operator, search_terms, count):
    '''
    Test string field IN/NOT IN a list of values
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'status': 'Story Done'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'status': 'Egg', 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(f"status {operator} ({search_terms})")

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


@pytest.mark.parametrize('operator,search_terms,count', [
    ('in', '"Sprint 1", "Sprint 2"', 2),
    ('in', '"Sprint 1"', 1),
    ('in', '"Sprint 2"', 1),
    ('not in', '"Sprint 1", "Sprint 2"', 0),
    ('not in', '"Sprint 1"', 1),
    ('not in', '"Sprint 2"', 1),
])
def test_parse__primitive_list__sprint(mock_jira, operator, search_terms, count):
    '''
    Test sprint string IN/NOT IN a list of sprint objects.

    This is a special case, as sprint is stored in the DataFrame as a list of objects,
    not a simple list of strings.
    '''
    # Setup the project configuration with sprint customfield, and two sprints on the project
    project = ProjectMeta(
        key='TEST',
        jira_id='10000',
        customfields=CustomFields(sprint='customfield_10300'),
        sprints={
            1: Sprint(id=1, name='Sprint 1', active=True),
            2: Sprint(id=2, name='Sprint 2', active=False),
        },
    )
    mock_jira.config.projects = {project.id: project}

    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'sprint': 'Sprint 1'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'sprint': 'Sprint 2', 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(f"sprint {operator} ({search_terms}) AND project = TEST")

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


def test_parse__primitive_list__sprint_error(mock_jira):
    '''
    Test error raised when sprint is not valid for the supplied project.
    '''
    # Setup the project configuration with sprint customfield, and a single sprint on the project
    project = ProjectMeta(
        key='TEST',
        jira_id='10000',
        customfields=CustomFields(sprint='customfield_10300'),
        sprints={
            1: Sprint(id=1, name='Sprint 1', active=True),
        },
    )
    mock_jira.config.projects = {project.id: project}

    # Setup test fixture to target in the filter query
    with mock.patch.dict(ISSUE_1, {'sprint': 'Sprint 1'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 1

    filt = IssueFilter()
    filt.set("sprint IN (BadSprint) AND project = TEST")

    with mock.patch('jira_offline.jira.jira', mock_jira):
        with pytest.raises(FilterQueryParseFailed):
            filt.apply()


@pytest.mark.parametrize('where,count', [
    ('summary == eggcellent and creator == dave', 1),
    ('summary == notarealsummary and creator == dave', 0),
    ('summary == eggcellent and creator == dave and description == 1', 1),
    ('summary == eggcellent and creator == dave and description == 0', 0),
])
def test_parse__compound_and_eq_str(mock_jira, project, where, count):
    '''
    Test field EQUALS string AND otherfield EQUALS otherstring filter
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'summary': 'This is the story summary', 'creator': 'danil1', 'description': '1'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'summary': 'eggcellent', 'creator': 'dave', 'description': '1', 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(where)

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


@pytest.mark.parametrize('where,count', [
    ('summary == eggcellent or creator == dave', 1),
    ('summary == notarealsummary or creator == dave', 0),
    ('summary == notarealsummary or creator == dave or description == 1', 2),
    ('summary == notarealsummary or creator == noone or description == 0', 0),
])
def test_parse__compound_or_eq_str(mock_jira, project, where, count):
    '''
    Test field EQUALS string OR otherfield EQUALS otherstring filter
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'summary': 'This is the story summary', 'creator': 'danil1', 'description': '1'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)
    with mock.patch.dict(ISSUE_1, {'summary': 'eggcellent', 'creator': 'notarealcreator', 'description': '1', 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(where)

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


@pytest.mark.parametrize('where,count', [
    ("created > '2018-09-24T08:44:06' and created < '2018-09-24T08:44:08'", 1),
    ("created > '2018-09-24T08:44:06' and created <= '2018-09-24T08:44:07'", 1),
    ("created >= '2018-09-24T08:44:07' and created < '2018-09-24T08:44:08'", 1),
])
@mock.patch('jira_offline.sql_filter.IssueFilter.tz', new_callable=mock.PropertyMock)
def test_parse__compound_in_daterange(mock_tz, mock_jira, timezone_project, where, count):
    '''
    Test field BETWEEN two datetimes
    '''
    # Setup test fixtures to target in the filter query
    with mock.patch.dict(ISSUE_1, {'created': '2018-09-24T08:44:06'}):
        mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project=timezone_project)
    with mock.patch.dict(ISSUE_1, {'created': '2018-09-24T08:44:07', 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project=timezone_project)

    assert len(mock_jira) == 2

    filt = IssueFilter()
    filt.set(where)

    # Set the timezone of the date in the passed query (default is local system time)
    mock_tz.return_value = timezone_project.timezone

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


@pytest.mark.parametrize('operator,fixture,count', [
    ('==', '2018-09-23T12:00:00', 0),
    ('==', '2018-09-23T23:59:59', 0),
    ('==', '2018-09-24T00:00:00', 1),
    ('==', '2018-09-24T00:00:01', 1),
    ('==', '2018-09-24T12:00:00', 1),
    ('==', '2018-09-24T23:59:59', 1),
    ('==', '2018-09-25T00:00:00', 0),
    ('==', '2018-09-25T12:00:00', 0),
    ('<', '2018-09-23T12:00:00', 1),
    ('<', '2018-09-23T23:59:59', 1),
    ('<', '2018-09-24T00:00:00', 0),
    ('<', '2018-09-24T00:00:01', 0),
    ('<', '2018-09-24T12:00:00', 0),
    ('<', '2018-09-24T23:59:59', 0),
    ('<', '2018-09-25T00:00:00', 0),
    ('<', '2018-09-25T12:00:00', 0),
    ('<=', '2018-09-23T12:00:00', 1),
    ('<=', '2018-09-23T23:59:59', 1),
    ('<=', '2018-09-24T00:00:00', 1),
    ('<=', '2018-09-24T00:00:01', 1),
    ('<=', '2018-09-24T12:00:00', 1),
    ('<=', '2018-09-24T23:59:59', 1),
    ('<=', '2018-09-25T00:00:00', 0),
    ('<=', '2018-09-25T12:00:00', 0),
    ('>', '2018-09-23T12:00:00', 0),
    ('>', '2018-09-23T23:59:59', 0),
    ('>', '2018-09-24T00:00:00', 0),
    ('>', '2018-09-24T00:00:01', 0),
    ('>', '2018-09-24T12:00:00', 0),
    ('>', '2018-09-24T23:59:59', 0),
    ('>', '2018-09-25T00:00:00', 1),
    ('>', '2018-09-25T12:00:00', 1),
    ('>=', '2018-09-23T12:00:00', 0),
    ('>=', '2018-09-23T23:59:59', 0),
    ('>=', '2018-09-24T00:00:00', 1),
    ('>=', '2018-09-24T00:00:01', 1),
    ('>=', '2018-09-24T12:00:00', 1),
    ('>=', '2018-09-24T23:59:59', 1),
    ('>=', '2018-09-25T00:00:00', 1),
    ('>=', '2018-09-25T12:00:00', 1),
    ('!=', '2018-09-23T12:00:00', 1),
    ('!=', '2018-09-23T23:59:59', 1),
    ('!=', '2018-09-24T00:00:00', 0),
    ('!=', '2018-09-24T00:00:01', 0),
    ('!=', '2018-09-24T12:00:00', 0),
    ('!=', '2018-09-24T23:59:59', 0),
    ('!=', '2018-09-25T00:00:00', 1),
    ('!=', '2018-09-25T12:00:00', 1),
])
@mock.patch('jira_offline.sql_filter.IssueFilter.tz', new_callable=mock.PropertyMock)
def test_parse__primitive_date_special_case(mock_tz, mock_jira, timezone_project, operator, fixture, count):
    '''
    Test special-case datetime field ==,>,>=,<,<= to specific day date
    '''
    # Setup test fixture to target in the filter query
    with mock.patch.dict(ISSUE_1, {'created': fixture, 'key': 'FILT-1'}):
        mock_jira['FILT-1'] = Issue.deserialize(ISSUE_1, project=timezone_project)

    filt = IssueFilter()
    filt.set(f"created {operator} '2018-09-24'")

    # Set the timezone of the date in the passed query (default is local system time)
    mock_tz.return_value = timezone_project.timezone

    with mock.patch('jira_offline.jira.jira', mock_jira):
        df = filt.apply()

    assert len(df) == count


def test_parse__build_mask_caching(mock_jira, project):
    '''
    Ensure that _build_mask is not called repeatedly, as it can be expensive
    '''
    # Add single test fixture to the local Jira storage
    mock_jira['TEST-71'] = Issue.deserialize(ISSUE_1, project)

    filt = IssueFilter()
    filt.set("summary == 'This is a story or issue'")

    with mock.patch.object(IssueFilter, '_build_mask', wraps=filt._build_mask) as mock_build_mask:
        with mock.patch('jira_offline.jira.jira', mock_jira):
            filt.apply()
            filt.apply()
            filt.apply()

    assert mock_build_mask.call_count == 1
| 33.783231 | 128 | 0.618584 | 2,289 | 16,520 | 4.342071 | 0.084753 | 0.053929 | 0.049703 | 0.055338 | 0.798571 | 0.748767 | 0.736291 | 0.708019 | 0.693128 | 0.660429 | 0 | 0.089888 | 0.19661 | 16,520 | 488 | 129 | 33.852459 | 0.658981 | 0.124274 | 0 | 0.472669 | 0 | 0.009646 | 0.293 | 0.052198 | 0 | 0 | 0 | 0 | 0.086817 | 1 | 0.048232 | false | 0 | 0.019293 | 0 | 0.067524 | 0.067524 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
1229318639650e253625be91207985405526326f | 164 | py | Python | src/wishlist/forms.py | junaidq1/greendot | cd9e7791523317d759e0f5f9cf544deff34a8c79 | [
"MIT"
] | null | null | null | src/wishlist/forms.py | junaidq1/greendot | cd9e7791523317d759e0f5f9cf544deff34a8c79 | [
"MIT"
] | null | null | null | src/wishlist/forms.py | junaidq1/greendot | cd9e7791523317d759e0f5f9cf544deff34a8c79 | [
"MIT"
] | null | null | null |
from django import forms
from .models import Wishlist, Fvote
class Create_wishlist_item(forms.ModelForm):
class Meta:
model = Wishlist
fields = ["feature"]
| 16.4 | 44 | 0.756098 | 21 | 164 | 5.809524 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164634 | 164 | 9 | 45 | 18.222222 | 0.890511 | 0 | 0 | 0 | 0 | 0 | 0.042945 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
89c3e8e28a26d4d06c1efee2b825bdfc6f0c57d6 | 48 | py | Python | snippets_menu_magic/nb_register.py | diramazioni/snippets_menu_magic | e6d68fe0949b045df998015c649913877500703e | [
"Apache-2.0"
] | 1 | 2020-12-12T10:29:28.000Z | 2020-12-12T10:29:28.000Z | snippets_menu_magic/nb_register.py | diramazioni/snippets_menu_magic | e6d68fe0949b045df998015c649913877500703e | [
"Apache-2.0"
] | null | null | null | snippets_menu_magic/nb_register.py | diramazioni/snippets_menu_magic | e6d68fe0949b045df998015c649913877500703e | [
"Apache-2.0"
] | null | null | null | get_ipython().register_magics(SnippetsMenuMagic) | 48 | 48 | 0.895833 | 5 | 48 | 8.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 48 | 1 | 48 | 48 | 0.854167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
89d7867552c8222452f56ec14f04a184eec55b91 | 150 | py | Python | corehq/apps/cloudcare/exceptions.py | kkrampa/commcare-hq | d64d7cad98b240325ad669ccc7effb07721b4d44 | [
"BSD-3-Clause"
] | 1 | 2020-05-05T13:10:01.000Z | 2020-05-05T13:10:01.000Z | corehq/apps/cloudcare/exceptions.py | kkrampa/commcare-hq | d64d7cad98b240325ad669ccc7effb07721b4d44 | [
"BSD-3-Clause"
] | 1 | 2019-12-09T14:00:14.000Z | 2019-12-09T14:00:14.000Z | corehq/apps/cloudcare/exceptions.py | MaciejChoromanski/commcare-hq | fd7f65362d56d73b75a2c20d2afeabbc70876867 | [
"BSD-3-Clause"
] | 5 | 2015-11-30T13:12:45.000Z | 2019-07-01T19:27:07.000Z | from __future__ import unicode_literals
class RemoteAppError(Exception):
"""Exception raised when cloudcare attempts to display a remote app"""
| 25 | 74 | 0.793333 | 18 | 150 | 6.333333 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146667 | 150 | 5 | 75 | 30 | 0.890625 | 0.426667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
d63a1498940c476013764880ebfa013833645873 | 8,886 | py | Python | bindings/python/pymongoarrow/api.py | Claire-Eleutheriane/mongo-arrow | 4a054523a36379356aa709257756434c196ee71e | [
"Apache-2.0"
] | null | null | null | bindings/python/pymongoarrow/api.py | Claire-Eleutheriane/mongo-arrow | 4a054523a36379356aa709257756434c196ee71e | [
"Apache-2.0"
] | null | null | null | bindings/python/pymongoarrow/api.py | Claire-Eleutheriane/mongo-arrow | 4a054523a36379356aa709257756434c196ee71e | [
"Apache-2.0"
] | null | null | null | # Copyright 2021-present MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import warnings
from pymongoarrow.context import PyMongoArrowContext
from pymongoarrow.lib import process_bson_stream
from pymongoarrow.schema import Schema
__all__ = [
"aggregate_arrow_all",
"find_arrow_all",
"aggregate_pandas_all",
"find_pandas_all",
"aggregate_numpy_all",
"find_numpy_all",
"Schema",
]
_PATCH_METHODS = [
"aggregate_arrow_all",
"find_arrow_all",
"aggregate_pandas_all",
"find_pandas_all",
"aggregate_numpy_all",
"find_numpy_all",
]
def find_arrow_all(collection, query, *, schema, **kwargs):
"""Method that returns the results of a find query as a
:class:`pyarrow.Table` instance.
:Parameters:
- `collection`: Instance of :class:`~pymongo.collection.Collection`.
against which to run the ``find`` operation.
- `query`: A mapping containing the query to use for the find operation.
- `schema`: Instance of :class:`~pymongoarrow.schema.Schema`.
Additional keyword-arguments passed to this method will be passed
directly to the underlying ``find`` operation.
:Returns:
An instance of class:`pyarrow.Table`.
"""
context = PyMongoArrowContext.from_schema(schema, codec_options=collection.codec_options)
for opt in ("cursor_type",):
if kwargs.pop(opt, None):
warnings.warn(
f"Ignoring option {opt!r} as it is not supported by PyMongoArrow",
UserWarning,
stacklevel=2,
)
kwargs.setdefault("projection", schema._get_projection())
raw_batch_cursor = collection.find_raw_batches(query, **kwargs)
for batch in raw_batch_cursor:
process_bson_stream(batch, context)
return context.finish()
def aggregate_arrow_all(collection, pipeline, *, schema, **kwargs):
"""Method that returns the results of an aggregation pipeline as a
:class:`pyarrow.Table` instance.
:Parameters:
- `collection`: Instance of :class:`~pymongo.collection.Collection`.
against which to run the ``aggregate`` operation.
- `pipeline`: A list of aggregation pipeline stages.
- `schema`: Instance of :class:`~pymongoarrow.schema.Schema`.
Additional keyword-arguments passed to this method will be passed
directly to the underlying ``aggregate`` operation.
:Returns:
An instance of class:`pyarrow.Table`.
"""
context = PyMongoArrowContext.from_schema(schema, codec_options=collection.codec_options)
if pipeline and ("$out" in pipeline[-1] or "$merge" in pipeline[-1]):
raise ValueError(
"Aggregation pipelines containing a '$out' or '$merge' stage are "
"not supported by PyMongoArrow"
)
for opt in ("batchSize", "useCursor"):
if kwargs.pop(opt, None):
warnings.warn(
f"Ignoring option {opt!r} as it is not supported by PyMongoArrow",
UserWarning,
stacklevel=2,
)
pipeline.append({"$project": schema._get_projection()})
raw_batch_cursor = collection.aggregate_raw_batches(pipeline, **kwargs)
for batch in raw_batch_cursor:
process_bson_stream(batch, context)
return context.finish()
def _arrow_to_pandas(arrow_table):
"""Helper function that converts an Arrow Table to a Pandas DataFrame
while minimizing peak memory consumption during conversion. The memory
buffers backing the given Arrow Table are also destroyed after conversion.
See https://arrow.apache.org/docs/python/pandas.html#reducing-memory-use-in-table-to-pandas
for details.
"""
return arrow_table.to_pandas(split_blocks=True, self_destruct=True)
def find_pandas_all(collection, query, *, schema, **kwargs):
"""Method that returns the results of a find query as a
:class:`pandas.DataFrame` instance.
:Parameters:
- `collection`: Instance of :class:`~pymongo.collection.Collection`.
against which to run the ``find`` operation.
- `query`: A mapping containing the query to use for the find operation.
- `schema`: Instance of :class:`~pymongoarrow.schema.Schema`.
Additional keyword-arguments passed to this method will be passed
directly to the underlying ``find`` operation.
:Returns:
An instance of class:`pandas.DataFrame`.
"""
return _arrow_to_pandas(find_arrow_all(collection, query, schema=schema, **kwargs))
def aggregate_pandas_all(collection, pipeline, *, schema, **kwargs):
"""Method that returns the results of an aggregation pipeline as a
:class:`pandas.DataFrame` instance.
:Parameters:
- `collection`: Instance of :class:`~pymongo.collection.Collection`.
against which to run the ``find`` operation.
- `pipeline`: A list of aggregation pipeline stages.
- `schema`: Instance of :class:`~pymongoarrow.schema.Schema`.
Additional keyword-arguments passed to this method will be passed
directly to the underlying ``aggregate`` operation.
:Returns:
An instance of class:`pandas.DataFrame`.
"""
return _arrow_to_pandas(aggregate_arrow_all(collection, pipeline, schema=schema, **kwargs))
def _arrow_to_numpy(arrow_table, schema):
"""Helper function that converts an Arrow Table to a dictionary
containing NumPy arrays. The memory buffers backing the given Arrow Table
may be destroyed after conversion if the resulting Numpy array(s) is not a
view on the Arrow data.
See https://arrow.apache.org/docs/python/numpy.html for details.
"""
container = {}
for fname in schema:
container[fname] = arrow_table[fname].to_numpy()
return container
def find_numpy_all(collection, query, *, schema, **kwargs):
"""Method that returns the results of a find query as a
:class:`dict` instance whose keys are field names and values are
:class:`~numpy.ndarray` instances bearing the appropriate dtype.
:Parameters:
- `collection`: Instance of :class:`~pymongo.collection.Collection`.
against which to run the ``find`` operation.
- `query`: A mapping containing the query to use for the find operation.
- `schema`: Instance of :class:`~pymongoarrow.schema.Schema`.
Additional keyword-arguments passed to this method will be passed
directly to the underlying ``find`` operation.
This method attempts to create each NumPy array as a view on the Arrow
data corresponding to each field in the result set. When this is not
possible, the underlying data is copied into a new NumPy array. See
:meth:`pyarrow.Array.to_numpy` for more information.
NumPy arrays returned by this method that are views on Arrow data
are not writable. Users seeking to modify such arrays must first
create an editable copy using :meth:`numpy.copy`.
:Returns:
An instance of :class:`dict`.
"""
return _arrow_to_numpy(find_arrow_all(collection, query, schema=schema, **kwargs), schema)
def aggregate_numpy_all(collection, pipeline, *, schema, **kwargs):
"""Method that returns the results of an aggregation pipeline as a
:class:`dict` instance whose keys are field names and values are
:class:`~numpy.ndarray` instances bearing the appropriate dtype.
:Parameters:
- `collection`: Instance of :class:`~pymongo.collection.Collection`.
against which to run the ``find`` operation.
- `query`: A mapping containing the query to use for the find operation.
- `schema`: Instance of :class:`~pymongoarrow.schema.Schema`.
Additional keyword-arguments passed to this method will be passed
directly to the underlying ``aggregate`` operation.
This method attempts to create each NumPy array as a view on the Arrow
data corresponding to each field in the result set. When this is not
possible, the underlying data is copied into a new NumPy array. See
:meth:`pyarrow.Array.to_numpy` for more information.
NumPy arrays returned by this method that are views on Arrow data
are not writable. Users seeking to modify such arrays must first
create an editable copy using :meth:`numpy.copy`.
:Returns:
An instance of :class:`dict`.
"""
return _arrow_to_numpy(
aggregate_arrow_all(collection, pipeline, schema=schema, **kwargs), schema
)
| 37.652542 | 95 | 0.701666 | 1,151 | 8,886 | 5.324066 | 0.195482 | 0.029373 | 0.04406 | 0.02154 | 0.770725 | 0.770725 | 0.763871 | 0.739393 | 0.694027 | 0.680646 | 0 | 0.001704 | 0.207405 | 8,886 | 235 | 96 | 37.812766 | 0.86838 | 0.60871 | 0 | 0.422535 | 0 | 0 | 0.158136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112676 | false | 0 | 0.056338 | 0 | 0.28169 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
c399cbb3b416c47d45d06a5ad52fa519c1c5a698 | 3,353 | py | Python | src/generaterom.py | yantayga/atsc_verilog | 1fadd800fb0044a90b4739c1394d8466c87525e0 | [
"Unlicense"
] | null | null | null | src/generaterom.py | yantayga/atsc_verilog | 1fadd800fb0044a90b4739c1394d8466c87525e0 | [
"Unlicense"
] | null | null | null | src/generaterom.py | yantayga/atsc_verilog | 1fadd800fb0044a90b4739c1394d8466c87525e0 | [
"Unlicense"
] | null | null | null | import sys
from random import *
f = open('ccurom.hex', 'w')
noop = 0x0000
setupOpcode = 0x0001
readPC = 0x0002
writePC = 0x0004
incPC = 0x0008
readA = 0x0010
writeA = 0x0020
readB = 0x0040
writeB = 0x0080
negateB = 0x0100
readAluResult = 0x0200
writeDisplay = 0x0400
writeMemoryRegister = 0x0800
readMemory = 0x1000
writeMemory = 0x2000
halt = 0x4000
resetSteps = 0x8000
reserved = 0x0000
defaultFetch = [
readPC | writeMemoryRegister, # setup mempry address to PC
readMemory | setupOpcode | incPC, # setup opcode & increment PC
]
cmds = [
('NOOP', [noop,noop,noop,noop,noop,noop], []),
('LDA', [
readPC | writeMemoryRegister,
readMemory | writeA | incPC,
resetSteps,
noop,noop,noop], []),
('LDB', [
readPC | writeMemoryRegister,
readMemory | writeB | incPC,
resetSteps,
noop,noop,noop], []),
('ADD', [
readAluResult | writeA,
resetSteps,
noop,noop,noop,noop], []),
('SUB', [
negateB | readAluResult | writeA,
resetSteps,
noop,noop,noop,noop], []),
('STA', [
readPC | writeMemoryRegister,
readA | writeMemory | incPC,
resetSteps,
noop,noop,noop], []),
('STB', [
readPC | writeMemoryRegister,
readB | writeMemory | incPC,
resetSteps,
noop,noop,noop], []),
('JZ', [
readAluResult | readPC | writeMemoryRegister,
readMemory | writePC,
incPC,
resetSteps,
noop,noop], [noop,noop,noop,noop,noop,noop]),
('JNZ', [noop,noop,noop,noop,noop,noop], [
readPC | writeMemoryRegister,
readMemory | writePC,
incPC,
resetSteps,
noop,noop]),
('JMP', [
readPC | writeMemoryRegister,
readMemory | writePC,
incPC,
resetSteps,
noop,noop], []),
('RESERVED1', [resetSteps,reserved,reserved,reserved,reserved,reserved], []),
('RESERVED2', [resetSteps,reserved,reserved,reserved,reserved,reserved], []),
('RESERVED3', [resetSteps,reserved,reserved,reserved,reserved,reserved], []),
('OUTA', [
readA | writeDisplay,
resetSteps,
noop,noop,noop,noop], []),
('OUTB', [
readB | writeDisplay,
resetSteps,
noop,noop,noop,noop], []),
('HLT', [
halt,
noop,noop,noop,noop,noop], []),
]
cmdCounter = 0
for (cmd, codes1, codes2) in cmds:
    # z is the ALU zero flag; use the alternate microcode sequence only
    # when the flag is set and the command defines one.
    for z in range(2):
        print("Setting up", cmd, hex(z), "at", hex(cmdCounter))
        if z == 0 or codes2 == []:
            codes = codes1
        else:
            codes = codes2
        step = 0
        for code in defaultFetch + codes:
            print("Memory at", hex((z << 7) | (cmdCounter << 3) | step), "=", hex(code))
            f.write(hex(code)[2:])
            f.write("\n")
            step += 1
    cmdCounter += 1
f.close()
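The write loop places each microinstruction at address `(z << 7) | (cmdCounter << 3) | step`: the zero flag in bit 7, the 4-bit opcode in bits 6–3, and the 3-bit microstep in bits 2–0. A small sketch of that layout (hypothetical helper names):

```python
def rom_address(zero_flag, opcode, step):
    """Compose a microcode ROM address: 1-bit flag, 4-bit opcode, 3-bit step."""
    assert zero_flag in (0, 1) and 0 <= opcode < 16 and 0 <= step < 8
    return (zero_flag << 7) | (opcode << 3) | step

def decode_address(addr):
    """Split a ROM address back into (zero_flag, opcode, step)."""
    return (addr >> 7) & 0x1, (addr >> 3) & 0xF, addr & 0x7
```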
| 28.176471 | 89 | 0.507903 | 300 | 3,353 | 5.676667 | 0.32 | 0.206694 | 0.211392 | 0.169113 | 0.514974 | 0.471521 | 0.280094 | 0.196712 | 0.082208 | 0.082208 | 0 | 0.051484 | 0.356994 | 3,353 | 118 | 90 | 28.415254 | 0.738404 | 0.016105 | 0 | 0.378378 | 0 | 0 | 0.036104 | 0 | 0 | 0 | 0.032767 | 0 | 0 | 0 | null | null | 0.018018 | 0.018018 | null | null | 0.027027 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
c3a82bfb283465662123ddede54a13bf3b1e2157 | 51 | py | Python | svox2/version.py | QiukuZ/svox2 | 6b4c3b0437da9a273f5d2eff5212daaf88c5c025 | [
"BSD-2-Clause"
] | 1,724 | 2021-12-10T02:02:54.000Z | 2022-03-31T13:41:17.000Z | svox2/version.py | ccxiaotoancai/svox2 | 59984d6c4fd3d713353bafdcb011646e64647cc7 | [
"BSD-2-Clause"
] | 67 | 2021-12-10T04:44:48.000Z | 2022-03-30T13:25:06.000Z | svox2/version.py | ccxiaotoancai/svox2 | 59984d6c4fd3d713353bafdcb011646e64647cc7 | [
"BSD-2-Clause"
] | 228 | 2021-12-10T04:21:37.000Z | 2022-03-29T23:44:58.000Z | __version__ = '0.0.1.dev0+sphtexcub.lincolor.fast'
| 25.5 | 50 | 0.764706 | 8 | 51 | 4.375 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.058824 | 51 | 1 | 51 | 51 | 0.645833 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
c3b5609944febde28a6bcbb5087281f2de1b7a9f | 385 | py | Python | pycmp/ast/node.py | aeroshev/CMP | f4366972dfd752833094920728e4ce11ee58feae | [
"MIT"
] | null | null | null | pycmp/ast/node.py | aeroshev/CMP | f4366972dfd752833094920728e4ce11ee58feae | [
"MIT"
] | null | null | null | pycmp/ast/node.py | aeroshev/CMP | f4366972dfd752833094920728e4ce11ee58feae | [
"MIT"
] | null | null | null | from abc import ABC
from typing import Iterator
class Node(ABC):
    """Base node for AST"""
    __slots__ = ()

    def __iter__(self) -> Iterator['Node']:
        yield self

    def __len__(self) -> int:
        return len(self.__slots__)

    def __str__(self) -> str:
        return self.__class__.__name__

    def __repr__(self) -> str:
        return self.__class__.__name__
| 19.25 | 43 | 0.620779 | 47 | 385 | 4.234043 | 0.446809 | 0.080402 | 0.130653 | 0.170854 | 0.261307 | 0.261307 | 0 | 0 | 0 | 0 | 0 | 0 | 0.267532 | 385 | 19 | 44 | 20.263158 | 0.705674 | 0.044156 | 0 | 0.166667 | 0 | 0 | 0.01105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.25 | 0.916667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 4 |
c3cb75aa222824e76bd4dd04d7b88b5d4186bfd1 | 118 | py | Python | comments.py | kmarcini/Learn-Python---Full-Course-for-Beginners-Tutorial- | 8ea4ef004d86fdf393980fd356edcf5b769bfeac | [
"BSD-3-Clause"
] | null | null | null | comments.py | kmarcini/Learn-Python---Full-Course-for-Beginners-Tutorial- | 8ea4ef004d86fdf393980fd356edcf5b769bfeac | [
"BSD-3-Clause"
] | null | null | null | comments.py | kmarcini/Learn-Python---Full-Course-for-Beginners-Tutorial- | 8ea4ef004d86fdf393980fd356edcf5b769bfeac | [
"BSD-3-Clause"
] | null | null | null |
'''
This is a
multi-line comment
'''
# Single line comment
print("Comments are fun!")
# print("Not going to run")
| 9.833333 | 27 | 0.652542 | 18 | 118 | 4.277778 | 0.833333 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194915 | 118 | 11 | 28 | 10.727273 | 0.810526 | 0.635593 | 0 | 0 | 0 | 0 | 0.515152 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
c3eceed2f11cf54ae4736000cd8ab5f0b2cd08d3 | 1,068 | py | Python | tensorflow/python/saved_model/registration/test_util.py | EricRemmerswaal/tensorflow | 141ff27877579c81a213fa113bd1b474c1749aca | [
"Apache-2.0"
] | 7 | 2022-03-04T21:14:47.000Z | 2022-03-22T23:07:39.000Z | tensorflow/python/saved_model/registration/test_util.py | EricRemmerswaal/tensorflow | 141ff27877579c81a213fa113bd1b474c1749aca | [
"Apache-2.0"
] | 3 | 2022-02-06T00:10:55.000Z | 2022-02-06T00:10:55.000Z | tensorflow/python/saved_model/registration/test_util.py | EricRemmerswaal/tensorflow | 141ff27877579c81a213fa113bd1b474c1749aca | [
"Apache-2.0"
] | 1 | 2021-11-21T02:32:27.000Z | 2021-11-21T02:32:27.000Z | # Copyright 2022 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utils for testing registered objects."""
from tensorflow.python.saved_model.registration import registration as registration_lib
# pylint: disable=protected-access
def get_all_registered_serializables():
  return registration_lib._class_registry.get_registrations()


def get_all_registered_checkpoint_savers():
  return registration_lib._saver_registry.get_registrations()
| 41.076923 | 87 | 0.736891 | 139 | 1,068 | 5.539568 | 0.654676 | 0.077922 | 0.033766 | 0.041558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008574 | 0.126404 | 1,068 | 25 | 88 | 42.72 | 0.81672 | 0.685393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | true | 0 | 0.2 | 0.4 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 4 |
c3faf82b7b773e9b97e7d64043ebba1bb608b655 | 71 | py | Python | sub_ln/bitcoin/__init__.py | willcl-ark/go-sat-sub | 3a2eee93b7171ddc94e759edaac41756f30f0b41 | [
"MIT"
] | null | null | null | sub_ln/bitcoin/__init__.py | willcl-ark/go-sat-sub | 3a2eee93b7171ddc94e759edaac41756f30f0b41 | [
"MIT"
] | null | null | null | sub_ln/bitcoin/__init__.py | willcl-ark/go-sat-sub | 3a2eee93b7171ddc94e759edaac41756f30f0b41 | [
"MIT"
] | 2 | 2019-07-22T12:26:13.000Z | 2019-08-03T10:21:57.000Z | from sub_ln.bitcoin.authproxy import AuthServiceProxy, JSONRPCException | 71 | 71 | 0.901408 | 8 | 71 | 7.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056338 | 71 | 1 | 71 | 71 | 0.940299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
615a27297c3afcd15fcf88ade45289295110c285 | 1,714 | py | Python | PackageTests/knowledge/test_Instances.py | Kieran-Bacon/InfoGain | 621ccd111d474f96f0ba19a8972821becea0c5db | [
"Apache-2.0"
] | 1 | 2019-10-14T00:49:04.000Z | 2019-10-14T00:49:04.000Z | PackageTests/knowledge/test_Instances.py | Kieran-Bacon/InfoGain | 621ccd111d474f96f0ba19a8972821becea0c5db | [
"Apache-2.0"
] | 2 | 2018-06-12T12:46:35.000Z | 2019-02-22T10:52:15.000Z | PackageTests/knowledge/test_Instances.py | Kieran-Bacon/InfoGain | 621ccd111d474f96f0ba19a8972821becea0c5db | [
"Apache-2.0"
] | null | null | null | import unittest
from infogain.knowledge import Instance, Concept
class Test_Instance(unittest.TestCase):

    def test_function_behaviour(self):

        def concatenate(a, b):
            return a + b

        example = Instance("x")
        example.concatenate = concatenate

        self.assertEqual(example.concatenate("hello", "there"), "hellothere")
        self.assertEqual(example.concatenate(1, 2), 3)

    def test_property_behaviour(self):
        example = Instance("x", properties={"prop": "value"})

        self.assertEqual(example.prop, "value")
        self.assertIsNone(example.something)

    def test_equality_string(self):
        self.assertTrue("England" == Instance("England"))
        self.assertFalse("England" == Instance("England", "uuid"))
        self.assertTrue("England" == Instance("Country", "England"))

    def test_equality_concept(self):
        england = Concept("England")

        self.assertTrue(england == Instance("England"))
        self.assertTrue(england == Instance("England", "uuid"))
        self.assertFalse(england == Instance("Country", "England"))

    def test_equality_instance(self):
        england = Instance("England")

        self.assertTrue(england == Instance("England"))
        self.assertFalse(england == Instance("England", "uuid"))
        self.assertFalse(england == Instance("Country", "England"))

        england = Instance("Country", "England")

        self.assertFalse(england == Instance("England"))
        self.assertFalse(england == Instance("England", "uuid"))
        self.assertTrue(england == Instance("Country", "England"))
        self.assertFalse(england == Instance("SomethingElse", "England"))
| 31.740741 | 77 | 0.638273 | 164 | 1,714 | 6.603659 | 0.25 | 0.207756 | 0.182825 | 0.193906 | 0.554017 | 0.554017 | 0.554017 | 0.506925 | 0.385042 | 0.385042 | 0 | 0.002235 | 0.217036 | 1,714 | 53 | 78 | 32.339623 | 0.804769 | 0 | 0 | 0.181818 | 0 | 0 | 0.136019 | 0 | 0 | 0 | 0 | 0 | 0.515152 | 1 | 0.181818 | false | 0 | 0.060606 | 0.030303 | 0.30303 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
61785849123efba7d795cd115a22978cfa9db8bd | 91 | py | Python | backend/src/__init__.py | saiamrut/job-search | 8f1c1fff4604e1aec9aa06a7593b5e8e95a27d12 | [
"MIT"
] | null | null | null | backend/src/__init__.py | saiamrut/job-search | 8f1c1fff4604e1aec9aa06a7593b5e8e95a27d12 | [
"MIT"
] | null | null | null | backend/src/__init__.py | saiamrut/job-search | 8f1c1fff4604e1aec9aa06a7593b5e8e95a27d12 | [
"MIT"
] | null | null | null | import sys
import os
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
print(ROOT_DIR) | 18.2 | 53 | 0.791209 | 16 | 91 | 4.125 | 0.625 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087912 | 91 | 5 | 54 | 18.2 | 0.795181 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.25 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 4 |
617e88e3a677af219cdf91115d59d10e0169a40a | 1,713 | py | Python | run_me.py | raiyansarker/SpeedTools | 3453f13daea6a01f9332b81876aaf53ba39c178e | [
"MIT"
] | 1 | 2020-05-26T02:58:07.000Z | 2020-05-26T02:58:07.000Z | run_me.py | raiyansarker/SpeedTools | 3453f13daea6a01f9332b81876aaf53ba39c178e | [
"MIT"
] | null | null | null | run_me.py | raiyansarker/SpeedTools | 3453f13daea6a01f9332b81876aaf53ba39c178e | [
"MIT"
] | null | null | null | from time import sleep
import sys
import time
from answer import a, t, d, u, v
# Logo
print("███████ ██████ ███████ ███████ ██████ ████████ ██████ ██████ ██ ███████ ")
print("██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ")
print("███████ ██████ █████ █████ ██ ██ ██ ██ ██ ██ ██ ██ ███████ ")
print(" ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ")
print("███████ ██ ███████ ███████ ██████ ██ ██████ ██████ ███████ ███████ ")
print(".......................................................................................")
# Welcome message
message = "\033[1;31;49mThe tool is starting!.............." + "\n"
for char in message:
    sleep(0.2)
    sys.stdout.write(char)
    sys.stdout.flush()
# Array to get desired query
options = ["(1) Acceleration", "(2) Time", "(3) Distance", "(4) First Momentum", "(5) Last Momentum"]
for x in options:
    time.sleep(0.3)
    print("\033[1;34;49m" + x)
# Break
print("\n")
try:
    # Query
    query = float(input("\033[1;31;49mWhat you want to know? - "))
    # Get the program
    if query == 1:
        print("\033[1;34;49mThe acceleration is " + str(a()) + "m/s\u00b2")
    elif query == 2:
        print("\033[1;34;49mThe time is " + str(t()) + "s")
    elif query == 3:
        print("\033[1;34;49mThe distance is " + str(d()) + "m")
    elif query == 4:
        print("\033[1;34;49mThe first momentum is " + str(u()) + "m/s")
    elif query == 5:
        print("\033[1;34;49mThe last momentum is " + str(v()) + "m/s")
    else:
        print("Something went wrong")
except ValueError:
    print("Select a number")
| 34.26 | 101 | 0.395213 | 241 | 1,713 | 3.717842 | 0.323651 | 0.142857 | 0.194196 | 0.232143 | 0.243304 | 0.133929 | 0.133929 | 0.133929 | 0.118304 | 0.118304 | 0 | 0.066176 | 0.285464 | 1,713 | 49 | 102 | 34.959184 | 0.486928 | 0.043783 | 0 | 0 | 0 | 0 | 0.554261 | 0.067443 | 0.027778 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.416667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4 |
61afde042e8e2eac29bf135204449bc464b7dc74 | 1,389 | py | Python | 2-2_parallel_processing.py | tsh/edx_algs201x_data_structures_fundamentals | 7d15c9fa7f5a2232812c13854d54934321211977 | [
"Apache-2.0"
] | null | null | null | 2-2_parallel_processing.py | tsh/edx_algs201x_data_structures_fundamentals | 7d15c9fa7f5a2232812c13854d54934321211977 | [
"Apache-2.0"
] | null | null | null | 2-2_parallel_processing.py | tsh/edx_algs201x_data_structures_fundamentals | 7d15c9fa7f5a2232812c13854d54934321211977 | [
"Apache-2.0"
] | null | null | null | from heapq import heappop, heappush


def main(threads, tasks):
    # Min-heap of (next free time, thread id); ties break on lowest thread id.
    heap = [(0, thread) for thread in range(threads)]
    res = []
    for task in tasks:
        time, thread = heappop(heap)
        res.append(str(thread))
        res.append(str(time))
        heappush(heap, (time + task, thread))
    print(' '.join(res))


if __name__ == '__main__':
    # main(2, [1, 2, 3, 4, 5])
    main(10, map(int, '124860658 388437511 753484620 349021732 311346104 235543106 665655446 28787989 706718118 409836312 217716719 757274700 609723717 880970735 972393187 246159983 318988174 209495228 854708169 945600937 773832664 587887000 531713892 734781348 603087775 148283412 195634719 968633747 697254794 304163856 554172907 197744495 261204530 641309055 773073192 463418708 59676768 16042361 210106931 901997880 220470855 647104348 163515452 27308711 836338869 505101921 397086591 126041010 704685424 48832532 944295743 840261083 407178084 723373230 242749954 62738878 445028313 734727516 370425459 607137327 541789278 281002380 548695538 651178045 638430458 981678371 648753077 417312222 446493640 201544143 293197772 298610124 31821879 46071794 509690783 183827382 867731980 524516363 376504571 748818121 36366377 404131214 128632009 535716196 470711551 19833703 516847878 422344417 453049973 58419678 175133498 967886806 49897195 188342011 272087192 798530288 210486166 836411405 909200386 561566778'.split()))
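The same scheduling logic can also be written as a pure function that returns `(thread, start_time)` pairs instead of printing them — a sketch with assumed names:

```python
import heapq

def assign_jobs(n_workers, jobs):
    """Greedy scheduling: each job goes to the worker that frees up first,
    lowest worker id breaking ties (the heap orders by (time, id))."""
    heap = [(0, worker) for worker in range(n_workers)]
    result = []
    for job in jobs:
        free_at, worker = heapq.heappop(heap)
        result.append((worker, free_at))
        heapq.heappush(heap, (free_at + job, worker))
    return result
```

`assign_jobs(2, [1, 2, 3, 4, 5])` returns `[(0, 0), (1, 0), (0, 1), (1, 2), (0, 4)]`, matching the commented-out `main(2, [1, 2, 3, 4, 5])` demo.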
61b8d48b2a4beee9166ee2f19d1eca60abf947b7 | 1,019 | py | Python | axis/axis.py | calebsanfo/axis | 46c9700fffaa7a85ab742ff0a64e052121b203d5 | [
"Apache-2.0"
] | null | null | null | axis/axis.py | calebsanfo/axis | 46c9700fffaa7a85ab742ff0a64e052121b203d5 | [
"Apache-2.0"
] | null | null | null | axis/axis.py | calebsanfo/axis | 46c9700fffaa7a85ab742ff0a64e052121b203d5 | [
"Apache-2.0"
] | null | null | null | import serial
class Axis:
    def __init__(self, COM_port):
        self.connection = serial.Serial(COM_port, 9600, timeout=5)

    def get_x_pos(self):
        message = "get x".encode()
        self.connection.write(message)
        return float(self.connection.readline())

    def get_y_pos(self):
        message = "get y".encode()
        self.connection.write(message)
        return float(self.connection.readline())

    def get_z_pos(self):
        message = "get z".encode()
        self.connection.write(message)
        return float(self.connection.readline())

    def set_xy_pos(self, x, y):
        self.connection.write(("moveXY " + str(x) + " " + str(y)).encode())
        return self.connection.readline()

    def set_z_pos(self, z):
        self.connection.write(("moveZ " + str(z)).encode())
        return self.connection.readline()

    def get_analog_input(self):
        self.connection.write("analogread".encode())
        return float(self.connection.readline())
f613e522ec970cae855fe92daf909cc1d697bbd9 | 364 | py | Python | applications/iLife/models/iLife.py | manohar899/iLife | cf193686fd0fad810fa56f720e872fa6d6c2baa1 | [
"BSD-3-Clause"
] | null | null | null | applications/iLife/models/iLife.py | manohar899/iLife | cf193686fd0fad810fa56f720e872fa6d6c2baa1 | [
"BSD-3-Clause"
] | null | null | null | applications/iLife/models/iLife.py | manohar899/iLife | cf193686fd0fad810fa56f720e872fa6d6c2baa1 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
db.define_table('Journal_Events', Field('Title'), Field('Description', 'text'), Field('Reminder', 'datetime'), Field('upload', 'upload'), Field('mail_id'), Field('status'), auth.signature)
db.define_table('Tag', Field('tagged_by', 'reference auth_user'), Field('tagged', 'reference auth_user'), Field('post', 'reference Journal_Events'), auth.signature)
| 91 | 181 | 0.728022 | 48 | 364 | 5.354167 | 0.541667 | 0.062257 | 0.101167 | 0.171206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002849 | 0.035714 | 364 | 3 | 182 | 121.333333 | 0.729345 | 0.057692 | 0 | 0 | 0 | 0 | 0.466276 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
f62cf81bd23ea2f65d8f04d9b8c6cf836306d58b | 100 | py | Python | hubcare/metrics/issue_metrics/activity_rate/apps.py | aleronupe/2019.1-hubcare-api | 3f031eac9559a10fdcf70a88ee4c548cf93e4ac2 | [
"MIT"
] | 7 | 2019-03-31T17:58:45.000Z | 2020-02-29T22:44:27.000Z | hubcare/metrics/issue_metrics/activity_rate/apps.py | aleronupe/2019.1-hubcare-api | 3f031eac9559a10fdcf70a88ee4c548cf93e4ac2 | [
"MIT"
] | 90 | 2019-03-26T01:14:54.000Z | 2021-06-10T21:30:25.000Z | hubcare/metrics/issue_metrics/activity_rate/apps.py | aleronupe/2019.1-hubcare-api | 3f031eac9559a10fdcf70a88ee4c548cf93e4ac2 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class ActivityRateConfig(AppConfig):
    name = 'activity_rate'
| 16.666667 | 36 | 0.78 | 11 | 100 | 7 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 100 | 5 | 37 | 20 | 0.905882 | 0 | 0 | 0 | 0 | 0 | 0.13 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 4 |
f654f851a76a43ec16cfb632c1f3cbe72aa60189 | 153 | py | Python | apps/resource/forms.py | vishalpandeyvip/GURU-LMS | bc566e7cd390d5b76c0cf6a72f4b686df1938e36 | [
"Apache-2.0"
] | null | null | null | apps/resource/forms.py | vishalpandeyvip/GURU-LMS | bc566e7cd390d5b76c0cf6a72f4b686df1938e36 | [
"Apache-2.0"
] | null | null | null | apps/resource/forms.py | vishalpandeyvip/GURU-LMS | bc566e7cd390d5b76c0cf6a72f4b686df1938e36 | [
"Apache-2.0"
] | null | null | null | from django import forms
from .models import Note
class NoteForm(forms.ModelForm):
    class Meta:
        model = Note
        fields = ['topic', 'file', 'description']