hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
05cdb2e16a99a1eeedf00411a80492c0745f8ef3 | 59 | py | Python | projects/PoseNet/posenet/checkpoint/__init__.py | thanhhvnqb/detectron2 | 53a964ec53c8bf2e87e73ec1d086dad8d2993b4e | [
"Apache-2.0"
] | null | null | null | projects/PoseNet/posenet/checkpoint/__init__.py | thanhhvnqb/detectron2 | 53a964ec53c8bf2e87e73ec1d086dad8d2993b4e | [
"Apache-2.0"
] | null | null | null | projects/PoseNet/posenet/checkpoint/__init__.py | thanhhvnqb/detectron2 | 53a964ec53c8bf2e87e73ec1d086dad8d2993b4e | [
"Apache-2.0"
] | null | null | null | from .detection_checkpoint import DevDetectionCheckpointer
| 29.5 | 58 | 0.915254 | 5 | 59 | 10.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 59 | 1 | 59 | 59 | 0.963636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
05fcb2b65c26412edd94a67598f4ed28018c26aa | 192 | py | Python | docs/make.py | fx-kirin/tendo | 38184cdded6bd5982d11a38cd4a570847328f76f | [
"Python-2.0",
"PSF-2.0"
] | 115 | 2015-01-07T16:54:22.000Z | 2022-03-27T23:40:25.000Z | docs/make.py | ssbarnea/tendo | 9b3bb56628ce6da74ffe1895d4bce67cfda06ce9 | [
"Python-2.0",
"PSF-2.0"
] | 41 | 2015-02-03T18:25:27.000Z | 2022-02-01T12:28:27.000Z | docs/make.py | ssbarnea/tendo | 9b3bb56628ce6da74ffe1895d4bce67cfda06ce9 | [
"Python-2.0",
"PSF-2.0"
] | 51 | 2015-06-05T06:50:30.000Z | 2022-03-28T21:35:24.000Z | #!/usr/bin/env python
import os
if 'PYTHONPATH' in os.environ:
os.environ['PYTHONPATH'] = "..:" + os.environ['PYTHONPATH']
else:
os.environ['PYTHONPATH'] = ".."
os.system("make html")
| 24 | 63 | 0.640625 | 25 | 192 | 4.92 | 0.56 | 0.292683 | 0.463415 | 0.341463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140625 | 192 | 7 | 64 | 27.428571 | 0.745455 | 0.104167 | 0 | 0 | 0 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
afad23fe64b237add03b3b140b8956541873b307 | 1,308 | py | Python | example/demo_python/ProjectConsts.py | MicroTCA-Tech-Lab/libudmaio | c7fcc77fd2cf9990c77d4e0bde59c650bf71aa0d | [
"BSD-3-Clause"
] | 2 | 2022-02-11T12:34:12.000Z | 2022-02-11T12:35:58.000Z | example/demo_python/ProjectConsts.py | MicroTCA-Tech-Lab/libudmaio | c7fcc77fd2cf9990c77d4e0bde59c650bf71aa0d | [
"BSD-3-Clause"
] | null | null | null | example/demo_python/ProjectConsts.py | MicroTCA-Tech-Lab/libudmaio | c7fcc77fd2cf9990c77d4e0bde59c650bf71aa0d | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2021 Deutsches Elektronen-Synchrotron DESY
from pyudmaio import UioDeviceLocation, UioRegion
class ZupExampleConsts(object):
AXI_GPIO_STATUS = UioDeviceLocation(
'axi_gpio_status', UioRegion(0x00801000, 4 * 1024)
)
AXI_DMA_0 = UioDeviceLocation(
'hier_daq_arm_axi_dma_0', UioRegion(0x00910000, 4 * 1024), 'events0'
)
BRAM_CTRL_0 = UioDeviceLocation(
'hier_daq_arm_axi_bram_ctrl_0', UioRegion(0x00920000, 8 * 1024)
)
AXI_TRAFFIC_GEN_0 = UioDeviceLocation(
'hier_daq_arm_axi_traffic_gen_0', UioRegion(0x00890000, 64 * 1024)
)
FPGA_MEM_PHYS_ADDR = 0x500000000
PCIE_AXI4L_OFFSET = 0x88000000
LFSR_BYTES_PER_BEAT = 16
class Z7ioExampleConsts(object):
AXI_GPIO_STATUS = UioDeviceLocation(
'axi_gpio_status', UioRegion(0x00801000, 4 * 1024)
)
AXI_DMA_0 = UioDeviceLocation(
'hier_daq_arm_axi_dma_0', UioRegion(0x00910000, 4 * 1024), 'events0'
)
BRAM_CTRL_0 = UioDeviceLocation(
'hier_daq_arm_axi_bram_ctrl_0', UioRegion(0x00920000, 8 * 1024)
)
AXI_TRAFFIC_GEN_0 = UioDeviceLocation(
'hier_daq_arm_axi_traffic_gen_0', UioRegion(0x00890000, 64 * 1024)
)
FPGA_MEM_PHYS_ADDR = 0x03f100000
PCIE_AXI4L_OFFSET = 0x44000000
LFSR_BYTES_PER_BEAT = 8
| 31.142857 | 76 | 0.717125 | 160 | 1,308 | 5.4125 | 0.33125 | 0.124711 | 0.152425 | 0.17321 | 0.720554 | 0.720554 | 0.720554 | 0.720554 | 0.720554 | 0.720554 | 0 | 0.168108 | 0.204128 | 1,308 | 41 | 77 | 31.902439 | 0.663785 | 0.042813 | 0 | 0.484848 | 0 | 0 | 0.1632 | 0.128 | 0 | 0 | 0.0976 | 0 | 0 | 1 | 0 | false | 0 | 0.030303 | 0 | 0.515152 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
afc34835f21deebd4d25c2d846e4e53e31eb0bb1 | 312 | py | Python | database/models.py | wilsonpe66/server-backend | 16665d810fe1829f5dacc67f396b7cecf5af042f | [
"BSD-3-Clause"
] | 1 | 2019-09-26T04:00:55.000Z | 2019-09-26T04:00:55.000Z | database/models.py | wilsonpe66/server-backend | 16665d810fe1829f5dacc67f396b7cecf5af042f | [
"BSD-3-Clause"
] | 2 | 2020-06-05T21:58:55.000Z | 2021-06-10T21:45:08.000Z | database/models.py | wilsonpe66/server-backend | 16665d810fe1829f5dacc67f396b7cecf5af042f | [
"BSD-3-Clause"
] | 1 | 2019-09-26T03:55:06.000Z | 2019-09-26T03:55:06.000Z | from database.data_access_models import ChunkRegistry, FileToProcess, FileProcessLock
from database.profiling_models import DecryptionKeyError, EncryptionErrorMetadata, LineEncryptionError, UploadTracking
from database.study_models import DeviceSettings, Participant, Researcher, Study, Survey, SurveyArchive
| 78 | 119 | 0.878205 | 29 | 312 | 9.310345 | 0.689655 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080128 | 312 | 3 | 120 | 104 | 0.940767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bb947568bb9af729557762990cec2bb88ed80da1 | 145 | py | Python | application/routes.py | LandRegistry/lc-alpha-document-api | b0f48a4181d1535956aa32c4037887d8e51245cc | [
"MIT"
] | null | null | null | application/routes.py | LandRegistry/lc-alpha-document-api | b0f48a4181d1535956aa32c4037887d8e51245cc | [
"MIT"
] | null | null | null | application/routes.py | LandRegistry/lc-alpha-document-api | b0f48a4181d1535956aa32c4037887d8e51245cc | [
"MIT"
] | 1 | 2021-04-11T06:04:50.000Z | 2021-04-11T06:04:50.000Z | from application import app
from flask import Response, request
@app.route('/', methods=["GET"])
def index():
return Response(status=200)
| 16.111111 | 35 | 0.710345 | 19 | 145 | 5.421053 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.151724 | 145 | 8 | 36 | 18.125 | 0.813008 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
bbbfcce06c690cfb366f8767b258b06602756308 | 175 | py | Python | xlayers/__init__.py | cspencerjones/xlayers | dc61e8b9189c2933f38547fd2cf77210bfd7d35c | [
"MIT"
] | 11 | 2019-10-16T17:27:32.000Z | 2021-07-14T18:47:52.000Z | xlayers/__init__.py | cspencerjones/xlayers | dc61e8b9189c2933f38547fd2cf77210bfd7d35c | [
"MIT"
] | 11 | 2019-10-27T14:18:06.000Z | 2020-10-30T14:39:57.000Z | xlayers/__init__.py | cspencerjones/xlayers | dc61e8b9189c2933f38547fd2cf77210bfd7d35c | [
"MIT"
] | 5 | 2019-10-26T14:02:36.000Z | 2020-07-13T04:45:52.000Z | from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
__all__ = ['core']
from .core import layers_numpy
from .core import layers_xarray
| 21.875 | 39 | 0.794286 | 24 | 175 | 5.208333 | 0.458333 | 0.264 | 0.288 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 175 | 7 | 40 | 25 | 0.811688 | 0 | 0 | 0 | 0 | 0 | 0.062857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3c3888d52cf4534b673f928c113342e994c01b16 | 164 | py | Python | pytest_asyncio_network_simulator/__init__.py | vaporyproject/pytest-asyncio-network-simulator | 7a7ee136c8e47cde751c1a5af9739b1099810608 | [
"MIT"
] | null | null | null | pytest_asyncio_network_simulator/__init__.py | vaporyproject/pytest-asyncio-network-simulator | 7a7ee136c8e47cde751c1a5af9739b1099810608 | [
"MIT"
] | null | null | null | pytest_asyncio_network_simulator/__init__.py | vaporyproject/pytest-asyncio-network-simulator | 7a7ee136c8e47cde751c1a5af9739b1099810608 | [
"MIT"
] | null | null | null | from .address import Address # noqa: F401
from .host import Host # noqa: F401
from .network import Network # noqa: F401
from .router import Router # noqa: F401
| 32.8 | 42 | 0.731707 | 24 | 164 | 5 | 0.333333 | 0.266667 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.195122 | 164 | 4 | 43 | 41 | 0.818182 | 0.262195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3c9ddc335b9aba043ad4ab2edee4f69776bb799e | 68 | py | Python | test-requirements_type-example/main.py | XingangShi/exampe_pip_package | be82707f56c41dbc89316b832263336a964c4338 | [
"MIT"
] | null | null | null | test-requirements_type-example/main.py | XingangShi/exampe_pip_package | be82707f56c41dbc89316b832263336a964c4338 | [
"MIT"
] | null | null | null | test-requirements_type-example/main.py | XingangShi/exampe_pip_package | be82707f56c41dbc89316b832263336a964c4338 | [
"MIT"
] | null | null | null | import exampe_pip_package
exampe_pip_package.pip_test().get_info()
| 17 | 40 | 0.852941 | 11 | 68 | 4.727273 | 0.636364 | 0.346154 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 68 | 3 | 41 | 22.666667 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b1b9bbfbcc060d0c09a68d8fda3651d79ba4c385 | 156 | py | Python | app/error.py | puzzle9/FaceApi | 9a19babf1759a637261b1ad7d9c35ec630679527 | [
"MIT"
] | null | null | null | app/error.py | puzzle9/FaceApi | 9a19babf1759a637261b1ad7d9c35ec630679527 | [
"MIT"
] | null | null | null | app/error.py | puzzle9/FaceApi | 9a19babf1759a637261b1ad7d9c35ec630679527 | [
"MIT"
] | null | null | null | from app.response import error
def register_errors(app):
@app.errorhandler(422)
def errorhandler_422(err):
return error(err.description)
| 17.333333 | 37 | 0.717949 | 20 | 156 | 5.5 | 0.65 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.192308 | 156 | 8 | 38 | 19.5 | 0.825397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b1ca8879bf97ca70eb546bc4ec5d8d2e4da5f5e9 | 1,045 | py | Python | seg1d/examples/ex_segment_class.py | cadop/seg1d | f5de3949da1fee71110435c5700c4774a03fedb1 | [
"MIT"
] | 9 | 2020-05-12T18:15:40.000Z | 2021-09-24T13:11:21.000Z | seg1d/examples/ex_segment_class.py | cadop/seg1d | f5de3949da1fee71110435c5700c4774a03fedb1 | [
"MIT"
] | 8 | 2020-06-29T19:44:36.000Z | 2020-08-11T13:38:03.000Z | seg1d/examples/ex_segment_class.py | cadop/seg1d | f5de3949da1fee71110435c5700c4774a03fedb1 | [
"MIT"
] | 4 | 2020-08-11T10:59:20.000Z | 2021-03-29T17:53:04.000Z | '''
>>> import numpy as np
>>> import seg1d
>>> #retrieve the sample reference, target, and weight data
>>> r,t,w = seg1d.sampleData()
>>> # define some test parameters
>>> minW = 70 #minimum percent to scale down reference data
>>> maxW = 150 #maximum percent to scale up reference data
>>> step = 1 #step to use for correlating reference to target data
>>> #call the segmentation algorithm
>>> np.around(seg1d.segment_data(r,t,w,minW,maxW,step), 5)
array([[207. , 240. , 0.91242],
[342. , 381. , 0.88019],
[ 72. , 112. , 0.87768]])
'''
import seg1d
#retrieve the sample reference, target, and weight data
r,t,w = seg1d.sampleData()
### define some test parameters
minW = 70 #minimum percent to scale down reference data
maxW = 150 #maximum percent to scale up reference data
step = 1 #step to use for correlating reference to target data
#call the segmentation algorithm
segments = seg1d.segment_data(r,t,w,minW,maxW,step)
print(segments) | 33.709677 | 70 | 0.644019 | 145 | 1,045 | 4.627586 | 0.386207 | 0.029806 | 0.035768 | 0.041729 | 0.873323 | 0.873323 | 0.873323 | 0.873323 | 0.873323 | 0.780924 | 0 | 0.067925 | 0.239234 | 1,045 | 31 | 71 | 33.709677 | 0.776101 | 0.790431 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
590ce1942076e23e8458ea8ea919ec261860bfb3 | 128 | py | Python | tools/cli_lint.py | grbd/GBD.NetCore.WebTemplates | 19dee03ecc98279c10999fe6c32c61e17357d4c9 | [
"MIT"
] | null | null | null | tools/cli_lint.py | grbd/GBD.NetCore.WebTemplates | 19dee03ecc98279c10999fe6c32c61e17357d4c9 | [
"MIT"
] | null | null | null | tools/cli_lint.py | grbd/GBD.NetCore.WebTemplates | 19dee03ecc98279c10999fe6c32c61e17357d4c9 | [
"MIT"
] | null | null | null |
# TODO
# TODO
# "lint": "eslint -c ./.eslintrc.js ClientApp/**/*.js ClientApp/**/*.vue ClientApp/**/*.json webpack*.js",
| 21.333333 | 111 | 0.578125 | 15 | 128 | 4.933333 | 0.666667 | 0.297297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 128 | 5 | 112 | 25.6 | 0.685185 | 0.929688 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.2 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
591080a84682873a66cf64cf4e4e67659ebc96c1 | 45 | py | Python | scenecuts/__init__.py | JosselinSomervilleRoberts/SpeechBubbleSubtitles | a4467b042919f34fdd47648ae31af7df5247b6d1 | [
"MIT"
] | 1 | 2022-01-27T19:46:02.000Z | 2022-01-27T19:46:02.000Z | scenecuts/__init__.py | JosselinSomervilleRoberts/SpeechBubbleSubtitles | a4467b042919f34fdd47648ae31af7df5247b6d1 | [
"MIT"
] | null | null | null | scenecuts/__init__.py | JosselinSomervilleRoberts/SpeechBubbleSubtitles | a4467b042919f34fdd47648ae31af7df5247b6d1 | [
"MIT"
] | null | null | null | from scenecuts.cutDetector import CutDetector | 45 | 45 | 0.911111 | 5 | 45 | 8.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 45 | 1 | 45 | 45 | 0.97619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5944e4b3583fca15837c66e704c5b54a84748cf6 | 35 | py | Python | src/coffee/__init__.py | Coffee2Bits/Coffee | e322633cd2fa76e5a9c28e67422a35c2ce98f559 | [
"MIT"
] | null | null | null | src/coffee/__init__.py | Coffee2Bits/Coffee | e322633cd2fa76e5a9c28e67422a35c2ce98f559 | [
"MIT"
] | null | null | null | src/coffee/__init__.py | Coffee2Bits/Coffee | e322633cd2fa76e5a9c28e67422a35c2ce98f559 | [
"MIT"
] | null | null | null | """Main entry point."""
import sys | 11.666667 | 23 | 0.657143 | 5 | 35 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 3 | 24 | 11.666667 | 0.766667 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3cf00aa786b9a2ffd27a9d8264c2fee4c632aec1 | 11,329 | py | Python | nova/tests/unit/scheduler/filters/test_compute_capabilities_filters.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/tests/unit/scheduler/filters/test_compute_capabilities_filters.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/tests/unit/scheduler/filters/test_compute_capabilities_filters.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | 2 | 2017-07-20T17:31:34.000Z | 2020-07-24T02:42:19.000Z | begin_unit
comment|'# Licensed under the Apache License, Version 2.0 (the "License"); you may'
nl|'\n'
comment|'# not use this file except in compliance with the License. You may obtain'
nl|'\n'
comment|'# a copy of the License at'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# http://www.apache.org/licenses/LICENSE-2.0'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# Unless required by applicable law or agreed to in writing, software'
nl|'\n'
comment|'# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT'
nl|'\n'
comment|'# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the'
nl|'\n'
comment|'# License for the specific language governing permissions and limitations'
nl|'\n'
comment|'# under the License.'
nl|'\n'
nl|'\n'
name|'import'
name|'six'
newline|'\n'
nl|'\n'
name|'from'
name|'nova'
name|'import'
name|'objects'
newline|'\n'
name|'from'
name|'nova'
op|'.'
name|'scheduler'
op|'.'
name|'filters'
name|'import'
name|'compute_capabilities_filter'
newline|'\n'
name|'from'
name|'nova'
name|'import'
name|'test'
newline|'\n'
name|'from'
name|'nova'
op|'.'
name|'tests'
op|'.'
name|'unit'
op|'.'
name|'scheduler'
name|'import'
name|'fakes'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|class|TestComputeCapabilitiesFilter
name|'class'
name|'TestComputeCapabilitiesFilter'
op|'('
name|'test'
op|'.'
name|'NoDBTestCase'
op|')'
op|':'
newline|'\n'
nl|'\n'
DECL|member|setUp
indent|' '
name|'def'
name|'setUp'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'super'
op|'('
name|'TestComputeCapabilitiesFilter'
op|','
name|'self'
op|')'
op|'.'
name|'setUp'
op|'('
op|')'
newline|'\n'
name|'self'
op|'.'
name|'filt_cls'
op|'='
name|'compute_capabilities_filter'
op|'.'
name|'ComputeCapabilitiesFilter'
op|'('
op|')'
newline|'\n'
nl|'\n'
DECL|member|_do_test_compute_filter_extra_specs
dedent|''
name|'def'
name|'_do_test_compute_filter_extra_specs'
op|'('
name|'self'
op|','
name|'ecaps'
op|','
name|'especs'
op|','
name|'passes'
op|')'
op|':'
newline|'\n'
comment|'# In real OpenStack runtime environment,compute capabilities'
nl|'\n'
comment|'# value may be number, so we should use number to do unit test.'
nl|'\n'
indent|' '
name|'capabilities'
op|'='
op|'{'
op|'}'
newline|'\n'
name|'capabilities'
op|'.'
name|'update'
op|'('
name|'ecaps'
op|')'
newline|'\n'
name|'spec_obj'
op|'='
name|'objects'
op|'.'
name|'RequestSpec'
op|'('
nl|'\n'
name|'flavor'
op|'='
name|'objects'
op|'.'
name|'Flavor'
op|'('
name|'memory_mb'
op|'='
number|'1024'
op|','
name|'extra_specs'
op|'='
name|'especs'
op|')'
op|')'
newline|'\n'
name|'host_state'
op|'='
op|'{'
string|"'free_ram_mb'"
op|':'
number|'1024'
op|'}'
newline|'\n'
name|'host_state'
op|'.'
name|'update'
op|'('
name|'capabilities'
op|')'
newline|'\n'
name|'host'
op|'='
name|'fakes'
op|'.'
name|'FakeHostState'
op|'('
string|"'host1'"
op|','
string|"'node1'"
op|','
name|'host_state'
op|')'
newline|'\n'
name|'assertion'
op|'='
name|'self'
op|'.'
name|'assertTrue'
name|'if'
name|'passes'
name|'else'
name|'self'
op|'.'
name|'assertFalse'
newline|'\n'
name|'assertion'
op|'('
name|'self'
op|'.'
name|'filt_cls'
op|'.'
name|'host_passes'
op|'('
name|'host'
op|','
name|'spec_obj'
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_passes_without_extra_specs
dedent|''
name|'def'
name|'test_compute_filter_passes_without_extra_specs'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'spec_obj'
op|'='
name|'objects'
op|'.'
name|'RequestSpec'
op|'('
nl|'\n'
name|'flavor'
op|'='
name|'objects'
op|'.'
name|'Flavor'
op|'('
name|'memory_mb'
op|'='
number|'1024'
op|')'
op|')'
newline|'\n'
name|'host_state'
op|'='
op|'{'
string|"'free_ram_mb'"
op|':'
number|'1024'
op|'}'
newline|'\n'
name|'host'
op|'='
name|'fakes'
op|'.'
name|'FakeHostState'
op|'('
string|"'host1'"
op|','
string|"'node1'"
op|','
name|'host_state'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertTrue'
op|'('
name|'self'
op|'.'
name|'filt_cls'
op|'.'
name|'host_passes'
op|'('
name|'host'
op|','
name|'spec_obj'
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_fails_without_host_state
dedent|''
name|'def'
name|'test_compute_filter_fails_without_host_state'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'especs'
op|'='
op|'{'
string|"'capabilities'"
op|':'
string|"'1'"
op|'}'
newline|'\n'
name|'spec_obj'
op|'='
name|'objects'
op|'.'
name|'RequestSpec'
op|'('
nl|'\n'
name|'flavor'
op|'='
name|'objects'
op|'.'
name|'Flavor'
op|'('
name|'memory_mb'
op|'='
number|'1024'
op|','
name|'extra_specs'
op|'='
name|'especs'
op|')'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertFalse'
op|'('
name|'self'
op|'.'
name|'filt_cls'
op|'.'
name|'host_passes'
op|'('
name|'None'
op|','
name|'spec_obj'
op|')'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_fails_without_capabilites
dedent|''
name|'def'
name|'test_compute_filter_fails_without_capabilites'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'cpu_info'
op|'='
string|'""" { } """'
newline|'\n'
nl|'\n'
name|'cpu_info'
op|'='
name|'six'
op|'.'
name|'text_type'
op|'('
name|'cpu_info'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'cpu_info'"
op|':'
name|'cpu_info'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'capabilities:cpu_info:vendor'"
op|':'
string|"'Intel'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'False'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_pass_cpu_info_as_text_type
dedent|''
name|'def'
name|'test_compute_filter_pass_cpu_info_as_text_type'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'cpu_info'
op|'='
string|'""" { "vendor": "Intel", "model": "core2duo",\n "arch": "i686","features": ["lahf_lm", "rdtscp"], "topology":\n {"cores": 1, "threads":1, "sockets": 1}} """'
newline|'\n'
nl|'\n'
name|'cpu_info'
op|'='
name|'six'
op|'.'
name|'text_type'
op|'('
name|'cpu_info'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'cpu_info'"
op|':'
name|'cpu_info'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'capabilities:cpu_info:vendor'"
op|':'
string|"'Intel'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'True'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_fail_cpu_info_as_text_type_not_valid
dedent|''
name|'def'
name|'test_compute_filter_fail_cpu_info_as_text_type_not_valid'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'cpu_info'
op|'='
string|'"cpu_info"'
newline|'\n'
nl|'\n'
name|'cpu_info'
op|'='
name|'six'
op|'.'
name|'text_type'
op|'('
name|'cpu_info'
op|')'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'cpu_info'"
op|':'
name|'cpu_info'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'capabilities:cpu_info:vendor'"
op|':'
string|"'Intel'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'False'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_passes_extra_specs_simple
dedent|''
name|'def'
name|'test_compute_filter_passes_extra_specs_simple'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'stats'"
op|':'
op|'{'
string|"'opt1'"
op|':'
number|'1'
op|','
string|"'opt2'"
op|':'
number|'2'
op|'}'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'opt1'"
op|':'
string|"'1'"
op|','
string|"'opt2'"
op|':'
string|"'2'"
op|','
string|"'trust:trusted_host'"
op|':'
string|"'true'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'True'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_fails_extra_specs_simple
dedent|''
name|'def'
name|'test_compute_filter_fails_extra_specs_simple'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'stats'"
op|':'
op|'{'
string|"'opt1'"
op|':'
number|'1'
op|','
string|"'opt2'"
op|':'
number|'2'
op|'}'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'opt1'"
op|':'
string|"'1'"
op|','
string|"'opt2'"
op|':'
string|"'222'"
op|','
string|"'trust:trusted_host'"
op|':'
string|"'true'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'False'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_pass_extra_specs_simple_with_scope
dedent|''
name|'def'
name|'test_compute_filter_pass_extra_specs_simple_with_scope'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'stats'"
op|':'
op|'{'
string|"'opt1'"
op|':'
number|'1'
op|','
string|"'opt2'"
op|':'
number|'2'
op|'}'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'capabilities:opt1'"
op|':'
string|"'1'"
op|','
nl|'\n'
string|"'trust:trusted_host'"
op|':'
string|"'true'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'True'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_pass_extra_specs_same_as_scope
dedent|''
name|'def'
name|'test_compute_filter_pass_extra_specs_same_as_scope'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
comment|'# Make sure this still works even if the key is the same as the scope'
nl|'\n'
indent|' '
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'capabilities'"
op|':'
number|'1'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'capabilities'"
op|':'
string|"'1'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'True'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_extra_specs_simple_with_wrong_scope
dedent|''
name|'def'
name|'test_compute_filter_extra_specs_simple_with_wrong_scope'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'opt1'"
op|':'
number|'1'
op|','
string|"'opt2'"
op|':'
number|'2'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'wrong_scope:opt1'"
op|':'
string|"'1'"
op|','
nl|'\n'
string|"'trust:trusted_host'"
op|':'
string|"'true'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'True'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_compute_filter_extra_specs_pass_multi_level_with_scope
dedent|''
name|'def'
name|'test_compute_filter_extra_specs_pass_multi_level_with_scope'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'self'
op|'.'
name|'_do_test_compute_filter_extra_specs'
op|'('
nl|'\n'
name|'ecaps'
op|'='
op|'{'
string|"'stats'"
op|':'
op|'{'
string|"'opt1'"
op|':'
op|'{'
string|"'a'"
op|':'
number|'1'
op|','
string|"'b'"
op|':'
op|'{'
string|"'aa'"
op|':'
number|'2'
op|'}'
op|'}'
op|','
string|"'opt2'"
op|':'
number|'2'
op|'}'
op|'}'
op|','
nl|'\n'
name|'especs'
op|'='
op|'{'
string|"'opt1:a'"
op|':'
string|"'1'"
op|','
string|"'capabilities:opt1:b:aa'"
op|':'
string|"'2'"
op|','
nl|'\n'
string|"'trust:trusted_host'"
op|':'
string|"'true'"
op|'}'
op|','
nl|'\n'
name|'passes'
op|'='
name|'True'
op|')'
newline|'\n'
dedent|''
dedent|''
endmarker|''
end_unit
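The scope rules exercised by the tokenized filter tests above — extra-spec keys under a foreign scope such as `trust:` are ignored, while `capabilities:`-prefixed (or unscoped) keys must match a possibly nested capability value — can be sketched in plain Python. This is a simplified illustration, not the actual nova filter; `matches_extra_specs` and its fallback lookup into a `stats` dict are assumptions made for the sketch:

```python
def matches_extra_specs(caps, especs, scope="capabilities"):
    """Simplified sketch of the scope rules the tests above exercise."""
    for key, expected in especs.items():
        parts = key.split(":")
        if len(parts) > 1 and parts[0] != scope:
            continue                      # foreign scope, e.g. "trust:...": ignored
        if len(parts) > 1 and parts[0] == scope:
            parts = parts[1:]             # strip the "capabilities:" prefix
        # top-level keys win; otherwise look inside the nested "stats" dict
        node = caps if parts[0] in caps else caps.get("stats", caps)
        for part in parts:
            if not (isinstance(node, dict) and part in node):
                return False
            node = node[part]
        if str(node) != expected:
            return False
    return True

# mirrors test_compute_filter_pass_extra_specs_simple_with_scope
print(matches_extra_specs({'stats': {'opt1': 1, 'opt2': 2}},
                          {'capabilities:opt1': '1', 'trust:trusted_host': 'true'}))  # → True
```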
| 13.765492 | 179 | 0.623797 | 1,705 | 11,329 | 3.975953 | 0.105572 | 0.095589 | 0.061956 | 0.039829 | 0.799823 | 0.76914 | 0.749226 | 0.736539 | 0.685499 | 0.631362 | 0 | 0.007533 | 0.097714 | 11,329 | 822 | 180 | 13.782238 | 0.655645 | 0 | 0 | 0.925791 | 0 | 0.001217 | 0.393945 | 0.101068 | 0 | 0 | 0 | 0 | 0.007299 | 0 | null | null | 0.03163 | 0.006083 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a739e5bbd5ff2626e141af6c1b03ed773286d58e | 46 | py | Python | bookstone/backends/ftp/__init__.py | Timtam/bookstone | fe06e35ad18c51e452d400a5e679aa2824931557 | [
"MIT"
] | null | null | null | bookstone/backends/ftp/__init__.py | Timtam/bookstone | fe06e35ad18c51e452d400a5e679aa2824931557 | [
"MIT"
] | 3 | 2020-10-29T23:55:17.000Z | 2021-04-16T20:41:46.000Z | bookstone/backends/ftp/__init__.py | Timtam/bookstone | fe06e35ad18c51e452d400a5e679aa2824931557 | [
"MIT"
] | null | null | null | from .backend import FTPBackend # noqa: F401
| 23 | 45 | 0.76087 | 6 | 46 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 0.173913 | 46 | 1 | 46 | 46 | 0.842105 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
596ea53731f1e111f3a5a93873879d8ac4084733 | 280 | py | Python | test/test_lcs.py | neo0057/Algorithms | 75300184665d71a0fc1248448c509e14b51e05a0 | [
"WTFPL"
] | 4 | 2018-09-18T09:02:03.000Z | 2019-11-27T08:29:43.000Z | test/test_lcs.py | neo0057/Algorithms | 75300184665d71a0fc1248448c509e14b51e05a0 | [
"WTFPL"
] | 4 | 2018-10-12T13:32:43.000Z | 2018-10-24T16:39:02.000Z | test/test_lcs.py | neo0057/Algorithms | 75300184665d71a0fc1248448c509e14b51e05a0 | [
"WTFPL"
] | 9 | 2018-10-12T14:11:26.000Z | 2019-10-04T08:16:02.000Z | import pytest
from dp import lcs
def test_lcs():
assert lcs.longest_common_subsequence("ABCD", "BBDABXYDCCAD") == (4, "ABCD")
assert lcs.longest_common_subsequence("BANANA", "ATANA") == (4, "AANA")
assert lcs.longest_common_subsequence("ABCDEFG", "BDGK") == (3, "BDG")
| 35 | 78 | 0.696429 | 36 | 280 | 5.222222 | 0.583333 | 0.143617 | 0.255319 | 0.351064 | 0.526596 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.132143 | 280 | 7 | 79 | 40 | 0.761317 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
59a3204a18beb15f000e8e2dc4c88299c5af12b0 | 22 | py | Python | scripts/header.py | Solacex/Flat-White | 6d0f94180b6dfae45955595962f1965b26b2b6f5 | [
"MIT"
] | 2 | 2021-02-17T13:37:56.000Z | 2021-02-17T13:40:45.000Z | scripts/header.py | Solacex/Flat-White | 6d0f94180b6dfae45955595962f1965b26b2b6f5 | [
"MIT"
] | null | null | null | scripts/header.py | Solacex/Flat-White | 6d0f94180b6dfae45955595962f1965b26b2b6f5 | [
"MIT"
] | null | null | null | import os
import time
| 7.333333 | 11 | 0.818182 | 4 | 22 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 2 | 12 | 11 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
59be1ea0ca2c9b15f63353e472d61fb03991f786 | 75 | py | Python | component/demo_redis/__init__.py | caserwin/daily-learning-python | 01fea4c5d4e86cbea2dbef8817146f018b5f1479 | [
"Apache-2.0"
] | 1 | 2019-05-04T07:27:18.000Z | 2019-05-04T07:27:18.000Z | component/demo_redis/__init__.py | caserwin/daily-learning-python | 01fea4c5d4e86cbea2dbef8817146f018b5f1479 | [
"Apache-2.0"
] | null | null | null | component/demo_redis/__init__.py | caserwin/daily-learning-python | 01fea4c5d4e86cbea2dbef8817146f018b5f1479 | [
"Apache-2.0"
] | 1 | 2018-09-20T01:49:36.000Z | 2018-09-20T01:49:36.000Z | # -*- coding: utf-8 -*-
# @Time : 2018/8/11 上午10:06
# @Author : yidxue
| 18.75 | 30 | 0.52 | 11 | 75 | 3.545455 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 0.24 | 75 | 3 | 31 | 25 | 0.473684 | 0.906667 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
abb4c3128c267e5de6860c12b0eb1bca20121772 | 34 | py | Python | src/python/addons/dorothy's_development_environment/dorothy.py | natukikazemizo/Sedna1.0 | d3b60d1d58d6b99f8cae40d9fa9787e043971e2b | [
"MIT"
] | 4 | 2018-06-26T18:55:39.000Z | 2021-06-19T12:34:19.000Z | src/python/addons/dorothy's_development_environment/dorothy.py | natukikazemizo/Sedna1.0 | d3b60d1d58d6b99f8cae40d9fa9787e043971e2b | [
"MIT"
] | 33 | 2018-04-02T12:10:06.000Z | 2021-05-02T05:39:54.000Z | src/python/addons/dorothy's_development_environment/loris.py | natukikazemizo/Sedna1.0 | d3b60d1d58d6b99f8cae40d9fa9787e043971e2b | [
"MIT"
] | 1 | 2019-05-16T05:03:26.000Z | 2019-05-16T05:03:26.000Z | import bpy
print('Hello world')
| 8.5 | 20 | 0.705882 | 6 | 34 | 4.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 34 | 3 | 21 | 11.333333 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
f9f56c37ec5f8f77ae66bacef30744d0b06bfea2 | 20 | py | Python | AI/ailib/__init__.py | RGBHack/AlzheimersAI | aaa18a7eb6f1e759fec0b79a62de0ff021884f8d | [
"MIT"
] | 14 | 2020-05-09T20:48:05.000Z | 2022-02-03T04:07:06.000Z | AI/ailib/__init__.py | bmswgnp/AlzheimersAI | 23937c90e3ba4fa39aca6a58d1c8cf320800bd28 | [
"MIT"
] | null | null | null | AI/ailib/__init__.py | bmswgnp/AlzheimersAI | 23937c90e3ba4fa39aca6a58d1c8cf320800bd28 | [
"MIT"
] | 1 | 2021-12-22T10:04:26.000Z | 2021-12-22T10:04:26.000Z | from .ailib import * | 20 | 20 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f9fb35084ea1dfbdf428ea66ab8c277887cbf8e3 | 35 | py | Python | django_postgres_extensions/models/sql/__init__.py | jstacoder/django_postgres_extensions | f2a17466e950d8595db08364d4c1f47259a7191b | [
"BSD-3-Clause"
] | 56 | 2016-08-19T10:47:24.000Z | 2022-01-04T16:19:40.000Z | django_postgres_extensions/models/sql/__init__.py | jstacoder/django_postgres_extensions | f2a17466e950d8595db08364d4c1f47259a7191b | [
"BSD-3-Clause"
] | 8 | 2016-11-18T17:02:55.000Z | 2020-02-05T02:45:05.000Z | django_postgres_extensions/models/sql/__init__.py | jstacoder/django_postgres_extensions | f2a17466e950d8595db08364d4c1f47259a7191b | [
"BSD-3-Clause"
] | 30 | 2017-07-17T19:06:15.000Z | 2022-03-26T12:03:01.000Z | from .subqueries import UpdateQuery | 35 | 35 | 0.885714 | 4 | 35 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e61448c6ca4799f39d4491cd07562d0d03daa7ee | 218 | py | Python | source/ppi_traj/mdtraj_utils/__init__.py | PolyachenkoYA/masif_2021 | 93ff3395696d2f2515f569c2d0218af251168e34 | [
"Apache-2.0"
] | null | null | null | source/ppi_traj/mdtraj_utils/__init__.py | PolyachenkoYA/masif_2021 | 93ff3395696d2f2515f569c2d0218af251168e34 | [
"Apache-2.0"
] | null | null | null | source/ppi_traj/mdtraj_utils/__init__.py | PolyachenkoYA/masif_2021 | 93ff3395696d2f2515f569c2d0218af251168e34 | [
"Apache-2.0"
] | null | null | null | from . import trajectory_utils as utils
from . import trajectory_measures as measures
from . import data_manager_connector as data
from . import electrostatics as phys
from . import statistical_contacts_model as stats
| 36.333333 | 49 | 0.83945 | 31 | 218 | 5.709677 | 0.483871 | 0.282486 | 0.225989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137615 | 218 | 5 | 50 | 43.6 | 0.941489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e62f17c6ed8f1978d29d5ef001beaaf18b742277 | 40 | py | Python | app/views/users/performance/__init__.py | dandye/DjanGoat | 72beb30afe3ddd5b31ce74a5d3b9da61d2c5df1d | [
"MIT"
] | 65 | 2017-08-18T15:12:03.000Z | 2021-08-14T16:50:07.000Z | app/views/users/performance/__init__.py | dandye/DjanGoat | 72beb30afe3ddd5b31ce74a5d3b9da61d2c5df1d | [
"MIT"
] | 83 | 2017-11-28T21:45:20.000Z | 2021-11-02T18:52:52.000Z | app/views/users/performance/__init__.py | dandye/DjanGoat | 72beb30afe3ddd5b31ce74a5d3b9da61d2c5df1d | [
"MIT"
] | 71 | 2017-08-17T14:58:01.000Z | 2022-02-02T17:09:49.000Z | import app.views.users.performance.urls
| 20 | 39 | 0.85 | 6 | 40 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 40 | 1 | 40 | 40 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0517eeffd2f2788470f31231fd6f897edf2068aa | 21 | py | Python | Modules/vms/dcdef/dcdef.py | vmssoftware/cpython | b5d2c7f578d33963798a02ca32f0c151c908aa7c | [
"0BSD"
] | 2 | 2021-10-06T15:46:53.000Z | 2022-01-26T02:58:54.000Z | Modules/vms/dcdef/dcdef.py | vmssoftware/cpython | b5d2c7f578d33963798a02ca32f0c151c908aa7c | [
"0BSD"
] | null | null | null | Modules/vms/dcdef/dcdef.py | vmssoftware/cpython | b5d2c7f578d33963798a02ca32f0c151c908aa7c | [
"0BSD"
] | null | null | null | from _dcdef import *
| 10.5 | 20 | 0.761905 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
053e35a804317d308847cf6413beadc29da11824 | 132 | py | Python | testingScripts/sorting.py | REIGNjs/Lithops-Thumbnails-CodeEngine | 5ec93924acb85123db6076341813972e71233d9a | [
"MIT"
] | 1 | 2021-10-16T11:22:22.000Z | 2021-10-16T11:22:22.000Z | testingScripts/sorting.py | REIGNjs/Lithops-Thumbnails-CodeEngine | 5ec93924acb85123db6076341813972e71233d9a | [
"MIT"
] | null | null | null | testingScripts/sorting.py | REIGNjs/Lithops-Thumbnails-CodeEngine | 5ec93924acb85123db6076341813972e71233d9a | [
"MIT"
] | null | null | null | import io
import time
from lithops import Storage
from lithops.multiprocessing import Pool
if __name__ == '__main__':
print() | 14.666667 | 40 | 0.765152 | 17 | 132 | 5.470588 | 0.705882 | 0.236559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174242 | 132 | 9 | 41 | 14.666667 | 0.853211 | 0 | 0 | 0 | 0 | 0 | 0.06015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.166667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
559ac31c365ab592d75ee60e7ea389a6de4f80f4 | 198 | py | Python | patly_backend/app/admin.py | CodeChefVIT/CHAI-KA-DOOBA-BISCUIT | 12c4ec075290bac776ce063d23077708bd1f31fa | [
"MIT"
] | null | null | null | patly_backend/app/admin.py | CodeChefVIT/CHAI-KA-DOOBA-BISCUIT | 12c4ec075290bac776ce063d23077708bd1f31fa | [
"MIT"
] | null | null | null | patly_backend/app/admin.py | CodeChefVIT/CHAI-KA-DOOBA-BISCUIT | 12c4ec075290bac776ce063d23077708bd1f31fa | [
"MIT"
] | null | null | null | from django.contrib import admin
from app.models import *
# Register your models here.
admin.site.register(DLUser)
admin.site.register(PoolUsers)
admin.site.register(Pool)
admin.site.register(Jobs) | 24.75 | 32 | 0.808081 | 29 | 198 | 5.517241 | 0.517241 | 0.225 | 0.425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085859 | 198 | 8 | 33 | 24.75 | 0.883978 | 0.131313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
55a9702cbe92b0bd681a9070e198479e0d496910 | 44 | py | Python | tests/t74.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | 3 | 2019-08-21T22:01:35.000Z | 2021-07-25T00:21:28.000Z | tests/t74.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null | tests/t74.py | jplevyak/pyc | 9f4bc49be78ba29427841460945ce63826fcd857 | [
"BSD-3-Clause"
] | null | null | null | x = [' '] * 3
print x
y = [ 1 ] * 4
print y
| 8.8 | 13 | 0.386364 | 9 | 44 | 1.888889 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.386364 | 44 | 4 | 14 | 11 | 0.518519 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
55b0c3492ea9b9f3b1887d4ab145650c1578d2dd | 261 | py | Python | pymtl3_net/ringnet/test/RingNetworkCL_test.py | cornell-brg/ocn-posh | 7f8bfd800627364cfc37dc5d6a36333ee2e48c99 | [
"BSD-3-Clause"
] | 3 | 2019-06-07T13:27:06.000Z | 2019-07-16T19:00:23.000Z | pymtl3_net/ringnet/test/RingNetworkCL_test.py | cornell-brg/ocn-posh | 7f8bfd800627364cfc37dc5d6a36333ee2e48c99 | [
"BSD-3-Clause"
] | 12 | 2019-07-23T02:29:31.000Z | 2019-07-25T11:07:00.000Z | pymtl3_net/ringnet/test/RingNetworkCL_test.py | cornell-brg/posh-ocn | 7f8bfd800627364cfc37dc5d6a36333ee2e48c99 | [
"BSD-3-Clause"
] | null | null | null | """
=========================================================================
RingNetworkCL_test.py
=========================================================================
Test for RingNetworkCL
TODO: re-implement
Author : Yanghui Ou
Date : May 19, 2019
"""
| 23.727273 | 73 | 0.310345 | 16 | 261 | 5 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025105 | 0.084291 | 261 | 10 | 74 | 26.1 | 0.309623 | 0.965517 | 0 | null | 0 | null | 0 | 0 | null | 1 | 0 | 0.1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e9d451e7d38b3c85c32f1ad69d4ac7eb849b516b | 72 | py | Python | tools/kbpy/scripts/test.py | yuliu2016/knotbook | 122488c0a71c69551dbdf3903634b9ca0a7e9fc1 | [
"MIT"
] | null | null | null | tools/kbpy/scripts/test.py | yuliu2016/knotbook | 122488c0a71c69551dbdf3903634b9ca0a7e9fc1 | [
"MIT"
] | 16 | 2019-08-22T23:39:15.000Z | 2019-10-28T20:27:02.000Z | tools/kbpy/scripts/test.py | yuliu2016/knotbook | 122488c0a71c69551dbdf3903634b9ca0a7e9fc1 | [
"MIT"
] | null | null | null | from rtlib import script_context
with script_context() as ctx:
pass | 18 | 32 | 0.777778 | 11 | 72 | 4.909091 | 0.818182 | 0.481481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180556 | 72 | 4 | 33 | 18 | 0.915254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
75a6af9b4e09abf7b1e7a1aafa346e88c23417c4 | 102 | py | Python | rasa_chinese_service/core/policies/__init__.py | lhr0909/rasa_chinese_service | 3ea96c3f2b5af94aeb06a47621ca3a4c3c3368a9 | [
"Apache-2.0"
] | null | null | null | rasa_chinese_service/core/policies/__init__.py | lhr0909/rasa_chinese_service | 3ea96c3f2b5af94aeb06a47621ca3a4c3c3368a9 | [
"Apache-2.0"
] | null | null | null | rasa_chinese_service/core/policies/__init__.py | lhr0909/rasa_chinese_service | 3ea96c3f2b5af94aeb06a47621ca3a4c3c3368a9 | [
"Apache-2.0"
] | 1 | 2021-10-04T05:52:43.000Z | 2021-10-04T05:52:43.000Z | from rasa_chinese_service.core.policies.stacked_bilstm_tf_policy import StackedBilstmTensorFlowPolicy
| 51 | 101 | 0.931373 | 12 | 102 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 102 | 1 | 102 | 102 | 0.918367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
75af755d6b602f11b67d79eb983fabc5ed1f9997 | 20 | py | Python | pyFM/FMN/__init__.py | Yang-L1/pyFM | bfc9cf58da81441c13dbfe0645872e82b6038521 | [
"MIT"
] | 35 | 2020-09-10T14:27:37.000Z | 2022-03-30T02:39:18.000Z | pyFM/FMN/__init__.py | Yang-L1/pyFM | bfc9cf58da81441c13dbfe0645872e82b6038521 | [
"MIT"
] | 2 | 2020-12-01T07:30:24.000Z | 2020-12-03T08:19:57.000Z | pyFM/FMN/__init__.py | Yang-L1/pyFM | bfc9cf58da81441c13dbfe0645872e82b6038521 | [
"MIT"
] | 3 | 2021-02-15T10:56:23.000Z | 2021-12-27T07:31:15.000Z | from .FMN import FMN | 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
75e1ccbbc800745f0feb22c9259be8118d049f3a | 187 | py | Python | layers/__init__.py | inflation/wgan | a546251fd4c825a95e075fea0dac7abe301b0b81 | [
"MIT"
] | 2 | 2017-03-12T07:55:35.000Z | 2019-12-04T06:09:10.000Z | layers/__init__.py | inflation/wgan | a546251fd4c825a95e075fea0dac7abe301b0b81 | [
"MIT"
] | null | null | null | layers/__init__.py | inflation/wgan | a546251fd4c825a95e075fea0dac7abe301b0b81 | [
"MIT"
] | null | null | null | from .activation import leaky_relu
from .connection import conv2d, conv2d_trans, linear
from .normalizer import BatchNorm
__all__ = [leaky_relu, conv2d, conv2d_trans, linear, BatchNorm]
| 31.166667 | 63 | 0.818182 | 24 | 187 | 6.041667 | 0.5 | 0.124138 | 0.234483 | 0.317241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024242 | 0.117647 | 187 | 5 | 64 | 37.4 | 0.854545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
75f556fcec598ccf84d85e39f58873752526f9c8 | 140 | py | Python | tests/Keywords.py | cwiki-us-docs/python-tutorials | 3986a5c63606b50d0b765cc8405268473a0824db | [
"BSD-2-Clause"
] | 1 | 2021-03-05T20:32:01.000Z | 2021-03-05T20:32:01.000Z | tests/Keywords.py | cwiki-us-docs/python-tutorials | 3986a5c63606b50d0b765cc8405268473a0824db | [
"BSD-2-Clause"
] | null | null | null | tests/Keywords.py | cwiki-us-docs/python-tutorials | 3986a5c63606b50d0b765cc8405268473a0824db | [
"BSD-2-Clause"
] | null | null | null | # Print Python keyword List.
# Author - https://www.ossez.com
import keyword
import json
print(keyword.kwlist)
print(len(keyword.kwlist))
| 15.555556 | 32 | 0.757143 | 20 | 140 | 5.3 | 0.65 | 0.245283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121429 | 140 | 8 | 33 | 17.5 | 0.861789 | 0.407143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
f92f97a5ffdf579598d27a8e3cc39a9cafb25a80 | 70 | py | Python | inverted_index/__init__.py | chachazhu/inverted-index.py | b9bf6738ed84db0a853b4ccbb780dd59cbd6d06e | [
"MIT"
] | null | null | null | inverted_index/__init__.py | chachazhu/inverted-index.py | b9bf6738ed84db0a853b4ccbb780dd59cbd6d06e | [
"MIT"
] | null | null | null | inverted_index/__init__.py | chachazhu/inverted-index.py | b9bf6738ed84db0a853b4ccbb780dd59cbd6d06e | [
"MIT"
] | 1 | 2021-04-26T22:07:33.000Z | 2021-04-26T22:07:33.000Z | from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv()) | 35 | 43 | 0.857143 | 11 | 70 | 5.090909 | 0.454545 | 0.357143 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 70 | 2 | 44 | 35 | 0.861538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f94a3d130f45dbc645ab523ffbd7e8d76b4cfeb0 | 48 | py | Python | zabanshenas/__init__.py | m3hrdadfi/zabanshenas | af105f8de5b25ae6f5690845ff74d98f0591daf4 | [
"Apache-2.0"
] | 16 | 2021-02-14T14:35:01.000Z | 2022-03-10T22:25:51.000Z | zabanshenas/__init__.py | m3hrdadfi/zabanshenas | af105f8de5b25ae6f5690845ff74d98f0591daf4 | [
"Apache-2.0"
] | 1 | 2021-02-14T17:28:05.000Z | 2022-01-16T08:05:10.000Z | zabanshenas/__init__.py | m3hrdadfi/zabanshenas | af105f8de5b25ae6f5690845ff74d98f0591daf4 | [
"Apache-2.0"
] | 1 | 2021-12-17T07:10:48.000Z | 2021-12-17T07:10:48.000Z | from zabanshenas.zabanshenas import Zabanshenas
| 24 | 47 | 0.895833 | 5 | 48 | 8.6 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.977273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f99a29a51715afa77c3345b5f56f8b6db2c54d68 | 140 | py | Python | characterize/sample_program/second/definitions.py | arpheno/characterize | 661bd38fd758cd660a3b2aa706c60a431796eaa3 | [
"MIT"
] | null | null | null | characterize/sample_program/second/definitions.py | arpheno/characterize | 661bd38fd758cd660a3b2aa706c60a431796eaa3 | [
"MIT"
] | null | null | null | characterize/sample_program/second/definitions.py | arpheno/characterize | 661bd38fd758cd660a3b2aa706c60a431796eaa3 | [
"MIT"
] | null | null | null | from urllib2 import urlopen
def a(something, somethingelse):
    return something + somethingelse

def c(url):
    return urlopen(url).read()
| 17.5 | 34 | 0.75 | 18 | 140 | 5.833333 | 0.666667 | 0.419048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008475 | 0.157143 | 140 | 7 | 35 | 20 | 0.881356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f9b4259720cc053133dad5b7f78e7858dc1f06b3 | 4,539 | py | Python | wins/tests/test_email_command.py | uktrade/export-wins-data | 46caa444812e89abe504bec8c15aa7f7ba1a247e | [
"MIT"
] | 5 | 2016-09-12T12:52:45.000Z | 2020-03-24T14:43:13.000Z | wins/tests/test_email_command.py | uktrade/export-wins-data | 46caa444812e89abe504bec8c15aa7f7ba1a247e | [
"MIT"
] | 435 | 2016-10-18T12:51:39.000Z | 2021-06-09T17:22:08.000Z | wins/tests/test_email_command.py | uktrade/export-wins-data | 46caa444812e89abe504bec8c15aa7f7ba1a247e | [
"MIT"
] | 2 | 2016-12-06T10:37:21.000Z | 2017-02-22T17:27:43.000Z | import datetime
from django.core.management import call_command
from django.test import TestCase
from wins.factories import WinFactory
from wins.models import Win, Notification
class CommandsTestCase(TestCase):
""" Testing email_blast Django management command """
def _call_command(self):
args = []
opts = {}
call_command('email_blast', *args, **opts)
def test_no_wins(self):
self._call_command()
def test_new_win(self):
WinFactory(
id='6e18a056-1a25-46ce-a4bb-0553a912706d',
date=datetime.datetime.now(),
complete=True,
)
self._call_command()
win = Win.objects.get(id='6e18a056-1a25-46ce-a4bb-0553a912706d')
self.assertTrue(win.notifications.count(), 1)
def test_win_older_than_7_days(self):
old_date = datetime.date.today() - datetime.timedelta(days=8)
WinFactory(
id='6e18a056-1a25-46ce-a4bb-0553a912706d',
date=old_date,
complete=True,
)
self._call_command()
win = Win.objects.get(id='6e18a056-1a25-46ce-a4bb-0553a912706d')
self.assertTrue(win.notifications.count(), 0)
def test_win_border_check_7_days(self):
old_date = datetime.date.today() - datetime.timedelta(days=7)
WinFactory(
id='6e18a056-1a25-46ce-a4bb-0553a912706d',
date=old_date,
complete=True,
)
self._call_command()
win = Win.objects.get(id='6e18a056-1a25-46ce-a4bb-0553a912706d')
self.assertTrue(win.notifications.count(), 1)
def test_win_with_1_notifications(self):
old_date = datetime.date.today() - datetime.timedelta(days=6)
win = WinFactory(
id='6e18a056-1a25-46ce-a4bb-0553a912706d',
date=old_date,
complete=True,
)
notification = Notification(
win=win,
user=win.user,
recipient=win.customer_email_address,
type=Notification.TYPE_CUSTOMER,
)
notification.save()
self.assertTrue(win.notifications.count(), 1)
self._call_command()
win = Win.objects.get(id='6e18a056-1a25-46ce-a4bb-0553a912706d')
self.assertTrue(win.notifications.count(), 2)
def test_win_with_2_notifications(self):
old_date = datetime.date.today() - datetime.timedelta(days=6)
win = WinFactory(
id='6e18a056-1a25-46ce-a4bb-0553a912706d',
date=old_date,
complete=True,
)
for _ in range(1):
notification = Notification(
win=win,
user=win.user,
recipient=win.customer_email_address,
type=Notification.TYPE_CUSTOMER,
)
notification.save()
self.assertTrue(win.notifications.count(), 2)
self._call_command()
win = Win.objects.get(id='6e18a056-1a25-46ce-a4bb-0553a912706d')
self.assertTrue(win.notifications.count(), 3)
def test_win_with_3_notifications(self):
old_date = datetime.date.today() - datetime.timedelta(days=6)
win = WinFactory(
id='6e18a056-1a25-46ce-a4bb-0553a912706d',
date=old_date,
complete=True,
)
for _ in range(2):
notification = Notification(
win=win,
user=win.user,
recipient=win.customer_email_address,
type=Notification.TYPE_CUSTOMER,
)
notification.save()
self.assertTrue(win.notifications.count(), 3)
self._call_command()
win = Win.objects.get(id='6e18a056-1a25-46ce-a4bb-0553a912706d')
self.assertTrue(win.notifications.count(), 4)
def test_win_with_4_notifications_no_more_email(self):
old_date = datetime.date.today() - datetime.timedelta(days=6)
win = WinFactory(
id='6e18a056-1a25-46ce-a4bb-0553a912706d',
date=old_date,
complete=True,
)
for _ in range(3):
notification = Notification(
win=win,
user=win.user,
recipient=win.customer_email_address,
type=Notification.TYPE_CUSTOMER,
)
notification.save()
self.assertTrue(win.notifications.count(), 4)
self._call_command()
win = Win.objects.get(id='6e18a056-1a25-46ce-a4bb-0553a912706d')
self.assertTrue(win.notifications.count(), 4)
| 34.915385 | 72 | 0.600132 | 490 | 4,539 | 5.391837 | 0.140816 | 0.05299 | 0.074186 | 0.095382 | 0.828539 | 0.828539 | 0.826268 | 0.826268 | 0.8081 | 0.8081 | 0 | 0.103727 | 0.290593 | 4,539 | 129 | 73 | 35.186047 | 0.71677 | 0.009914 | 0 | 0.689655 | 0 | 0 | 0.114802 | 0.11235 | 0 | 0 | 0 | 0 | 0.094828 | 1 | 0.077586 | false | 0 | 0.043103 | 0 | 0.12931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fb38b2638983ed672194e2d6e575c046951a0c40 | 27,095 | py | Python | embedding.py | vid-koci/KBCtransferlearning | 57faabf21bbfa92068a36708c352cfab0071a22f | [
"MIT"
] | 13 | 2021-09-03T03:18:57.000Z | 2022-02-18T05:22:09.000Z | embedding.py | vid-koci/KBCtransferlearning | 57faabf21bbfa92068a36708c352cfab0071a22f | [
"MIT"
] | 1 | 2022-02-21T08:42:04.000Z | 2022-02-21T09:47:37.000Z | embedding.py | vid-koci/KBCtransferlearning | 57faabf21bbfa92068a36708c352cfab0071a22f | [
"MIT"
] | 1 | 2021-10-17T09:04:20.000Z | 2021-10-17T09:04:20.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from bisect import bisect_left
import encoder
import random
import tqdm
class ConvE(nn.Module):
def __init__(self, data, rel_encoder, ent_encoder, dimension=300, dropout=0.1):
super(ConvE,self).__init__()
self.data = data
self.dimension = dimension
self.dropout = dropout
self.rel_encoder = rel_encoder
self.ent_encoder = ent_encoder
self.inp_drop = nn.Dropout(self.dropout)
self.hidden_drop = nn.Dropout(self.dropout)
self.feature_map_drop = torch.nn.Dropout2d(self.dropout)
        self.conv1 = nn.Conv2d(1, 32, (3, 3))  # exact architecture follows Gupta et al., CaRe model
self.bn0 = nn.BatchNorm2d(1)
self.bn1 = nn.BatchNorm2d(32)
self.bn2 = nn.BatchNorm1d(self.dimension)
self.register_parameter('b',nn.Parameter(torch.zeros(len(self.data.list_of_ent))))
        if self.dimension==300:
            # conv over the (1, 30, 20) stacked input gives 32 x 28 x 18 = 16128 features
            self.fc = nn.Linear(16128,self.dimension)
        else: #assuming dim=500: conv over the (1, 40, 25) stacked input gives 32 x 38 x 23 = 27968
            self.fc = nn.Linear(27968,self.dimension)
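The hard-coded `nn.Linear` input sizes can be double-checked with a quick calculation, assuming the single 3x3 convolution with stride 1 and no padding defined in `self.conv1` (`flat_conv_size` is a helper name introduced here for illustration):

```python
def flat_conv_size(h, w, channels=32, k=3):
    # flattened output of one k x k conv, stride 1, no padding
    return channels * (h - k + 1) * (w - k + 1)

# dim = 300: head and relation reshaped to 15 x 20, stacked to 30 x 20
print(flat_conv_size(30, 20))  # → 16128
# dim = 500: reshaped to 20 x 25, stacked to 40 x 25
print(flat_conv_size(40, 25))  # → 27968
```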
def forward(self, pairs):#given a batch of <head,rel> pairs (as text), return their embeddings
head_enc, head_len = self.ent_encoder.prepare_batch([t[0] for t in pairs])
if self.dimension==300:
head_emb = self.ent_encoder.forward(head_enc,head_len).view(-1,1,15,20)
else: #assuming dim==500
head_emb = self.ent_encoder.forward(head_enc,head_len).view(-1,1,20,25)
rel_enc, rel_len = self.rel_encoder.prepare_batch([t[1] for t in pairs])
if self.dimension==300:
rel_emb = self.rel_encoder.forward(rel_enc,rel_len).view(-1,1,15,20)
else:
rel_emb = self.rel_encoder.forward(rel_enc,rel_len).view(-1,1,20,25)
stacked_inputs = torch.cat([head_emb,rel_emb],2)
stacked_inputs = self.bn0(stacked_inputs)
x = self.inp_drop(stacked_inputs)
x = self.conv1(x)
x = self.bn1(x)
x = F.relu(x)
x = self.feature_map_drop(x)
x = x.view(x.shape[0], -1)
x = self.fc(x)
x = self.hidden_drop(x)
x = self.bn2(x)
x = F.relu(x)
return x
def trip2text(self,trips):#turns a list of triples with IDs to <head, rel> pairs and <tail, rel^-1> pairs as text
return [[self.data.id2ent[t[0]],self.data.id2rel[t[1]]] for t in trips]+[[self.data.id2ent[t[2]],"inverse of "+self.data.id2rel[t[1]]] for t in trips]
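The augmentation performed by `trip2text` — every triple yields both a forward pair and an inverse pair, doubling the batch — can be sketched standalone with toy `id2ent`/`id2rel` dicts (hypothetical data, for illustration only):

```python
def trip2text(trips, id2ent, id2rel):
    forward = [[id2ent[h], id2rel[r]] for h, r, t in trips]
    inverse = [[id2ent[t], "inverse of " + id2rel[r]] for h, r, t in trips]
    return forward + inverse

id2ent = {0: "paris", 1: "france"}
id2rel = {0: "capital of"}
print(trip2text([(0, 0, 1)], id2ent, id2rel))
# → [['paris', 'capital of'], ['france', 'inverse of capital of']]
```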
    def batch_loss(self, batch, only_batch_negative=False):  # given a batch of triples (as IDs), compute loss in both directions; if only_batch_negative is set, only candidates in the batch are used as negative examples
if only_batch_negative:
negative_tail = list(set([x[0] for x in batch]+[x[2] for x in batch]))
else:
negative_tail = self.data.list_of_ent
all_tail_texts = [self.data.id2ent[ent] for ent in negative_tail]
tail_enc, tail_len = self.ent_encoder.prepare_batch(all_tail_texts)
tail_emb = self.ent_encoder.forward(tail_enc,tail_len)
mapped_embs = self.forward(self.trip2text(batch))
scores = torch.mm(mapped_embs, tail_emb.transpose(1,0))
if only_batch_negative:
scores += self.b.index_select(0,torch.LongTensor(negative_tail).to(scores.device)).expand_as(scores)
else:
scores += self.b.expand_as(scores)
expected_tail_ids = torch.LongTensor(
[negative_tail.index(triple[2]) for triple in batch]+
[negative_tail.index(triple[0]) for triple in batch]).to(scores.device)
return F.cross_entropy(scores,expected_tail_ids)
    def get_rank(self, triples, filtered, evalTail=True):
        # given a list of triples and filtered heads/tails (all as IDs), evaluate them
        # while taking advantage of memoization; for eval only
        filtered_clusts = []
        for f_list in filtered:
            f_clusts = set(self.data.ent2cluster[x] for x in f_list)
            if not self.data.canonicalized:
                all_filtered = [x for x in self.data.list_of_ent if self.data.ent2cluster[x] in f_clusts]
            else:
                all_filtered = [x for x in f_clusts]
            filtered_clusts.append(sorted(all_filtered))
        correct_positions = []
        for trip in triples:
            cluster = self.data.ent2cluster[trip[2 if evalTail else 0]]
            if not self.data.canonicalized:
                pos_in_cluster = [self.data.list_of_ent.index(x) for x in self.data.list_of_ent
                                  if self.data.ent2cluster[x] == cluster]
                correct_positions.append(pos_in_cluster)
            else:
                correct_positions.append([cluster])
        if evalTail:
            input_text = [[self.data.id2ent[t[0]], self.data.id2rel[t[1]]] for t in triples]
        else:
            input_text = [[self.data.id2ent[t[2]], "inverse of " + self.data.id2rel[t[1]]] for t in triples]
        mapped_embs = self.forward(input_text)
        # get the scores of the correct answers for each input
        correct_scores = []
        for i in range(len(mapped_embs)):
            all_correct_texts = [self.data.id2ent[ent] for ent in correct_positions[i]]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(all_correct_texts)
            correct_emb_mat = self.ent_encoder.forward(tail_enc, tail_len)
            scores = torch.mm(mapped_embs[i].view(1, -1), correct_emb_mat.transpose(1, 0))
            scores += self.b.index_select(0, torch.LongTensor(correct_positions[i]).to(scores.device))
            scores = scores.cpu()
            correct_scores.append(torch.max(scores))
        ranks = [1] * len(mapped_embs)
        eval_batch_size = 1000
        for i in range(0, len(self.data.id2ent), eval_batch_size):  # evaluate all candidates in batches of 1000
            ent_texts = [self.data.id2ent[ent] for ent in range(i, min(i + eval_batch_size, len(self.data.id2ent)))]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(ent_texts)
            tail_emb_mat = self.ent_encoder.forward(tail_enc, tail_len)
            scores = torch.mm(mapped_embs, tail_emb_mat.transpose(1, 0))
            scores += self.b.narrow(0, i, len(ent_texts)).expand_as(scores)
            scores = scores.detach().cpu()
            for j in range(len(ranks)):
                l, r = bisect_left(filtered_clusts[j], i), bisect_left(filtered_clusts[j], i + eval_batch_size)
                for target in filtered_clusts[j][l:r]:
                    scores[j][target - i] = -1e9
                ranks[j] += torch.sum(scores[j] > correct_scores[j]).numpy()
        return ranks
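The loop above computes a filtered rank: known-true answers are masked down to -1e9, and the rank is 1 plus the number of remaining candidates that strictly outscore the best gold answer. A minimal pure-Python sketch of that step (illustrative names and scores, not actual model output):

```python
def filtered_rank(scores, correct_score, filtered_ids):
    """Rank = 1 + number of non-filtered candidates scoring strictly higher.

    scores        -- candidate scores, indexed by entity ID
    correct_score -- best score among the gold cluster's entities
    filtered_ids  -- IDs of other known-true answers to mask out
    """
    masked = list(scores)
    for ent in filtered_ids:  # mimic the -1e9 masking in get_rank
        masked[ent] = -1e9
    return 1 + sum(1 for s in masked if s > correct_score)

rank_filtered = filtered_rank([0.9, 0.2, 0.8, 0.5], 0.5, [0])  # entity 0 is a known answer
rank_raw = filtered_rank([0.9, 0.2, 0.8, 0.5], 0.5, [])
```

With entity 0 masked, only the score 0.8 beats the gold score, so the filtered rank is better than the raw one.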
class TuckER(nn.Module):
    def __init__(self, data, rel_encoder, ent_encoder, dimension=300, dropout=0.1):
        super(TuckER, self).__init__()
        self.data = data
        self.dimension = dimension
        self.dropout = dropout
        self.rel_encoder = rel_encoder
        self.ent_encoder = ent_encoder
        self.W = nn.Parameter(torch.zeros([dimension, dimension, dimension],
                                          dtype=torch.float, requires_grad=True))
        nn.init.xavier_uniform_(self.W)
        self.input_drop = nn.Dropout(self.dropout)
        self.hidden_drop1 = nn.Dropout(self.dropout)
        self.hidden_drop2 = nn.Dropout(self.dropout)
        self.bn0 = nn.BatchNorm1d(dimension)
        self.bn1 = nn.BatchNorm1d(dimension)

    def forward(self, pairs):  # given a batch of <head, rel> pairs (as text), return their embeddings
        head_enc, head_len = self.ent_encoder.prepare_batch([t[0] for t in pairs])
        head_emb = self.ent_encoder.forward(head_enc, head_len).view(-1, self.dimension)
        rel_enc, rel_len = self.rel_encoder.prepare_batch([t[1] for t in pairs])
        rel_emb = self.rel_encoder.forward(rel_enc, rel_len).view(-1, self.dimension)
        x = self.bn0(head_emb)
        x = self.input_drop(x)
        x = x.view(-1, 1, head_emb.size(1))
        W_mat = torch.mm(rel_emb, self.W.view(rel_emb.size(1), -1))
        W_mat = W_mat.view(-1, head_emb.size(1), head_emb.size(1))
        W_mat = self.hidden_drop1(W_mat)
        x = torch.bmm(x, W_mat)
        x = x.view(-1, head_emb.size(1))
        x = self.bn1(x)
        x = self.hidden_drop2(x)
        return x
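The forward pass above is the TuckER bilinear product: the core tensor W is contracted with the relation embedding along its first mode, and the head embedding is then multiplied by the resulting d x d matrix. A toy pure-Python sketch of that contraction (hypothetical values, tiny dimensions):

```python
def tucker_score_vector(head, rel, W):
    """Compute head^T (W x_1 rel): fold rel into W along mode 1,
    then apply the resulting d x d matrix to head."""
    d = len(head)
    # W_mat[j][k] = sum_i rel[i] * W[i][j][k]   (mode-1 contraction)
    W_mat = [[sum(rel[i] * W[i][j][k] for i in range(d)) for k in range(d)]
             for j in range(d)]
    # out[k] = sum_j head[j] * W_mat[j][k]
    return [sum(head[j] * W_mat[j][k] for j in range(d)) for k in range(d)]

# a diagonal core tensor: W[i][j][k] = 1 if i == j == k else 0
W_id = [[[1.0 if i == j == k else 0.0 for k in range(2)] for j in range(2)]
        for i in range(2)]
out = tucker_score_vector([2.0, 3.0], [1.0, 1.0], W_id)
```

With this diagonal core the map reduces to an element-wise product of head and relation, which is why TuckER subsumes simpler bilinear models.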
    def trip2text(self, trips):  # turns a list of triples with IDs into <head, rel> and <tail, rel^-1> pairs as text
        return [[self.data.id2ent[t[0]], self.data.id2rel[t[1]]] for t in trips] + \
               [[self.data.id2ent[t[2]], "inverse of " + self.data.id2rel[t[1]]] for t in trips]

    def batch_loss(self, batch, only_batch_negative=False):
        # given a batch (triples as IDs), compute the loss in both directions;
        # if only_batch_negative is set, only candidates in the batch are used as negative examples
        if only_batch_negative:
            negative_tail = list(set([x[0] for x in batch] + [x[2] for x in batch]))
        else:
            negative_tail = self.data.list_of_ent
        all_tail_texts = [self.data.id2ent[ent] for ent in negative_tail]
        tail_enc, tail_len = self.ent_encoder.prepare_batch(all_tail_texts)
        tail_emb = self.ent_encoder.forward(tail_enc, tail_len)
        mapped_embs = self.forward(self.trip2text(batch))
        scores = torch.mm(mapped_embs, tail_emb.transpose(1, 0))
        expected_tail_ids = torch.LongTensor(
            [negative_tail.index(triple[2]) for triple in batch] +
            [negative_tail.index(triple[0]) for triple in batch]).to(scores.device)
        return F.cross_entropy(scores, expected_tail_ids)

    def get_rank(self, triples, filtered, evalTail=True):
        # given a list of triples and filtered heads/tails (all as IDs), evaluate them
        # while taking advantage of memoization; for eval only
        filtered_clusts = []
        for f_list in filtered:
            f_clusts = set(self.data.ent2cluster[x] for x in f_list)
            if not self.data.canonicalized:
                all_filtered = [x for x in self.data.list_of_ent if self.data.ent2cluster[x] in f_clusts]
            else:
                all_filtered = [x for x in f_clusts]
            filtered_clusts.append(sorted(all_filtered))
        correct_positions = []
        for trip in triples:
            cluster = self.data.ent2cluster[trip[2 if evalTail else 0]]
            if not self.data.canonicalized:
                pos_in_cluster = [self.data.list_of_ent.index(x) for x in self.data.list_of_ent
                                  if self.data.ent2cluster[x] == cluster]
                correct_positions.append(pos_in_cluster)
            else:
                correct_positions.append([cluster])
        if evalTail:
            input_text = [[self.data.id2ent[t[0]], self.data.id2rel[t[1]]] for t in triples]
        else:
            input_text = [[self.data.id2ent[t[2]], "inverse of " + self.data.id2rel[t[1]]] for t in triples]
        mapped_embs = self.forward(input_text)
        # get the scores of the correct answers for each input
        correct_scores = []
        for i in range(len(mapped_embs)):
            all_correct_texts = [self.data.id2ent[ent] for ent in correct_positions[i]]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(all_correct_texts)
            correct_emb_mat = self.ent_encoder.forward(tail_enc, tail_len)
            scores = torch.mm(mapped_embs[i].view(1, -1), correct_emb_mat.transpose(1, 0))
            scores = scores.cpu()
            correct_scores.append(torch.max(scores))
        ranks = [1] * len(mapped_embs)
        eval_batch_size = 1000
        for i in range(0, len(self.data.id2ent), eval_batch_size):  # evaluate all candidates in batches of 1000
            ent_texts = [self.data.id2ent[ent] for ent in range(i, min(i + eval_batch_size, len(self.data.id2ent)))]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(ent_texts)
            tail_emb_mat = self.ent_encoder.forward(tail_enc, tail_len)
            scores = torch.mm(mapped_embs, tail_emb_mat.transpose(1, 0)).detach().cpu()
            for j in range(len(ranks)):
                l, r = bisect_left(filtered_clusts[j], i), bisect_left(filtered_clusts[j], i + eval_batch_size)
                for target in filtered_clusts[j][l:r]:
                    scores[j][target - i] = -1e9
                ranks[j] += torch.sum(scores[j] > correct_scores[j]).numpy()
        return ranks
class FiveStarE(nn.Module):
    def __init__(self, data, rel_encoder, ent_encoder, dimension=200, regularization=0.1):
        super(FiveStarE, self).__init__()
        self.data = data
        self.dimension = dimension
        self.rel_encoder = rel_encoder
        self.ent_encoder = ent_encoder
        self.regularization = regularization

    def forward(self, pairs):  # given a batch of <head, rel> pairs (as text), return their embeddings
        head_enc, head_len = self.ent_encoder.prepare_batch([t[0] for t in pairs])
        head_emb = self.ent_encoder.forward(head_enc, head_len).view(-1, self.dimension * 2)
        rel_enc, rel_len = self.rel_encoder.prepare_batch([t[1] for t in pairs])
        rel_emb = self.rel_encoder.forward(rel_enc, rel_len).view(-1, self.dimension * 8)
        head_re, head_im = head_emb[:, :self.dimension], head_emb[:, self.dimension:]
        # the relation embedding packs the four complex coefficients a, b, c, d
        d = self.dimension
        rel_re_a, rel_im_a = rel_emb[:, :d], rel_emb[:, d:2 * d]
        rel_re_b, rel_im_b = rel_emb[:, 2 * d:3 * d], rel_emb[:, 3 * d:4 * d]
        rel_re_c, rel_im_c = rel_emb[:, 4 * d:5 * d], rel_emb[:, 5 * d:6 * d]
        rel_re_d, rel_im_d = rel_emb[:, 6 * d:7 * d], rel_emb[:, 7 * d:]
        # ah
        score_re_a = head_re * rel_re_a - head_im * rel_im_a
        score_im_a = head_re * rel_im_a + head_im * rel_re_a
        # ah + b
        score_re_top = score_re_a + rel_re_b
        score_im_top = score_im_a + rel_im_b
        # ch
        score_re_c = head_re * rel_re_c - head_im * rel_im_c
        score_im_c = head_re * rel_im_c + head_im * rel_re_c
        # ch + d
        score_re_dn = score_re_c + rel_re_d
        score_im_dn = score_im_c + rel_im_d
        # (ah + b) Conj(ch + d) / |ch + d|
        dn_re = torch.sqrt(score_re_dn * score_re_dn + score_im_dn * score_im_dn)
        up_re = torch.div(score_re_top * score_re_dn + score_im_top * score_im_dn, dn_re)
        up_im = torch.div(score_re_top * score_im_dn - score_im_top * score_re_dn, dn_re)
        # for regularization, head embeddings are weighted by 2 because the same embeddings also appear as tails
        reg_weight = self.regularization * torch.sum(
            2 * (head_re ** 2 + head_im ** 2) ** 1.5 +
            (rel_re_a ** 2 + rel_im_a ** 2 + rel_re_b ** 2 + rel_im_b ** 2 +
             rel_re_c ** 2 + rel_im_c ** 2 + rel_re_d ** 2 + rel_im_d ** 2) ** 1.5) / len(pairs)
        return (up_re, up_im, reg_weight)
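Per coordinate, the transform above is a rational complex map built from the four relation coefficients a, b, c, d applied to the complex head coordinate h; with the sign convention used in the code, the result works out to conj(ah + b) * (ch + d) / |ch + d|. A scalar sketch with Python's built-in complex type (toy values, purely illustrative):

```python
def five_star_transform(h, a, b, c, d):
    """Scalar version of the per-coordinate map in forward:
    real = Re(top)*Re(dn) + Im(top)*Im(dn), imag = Re(top)*Im(dn) - Im(top)*Re(dn),
    both divided by |dn| -- i.e. conj(ah + b) * (ch + d) / |ch + d|."""
    top = a * h + b
    dn = c * h + d
    return top.conjugate() * dn / abs(dn)

# with a = 1, b = 0, c = 0, d = 1 the map reduces to conj(h)
z = five_star_transform(3 + 4j, 1 + 0j, 0j, 0j, 1 + 0j)
```

Dividing by the modulus (rather than the squared modulus) means the output keeps the magnitude of ah + b, which matches the tensorized computation above.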
    def trip2text(self, trips):  # turns a list of triples with IDs into <head, rel> and <tail, rel^-1> pairs as text
        return [[self.data.id2ent[t[0]], self.data.id2rel[t[1]]] for t in trips] + \
               [[self.data.id2ent[t[2]], "inverse of " + self.data.id2rel[t[1]]] for t in trips]

    def batch_loss(self, batch, only_batch_negative=False):
        # given a batch (triples as IDs), compute the loss in both directions;
        # if only_batch_negative is set, only candidates in the batch are used as negative examples
        if only_batch_negative:
            negative_tail = list(set([x[0] for x in batch] + [x[2] for x in batch]))
        else:
            negative_tail = self.data.list_of_ent
        all_tail_texts = [self.data.id2ent[ent] for ent in negative_tail]
        tail_enc, tail_len = self.ent_encoder.prepare_batch(all_tail_texts)
        tail_emb = self.ent_encoder.forward(tail_enc, tail_len)
        tail_re, tail_im = tail_emb[:, :self.dimension], tail_emb[:, self.dimension:]
        up_re, up_im, reg_weight = self.forward(self.trip2text(batch))
        scores = up_re @ tail_re.transpose(0, 1) + up_im @ tail_im.transpose(0, 1)
        expected_tail_ids = torch.LongTensor(
            [negative_tail.index(triple[2]) for triple in batch] +
            [negative_tail.index(triple[0]) for triple in batch]).to(scores.device)
        return F.cross_entropy(scores, expected_tail_ids) + reg_weight

    def get_rank(self, triples, filtered, evalTail=True):
        # given a list of triples and filtered heads/tails (all as IDs), evaluate them
        # while taking advantage of memoization; for eval only
        filtered_clusts = []
        for f_list in filtered:
            f_clusts = set(self.data.ent2cluster[x] for x in f_list)
            if not self.data.canonicalized:
                all_filtered = [x for x in self.data.list_of_ent if self.data.ent2cluster[x] in f_clusts]
            else:
                all_filtered = [x for x in f_clusts]
            filtered_clusts.append(sorted(all_filtered))
        correct_positions = []
        for trip in triples:
            cluster = self.data.ent2cluster[trip[2 if evalTail else 0]]
            if not self.data.canonicalized:
                pos_in_cluster = [self.data.list_of_ent.index(x) for x in self.data.list_of_ent
                                  if self.data.ent2cluster[x] == cluster]
                correct_positions.append(pos_in_cluster)
            else:
                correct_positions.append([cluster])
        if evalTail:
            input_text = [[self.data.id2ent[t[0]], self.data.id2rel[t[1]]] for t in triples]
        else:
            input_text = [[self.data.id2ent[t[2]], "inverse of " + self.data.id2rel[t[1]]] for t in triples]
        up_re, up_im, _ = self.forward(input_text)
        # get the scores of the correct answers for each input
        correct_scores = []
        for i in range(len(up_re)):
            all_correct_texts = [self.data.id2ent[ent] for ent in correct_positions[i]]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(all_correct_texts)
            correct_emb_mat = self.ent_encoder.forward(tail_enc, tail_len)
            correct_re, correct_im = correct_emb_mat[:, :self.dimension], correct_emb_mat[:, self.dimension:]
            scores = up_re[i].view(1, -1) @ correct_re.transpose(0, 1) + up_im[i].view(1, -1) @ correct_im.transpose(0, 1)
            scores = scores.detach().cpu()
            correct_scores.append(torch.max(scores))
        ranks = [1] * len(up_re)
        eval_batch_size = 1000
        for i in range(0, len(self.data.id2ent), eval_batch_size):  # evaluate all candidates in batches of 1000
            ent_texts = [self.data.id2ent[ent] for ent in range(i, min(i + eval_batch_size, len(self.data.id2ent)))]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(ent_texts)
            tail_emb_mat = self.ent_encoder.forward(tail_enc, tail_len)
            tail_re, tail_im = tail_emb_mat[:, :self.dimension], tail_emb_mat[:, self.dimension:]
            scores = (up_re @ tail_re.transpose(0, 1) + up_im @ tail_im.transpose(0, 1)).detach().cpu()
            for j in range(len(ranks)):
                l, r = bisect_left(filtered_clusts[j], i), bisect_left(filtered_clusts[j], i + eval_batch_size)
                for target in filtered_clusts[j][l:r]:
                    scores[j][target - i] = -1e9
                ranks[j] += torch.sum(scores[j] > correct_scores[j]).numpy()
        return ranks
class BoxE(nn.Module):
    def __init__(self, data, rel_encoder, ent_encoder, dimension=300, neg_examples=100, margin=9.0):
        super(BoxE, self).__init__()
        self.data = data
        self.dimension = dimension
        # the entity encoder is assumed to give outputs of size 2*dimension: e, b concatenated
        # the relation encoder is assumed to give outputs of size 4*dimension: c1, s1, c2, s2 concatenated
        self.rel_encoder = rel_encoder
        self.ent_encoder = ent_encoder
        self.neg_examples = neg_examples
        self.margin = margin

    def forward(self, head, rel, tail, evalTail=True):  # given a batch of embedded <head, rel, tail>, return their scores
        if evalTail:
            e = torch.narrow(tail, 1, 0, self.dimension) + torch.narrow(head, 1, self.dimension, self.dimension)
            c = torch.narrow(rel, 1, 2 * self.dimension, self.dimension)
            s = torch.narrow(rel, 1, 3 * self.dimension, self.dimension)
        else:
            e = torch.narrow(head, 1, 0, self.dimension) + torch.narrow(tail, 1, self.dimension, self.dimension)
            c = torch.narrow(rel, 1, 0, self.dimension)
            s = torch.narrow(rel, 1, self.dimension, self.dimension)
        l_placeholder = c - s / 2
        u_placeholder = c + s / 2
        l = torch.min(l_placeholder, u_placeholder)
        u = torch.max(l_placeholder, u_placeholder)
        w = u - l + 1
        kappa = 0.5 * (w - 1) * (w - 1 / w)
        in_box = torch.logical_and(l <= e, e <= u)
        return torch.norm(in_box * torch.abs(e - c) / w +
                          torch.logical_not(in_box) * (torch.abs(e - c) * w - kappa), dim=1)
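The piecewise expression above is BoxE's point-to-box distance: inside the box the deviation |e - c| is down-weighted by the width factor w, outside it is up-weighted by w minus the constant kappa that keeps the two pieces continuous at the box boundary. A one-dimensional sketch with toy numbers (illustrative only):

```python
def boxe_distance(e, c, s):
    """Piecewise point-to-box distance for one dimension, mirroring BoxE.forward."""
    lo, hi = min(c - s / 2, c + s / 2), max(c - s / 2, c + s / 2)
    w = hi - lo + 1                      # width factor (>= 1)
    kappa = 0.5 * (w - 1) * (w - 1 / w)  # offset making the pieces meet at the boundary
    if lo <= e <= hi:
        return abs(e - c) / w            # shallow gradient inside the box
    return abs(e - c) * w - kappa        # steep gradient outside the box

d_in = boxe_distance(0.5, 0.0, 2.0)   # point inside a box centred at 0 with size 2
d_out = boxe_distance(3.0, 0.0, 2.0)  # point outside the same box
```

At the boundary e = 1 both branches give 1/3, which is exactly what kappa is constructed to guarantee.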
    def trip2text(self, trips):  # turns a list of triples with IDs into <head, rel, tail> triples as text
        return [[self.data.id2ent[t[0]], self.data.id2rel[t[1]], self.data.id2ent[t[2]]] for t in trips]

    def batch_loss(self, batch, only_batch_negative=True):
        # given a batch (triples as IDs), compute the negative-sampling loss in both directions
        all_triples = self.trip2text(batch)
        head_enc, head_len = self.ent_encoder.prepare_batch([t[0] for t in all_triples])
        head_emb = self.ent_encoder.forward(head_enc, head_len).view(-1, self.dimension * 2)
        rel_enc, rel_len = self.rel_encoder.prepare_batch([t[1] for t in all_triples])
        rel_emb = self.rel_encoder.forward(rel_enc, rel_len).view(-1, self.dimension * 4)
        tail_enc, tail_len = self.ent_encoder.prepare_batch([t[2] for t in all_triples])
        tail_emb = self.ent_encoder.forward(tail_enc, tail_len).view(-1, self.dimension * 2)
        batch_size = len(batch)
        if only_batch_negative:
            neg_emb = torch.cat([head_emb, tail_emb])
        else:
            all_neg_texts = [self.data.id2ent[ent] for ent in self.data.list_of_ent]
            neg_enc, neg_len = self.ent_encoder.prepare_batch(all_neg_texts)
            neg_emb = self.ent_encoder.forward(neg_enc, neg_len)
        # eval tails
        positive_scores = self.forward(head_emb, rel_emb, tail_emb, evalTail=True)
        negative_heads = head_emb.repeat(self.neg_examples, 1)
        negative_rels = rel_emb.repeat(self.neg_examples, 1)
        neg_idx = torch.LongTensor(np.random.randint(len(neg_emb), size=self.neg_examples * batch_size)).to(neg_emb.device)
        negative_tails = neg_emb.index_select(0, neg_idx)
        negative_scores = self.forward(negative_heads, negative_rels, negative_tails, evalTail=True)
        loss = -torch.mean(F.logsigmoid(self.margin - positive_scores)) - torch.mean(F.logsigmoid(negative_scores - self.margin))
        # eval heads
        positive_scores = self.forward(head_emb, rel_emb, tail_emb, evalTail=False)
        neg_idx = torch.LongTensor(np.random.randint(len(neg_emb), size=self.neg_examples * batch_size)).to(neg_emb.device)
        negative_heads = neg_emb.index_select(0, neg_idx)
        negative_tails = tail_emb.repeat(self.neg_examples, 1)
        negative_scores = self.forward(negative_heads, negative_rels, negative_tails, evalTail=False)
        loss += -torch.mean(F.logsigmoid(self.margin - positive_scores)) - torch.mean(F.logsigmoid(negative_scores - self.margin))
        return loss / 2  # divide by 2 to average the head and tail directions

    def get_rank(self, triples, filtered, evalTail=True):
        # given a list of triples and filtered heads/tails (all as IDs), evaluate them; for eval only
        filtered_clusts = []
        for f_list in filtered:
            f_clusts = set(self.data.ent2cluster[x] for x in f_list)
            if not self.data.canonicalized:
                all_filtered = [x for x in self.data.list_of_ent if self.data.ent2cluster[x] in f_clusts]
            else:
                all_filtered = [x for x in f_clusts]
            filtered_clusts.append(sorted(all_filtered))
        correct_positions = []
        for trip in triples:
            cluster = self.data.ent2cluster[trip[2 if evalTail else 0]]
            if not self.data.canonicalized:
                pos_in_cluster = [self.data.list_of_ent.index(x) for x in self.data.list_of_ent
                                  if self.data.ent2cluster[x] == cluster]
                correct_positions.append(pos_in_cluster)
            else:
                correct_positions.append([cluster])
        if evalTail:
            input_text = [[self.data.id2ent[t[0]], self.data.id2rel[t[1]]] for t in triples]
        else:
            input_text = [[self.data.id2ent[t[2]], self.data.id2rel[t[1]]] for t in triples]
        head_enc, head_len = self.ent_encoder.prepare_batch([t[0] for t in input_text])
        head_emb = self.ent_encoder.forward(head_enc, head_len).view(-1, self.dimension * 2)
        rel_enc, rel_len = self.rel_encoder.prepare_batch([t[1] for t in input_text])
        rel_emb = self.rel_encoder.forward(rel_enc, rel_len).view(-1, self.dimension * 4)
        # get the scores of the correct answers for each input
        correct_scores = []
        for i in range(len(input_text)):
            all_correct_texts = [self.data.id2ent[ent] for ent in correct_positions[i]]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(all_correct_texts)
            tail_emb = self.ent_encoder.forward(tail_enc, tail_len)
            if evalTail:
                scores = self.forward(
                    torch.narrow(head_emb, 0, i, 1).expand(len(all_correct_texts), self.dimension * 2),
                    torch.narrow(rel_emb, 0, i, 1).expand(len(all_correct_texts), self.dimension * 4),
                    tail_emb, evalTail=True)
            else:
                scores = self.forward(
                    tail_emb,
                    torch.narrow(rel_emb, 0, i, 1).expand(len(all_correct_texts), self.dimension * 4),
                    torch.narrow(head_emb, 0, i, 1).expand(len(all_correct_texts), self.dimension * 2),
                    evalTail=False)
            scores = scores.cpu()
            correct_scores.append(torch.min(scores))
        ranks = [1] * len(input_text)
        eval_batch_size = 10
        for i in tqdm.trange(0, len(self.data.id2ent), eval_batch_size, desc="computing rank"):
            # evaluate all candidates in batches of 10
            ent_texts = [self.data.id2ent[ent] for ent in range(i, min(i + eval_batch_size, len(self.data.id2ent)))]
            tail_enc, tail_len = self.ent_encoder.prepare_batch(ent_texts)
            tail_emb = self.ent_encoder.forward(tail_enc, tail_len)
            batch_size = len(ent_texts)
            all_heads = torch.cat([torch.narrow(head_emb, 0, j, 1).expand(batch_size, self.dimension * 2) for j in range(len(ranks))])
            all_rels = torch.cat([torch.narrow(rel_emb, 0, j, 1).expand(batch_size, self.dimension * 4) for j in range(len(ranks))])
            all_tails = tail_emb.repeat(len(ranks), 1)
            if evalTail:
                scores = self.forward(all_heads, all_rels, all_tails, evalTail=True)
            else:
                scores = self.forward(all_tails, all_rels, all_heads, evalTail=False)
            scores = torch.reshape(scores, (len(ranks), batch_size)).detach().cpu()
            for j in range(len(ranks)):
                l, r = bisect_left(filtered_clusts[j], i), bisect_left(filtered_clusts[j], i + eval_batch_size)
                for target in filtered_clusts[j][l:r]:
                    scores[j][target - i] = 1e9
                ranks[j] += torch.sum(scores[j] < correct_scores[j]).numpy()
        return ranks
# --- app/api/__init__.py (Mellcap/MailSender, MIT) ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from flask import Blueprint

api = Blueprint('api', __name__)

from app.api import email_events, email_groups, errors

# --- ary/__init__.py (JeffTheK/ary, MIT) ---
from .util import add_file_from_template

# --- tensor2struct/languages/ast/__init__.py (chenyangh/tensor2struct-public, MIT) ---
from . import spider

# --- test.py (sinkaroid/scathach-api.py, MIT) ---
import scathach

print(scathach.mashu())

# --- systemconfig/admin.py (XiaoBiaoBai/xiaobiaobai_api, MIT) ---
from django.contrib import admin

# Register your models here.
from .models import SystemConfigMode

class SystemConfigAdmin(admin.ModelAdmin):
    pass
# --- server/mainapp/__init__.py (SanahSidhu/CreepRescue, MIT) ---
from .back import app

# --- src/modules/table3/queryDB.py (sarahyyx/paper1, MIT) ---
from logs import logDecorator as lD
import jsonref, pprint
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import csv
from psycopg2.sql import SQL, Identifier, Literal
from lib.databaseIO import pgIO
from collections import Counter
from textwrap import wrap
from tqdm import tqdm
from multiprocessing import Pool

config = jsonref.load(open('../config/config.json'))
table3_config = jsonref.load(open('../config/modules/table3.json'))
logBase = config['logging']['logBase'] + '.modules.table3.table3'

@lD.log(logBase + '.addmorethan2sudcolumn')
def addmorethan2sudcolumn(logger):
    '''Populates the 'morethan2sud' column in sarah.test4

    This function counts the number of 'True' values for each mental disorder
    for each user in sarah.test4. If a user has more than one 'True' value,
    their 'morethan2sud' column is set to 'True'.

    Decorators:
        lD.log

    Arguments:
        logger {logging.Logger} -- logs error information
    '''
    try:
        query = '''
        SELECT
            t1.patientid,
            t2.alc,
            t2.cannabis,
            t2.amphe,
            t2.halluc,
            t2.nicotin,
            t2.cocaine,
            t2.opioids,
            t2.sedate,
            t2.others,
            t2.polysub,
            t2.inhalant
        FROM
            sarah.test2 t1
        INNER JOIN
            sarah.test4 t2
        ON
            t1.patientid = t2.patientid
        '''
        data = pgIO.getAllData(query)
        csvfile = '../data/raw_data/morethan2suduser_keys.csv'
        with open(csvfile, 'w+') as output:
            csv_output = csv.writer(output)
            for row in data:
                if sum(list(row[1:12])) >= 2:
                    csv_output.writerow(row)
        with open(csvfile) as f:
            readCSV = csv.reader(f, delimiter=",")
            for user in tqdm(readCSV):
                updateQuery = '''
                UPDATE
                    sarah.test4
                SET
                    morethan2sud = True
                WHERE
                    patientid = {}
                '''.format(user[0])
                print(pgIO.commitData(updateQuery))
        # update the column's remaining null values to False
        updateQuery2 = '''
        UPDATE
            sarah.test4
        SET
            morethan2sud = False
        WHERE
            morethan2sud is null
        '''
        print(pgIO.commitData(updateQuery2))
    except Exception as e:
        logger.error('adding the morethan2sud column to the database failed because of {}'.format(e))
    return
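The filter above simply counts True flags across the eleven SUD columns of each row and keeps patients with two or more. A minimal sketch of that check (hypothetical rows, not real patient data):

```python
def has_two_or_more_suds(row):
    """row = (patientid, flag1, ..., flag11); True booleans sum as 1 each."""
    return sum(row[1:12]) >= 2

rows = [
    (101, True, False, False, False, False, False, False, False, False, False, False),
    (102, True, True,  False, False, False, False, False, False, False, False, False),
]
flagged = [r[0] for r in rows if has_two_or_more_suds(r)]
```

Only patient 102, with two True flags, passes the threshold.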
@lD.log(logBase + '.createDF_allRaces_anySUD')
def createDF_allRaces_anySUD(logger):
    '''Creates the dataframe for the total sample, dependent variable = any SUD

    This function creates a dataframe for the total sample, where the
    dependent variable is any SUD and the independent variables are:
    race, age, sex and setting.

    Decorators:
        lD.log

    Arguments:
        logger {logging.Logger} -- logs error information
    '''
    try:
        query = '''
        SELECT
            t2.sud,
            t1.race,
            t1.age,
            t1.sex,
            t1.visit_type
        FROM
            sarah.test2 t1
        INNER JOIN
            sarah.test3 t2
        ON
            t1.patientid = t2.patientid
        WHERE
            t1.age BETWEEN 12 AND 100
        '''
        data = pgIO.getAllData(query)
        sud_data = [d[0] for d in data]
        race_data = [d[1] for d in data]
        age_data = [d[2] for d in data]
        sex_data = [d[3] for d in data]
        setting_data = [d[4] for d in data]
        d = {'sud': sud_data, 'race': race_data, 'age': age_data, 'sex': sex_data, 'setting': setting_data}
        main = pd.DataFrame(data=d)
        df = main.copy()
        # change the sud column to binary, dummify the other columns
        df.replace({False: 0, True: 1}, inplace=True)
        dummy_races = pd.get_dummies(main['race'])
        df = df[['sud']].join(dummy_races.loc[:, 'MR':])
        main.replace(to_replace=list(range(12, 18)), value="12-17", inplace=True)
        main.replace(to_replace=list(range(18, 35)), value="18-34", inplace=True)
        main.replace(to_replace=list(range(35, 50)), value="35-49", inplace=True)
        main.replace(to_replace=list(range(50, 100)), value="50+", inplace=True)
        dummy_ages = pd.get_dummies(main['age'])
        df = df[['sud', 'MR', 'NHPI']].join(dummy_ages.loc[:, :'35-49'])
        dummy_sexes = pd.get_dummies(main['sex'])
        df = df[['sud', 'MR', 'NHPI', '12-17', '18-34', '35-49']].join(dummy_sexes.loc[:, 'M':])
        dummy_setting = pd.get_dummies(main['setting'])
        df = df[['sud', 'MR', 'NHPI', '12-17', '18-34', '35-49', 'M']].join(dummy_setting.loc[:, :'Hospital'])
        df['intercept'] = 1.0
    except Exception as e:
        logger.error('createDF_allRaces_anySUD failed because of {}'.format(e))
    return df
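The dummy-coding above one-hot encodes each categorical column with pd.get_dummies and then keeps all but one level, so the dropped level acts as the reference category in the regression. A pure-Python sketch of the same idea (toy categories, illustrative only):

```python
def dummify(values, reference):
    """One-hot encode `values`, dropping `reference` as the baseline level."""
    levels = sorted(set(values))
    kept = [lv for lv in levels if lv != reference]
    return [{lv: int(v == lv) for lv in kept} for v in values]

races = ["AA", "MR", "NHPI", "AA"]
coded = dummify(races, reference="AA")  # AA becomes the implicit baseline
```

A row of all zeros then denotes the reference group, which is why the dataframe above keeps only the 'MR' and 'NHPI' race columns.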
@lD.log(logBase + '.createDF_allRaces_morethan2SUD')
def createDF_allRaces_morethan2SUD(logger):
    '''Creates dataframe for total sample, dependent variable = more than 2 sud

    This function creates a dataframe for the total sample, where the
    dependent variable is >=2 sud and the independent variables are:
    race, age, sex and setting.

    Decorators:
        lD.log

    Arguments:
        logger {logging.Logger} -- logs error information
    '''
    df = None  # returned unchanged if the query or reshaping fails
    try:
        query = '''
        SELECT
            t2.morethan2sud,
            t1.race,
            t1.age,
            t1.sex,
            t1.visit_type
        FROM
            sarah.test2 t1
        INNER JOIN
            sarah.test4 t2
        ON
            t1.patientid = t2.patientid
        WHERE
            t1.age BETWEEN 12 AND 100
        '''
        data = pgIO.getAllData(query)
        sud_data = [d[0] for d in data]
        race_data = [d[1] for d in data]
        age_data = [d[2] for d in data]
        sex_data = [d[3] for d in data]
        setting_data = [d[4] for d in data]
        d = {'sud': sud_data, 'race': race_data, 'age': age_data, 'sex': sex_data, 'setting': setting_data}
        main = pd.DataFrame(data=d)
        df = main.copy()
        # Change sud column to binary, dummify the other columns
        df.replace({False: 0, True: 1}, inplace=True)
        dummy_races = pd.get_dummies(main['race'])
        df = df[['sud']].join(dummy_races.loc[:, 'MR':])
        main.replace(to_replace=list(range(12, 18)), value="12-17", inplace=True)
        main.replace(to_replace=list(range(18, 35)), value="18-34", inplace=True)
        main.replace(to_replace=list(range(35, 50)), value="35-49", inplace=True)
        main.replace(to_replace=list(range(50, 101)), value="50+", inplace=True)  # 50-100 inclusive; the query allows age 100
        dummy_ages = pd.get_dummies(main['age'])
        df = df[['sud', 'MR', 'NHPI']].join(dummy_ages.loc[:, :'35-49'])
        dummy_sexes = pd.get_dummies(main['sex'])
        df = df[['sud', 'MR', 'NHPI', '12-17', '18-34', '35-49']].join(dummy_sexes.loc[:, 'M':])
        dummy_setting = pd.get_dummies(main['setting'])
        df = df[['sud', 'MR', 'NHPI', '12-17', '18-34', '35-49', 'M']].join(dummy_setting.loc[:, :'Hospital'])
        df['intercept'] = 1.0
    except Exception as e:
        logger.error('createDF_allRaces_morethan2SUD failed because of {}'.format(e))
    return df
@lD.log(logBase + '.createDF_byRace_anySUD')
def createDF_byRace_anySUD(logger, race):
    '''Creates dataframe for a sample from a specified race,
    dependent variable = any sud

    This function creates a dataframe for a sample from a specified race,
    where the dependent variable is any sud and the independent variables
    are: age, sex and setting.

    Decorators:
        lD.log

    Arguments:
        logger {logging.Logger} -- logs error information
        race {str} -- 'AA', 'NHPI', or 'MR'
    '''
    df = None  # returned unchanged if the query or reshaping fails
    try:
        query = SQL('''
        SELECT
            t2.sud,
            t1.age,
            t1.sex,
            t1.visit_type
        FROM
            sarah.test2 t1
        INNER JOIN
            sarah.test3 t2
        ON
            t1.patientid = t2.patientid
        WHERE
            t1.age BETWEEN 12 AND 100
        AND
            t1.race = {}
        ''').format(
            Literal(race)
        )
        data = pgIO.getAllData(query)
        sud_data = [d[0] for d in data]
        age_data = [d[1] for d in data]
        sex_data = [d[2] for d in data]
        setting_data = [d[3] for d in data]
        d = {'sud': sud_data, 'age': age_data, 'sex': sex_data, 'setting': setting_data}
        main = pd.DataFrame(data=d)
        df = main.copy()
        # Change sud column to binary, dummify the other columns
        df.replace({False: 0, True: 1}, inplace=True)
        main.replace(to_replace=list(range(12, 18)), value="12-17", inplace=True)
        main.replace(to_replace=list(range(18, 35)), value="18-34", inplace=True)
        main.replace(to_replace=list(range(35, 50)), value="35-49", inplace=True)
        main.replace(to_replace=list(range(50, 101)), value="50+", inplace=True)  # 50-100 inclusive; the query allows age 100
        dummy_ages = pd.get_dummies(main['age'])
        df = df[['sud']].join(dummy_ages.loc[:, :'35-49'])
        dummy_sexes = pd.get_dummies(main['sex'])
        df = df[['sud', '12-17', '18-34', '35-49']].join(dummy_sexes.loc[:, 'M':])
        dummy_setting = pd.get_dummies(main['setting'])
        df = df[['sud', '12-17', '18-34', '35-49', 'M']].join(dummy_setting.loc[:, :'Hospital'])
        df['intercept'] = 1.0
    except Exception as e:
        logger.error('createDF_byRace_anySUD failed because of {}'.format(e))
    return df
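The four chained `replace` calls that bucket ages do a list-membership substitution per bin; `pd.cut` expresses the same binning in a single vectorized call. A small sketch (bin edges mirror the ranges above, inclusive of 100; the ages are made up):

```python
import pandas as pd

ages = pd.Series([12, 17, 18, 34, 35, 49, 50, 100])

# Left-closed, right-open bins matching range(12, 18), range(18, 35),
# range(35, 50), and 50+ from the code above.
age_group = pd.cut(
    ages,
    bins=[12, 18, 35, 50, 101],
    right=False,                       # [12, 18), [18, 35), ...
    labels=["12-17", "18-34", "35-49", "50+"],
)

print(age_group.tolist())
# ['12-17', '12-17', '18-34', '18-34', '35-49', '35-49', '50+', '50+']
```

Besides being faster, `pd.cut` returns an ordered categorical, so the dummy columns produced by `pd.get_dummies` come out in bin order rather than lexicographic order.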
@lD.log(logBase + '.createDF_byRace_morethan2SUD')
def createDF_byRace_morethan2SUD(logger, race):
    '''Creates dataframe for a sample from a specified race,
    dependent variable = more than 2 sud

    This function creates a dataframe for a sample from a specified race,
    where the dependent variable is >=2 sud and the independent variables
    are: age, sex and setting.

    Decorators:
        lD.log

    Arguments:
        logger {logging.Logger} -- logs error information
        race {str} -- 'AA', 'NHPI', or 'MR'
    '''
    df = None  # returned unchanged if the query or reshaping fails
    try:
        query = SQL('''
        SELECT
            t2.morethan2sud,
            t1.age,
            t1.sex,
            t1.visit_type
        FROM
            sarah.test2 t1
        INNER JOIN
            sarah.test4 t2
        ON
            t1.patientid = t2.patientid
        WHERE
            t1.age BETWEEN 12 AND 100
        AND
            t1.race = {}
        ''').format(
            Literal(race)
        )
        data = pgIO.getAllData(query)
        sud_data = [d[0] for d in data]
        age_data = [d[1] for d in data]
        sex_data = [d[2] for d in data]
        setting_data = [d[3] for d in data]
        d = {'sud': sud_data, 'age': age_data, 'sex': sex_data, 'setting': setting_data}
        main = pd.DataFrame(data=d)
        df = main.copy()
        # Change sud column to binary, dummify the other columns
        df.replace({False: 0, True: 1}, inplace=True)
        main.replace(to_replace=list(range(12, 18)), value="12-17", inplace=True)
        main.replace(to_replace=list(range(18, 35)), value="18-34", inplace=True)
        main.replace(to_replace=list(range(35, 50)), value="35-49", inplace=True)
        main.replace(to_replace=list(range(50, 101)), value="50+", inplace=True)  # 50-100 inclusive; the query allows age 100
        dummy_ages = pd.get_dummies(main['age'])
        df = df[['sud']].join(dummy_ages.loc[:, :'35-49'])
        dummy_sexes = pd.get_dummies(main['sex'])
        df = df[['sud', '12-17', '18-34', '35-49']].join(dummy_sexes.loc[:, 'M':])
        dummy_setting = pd.get_dummies(main['setting'])
        df = df[['sud', '12-17', '18-34', '35-49', 'M']].join(dummy_setting.loc[:, :'Hospital'])
        df['intercept'] = 1.0
    except Exception as e:
        logger.error('createDF_byRace_morethan2SUD failed because of {}'.format(e))
    return df
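The `SQL('''...''').format(Literal(race))` pattern above comes from psycopg2's `sql` module: the race value is composed into the query by the driver rather than by Python string formatting, which prevents SQL injection. The same idea with only the standard library, using bound parameters against an in-memory SQLite table (table name and rows are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (patientid INTEGER, race TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO patients VALUES (?, ?, ?)",
    [(1, "AA", 30), (2, "NHPI", 45), (3, "MR", 16)],
)

race = "NHPI"
# The driver binds the value; no quoting or injection concerns.
rows = conn.execute(
    "SELECT patientid FROM patients WHERE age BETWEEN 12 AND 100 AND race = ?",
    (race,),
).fetchall()

print(rows)  # [(2,)]
```

Never interpolate a user-supplied value into the query text with `%` or f-strings; both psycopg2's `Literal` and SQLite's `?` placeholders exist precisely to avoid that.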
from .hooks import DbtCliHook
from .operators import DbtRunOperator, DbtTestOperator, DbtDocsGenerateOperator
from cashed.cache import cached
from .filestack_filelink import Filelink
from .filestack_client import Client
from .filestack_transform import Transform
from .filestack_security import security
from .filestack_audiovisual import AudioVisual
from .decoder import Decoder
from .residual_decoder import ResidualDecoder
#!/usr/bin/env python3
from .model import main
from office365.sharepoint.fields.field import Field
class FieldCurrency(Field):
    pass
x = 9
def f():
    return x
from .allcasts import AllCasts
from utils import utils
from django.http import HttpResponse
from django.shortcuts import render_to_response
def main(request):
    return HttpResponse("index.html")
# -*- coding: utf-8 -*-
from . import StoreModel
from . import ProductModel
from models.mem_mnist import MEMMNIST
from models.mem_cifar10 import MEMCIFAR10
import sys
import os
sys.path.insert(0, "/var/www/BeeLogger")
os.chdir("/var/www/BeeLogger")
from app import app as application
from DI2.DI2 import *
from econopy.constants import VAT
def deflate_vat(amount, vat=VAT):
    return amount / (1 + vat)
import json
from unittest.mock import patch
import pytest
import requests
from click.testing import CliRunner
from requests import RequestException
from fuzzing_cli.cli import cli
from fuzzing_cli.fuzz.exceptions import RequestError
from fuzzing_cli.fuzz.faas import FaasClient
from fuzzing_cli.fuzz.rpc import RPCClient
from .common import get_test_case, write_config
FAAS_URL = "http://localhost:9899"
ORIGINAL_SOL_CODE = "original sol code here"
def test_fuzz_no_build_dir(tmp_path):
    runner = CliRunner()
    write_config(not_include=["build_directory"])

    result = runner.invoke(cli, ["run", "contracts"])

    assert (
        "Build directory not provided. You need to set the `build_directory`"
        in result.output
    )
    assert result.exit_code != 0


def test_fuzz_no_deployed_address(tmp_path):
    runner = CliRunner()
    write_config(not_include=["deployed_contract_address"])

    result = runner.invoke(cli, ["run", "contracts"])

    assert (
        "Deployed contract address not provided. You need to provide an address"
        in result.output
    )
    assert result.exit_code != 0


def test_fuzz_no_target(tmp_path):
    runner = CliRunner()
    write_config(not_include=["targets"])

    result = runner.invoke(cli, ["run"])

    assert "Error: Target not provided." in result.output
    assert result.exit_code != 0
def test_fuzz_no_contract_at_address(tmp_path, brownie_project):
    write_config(base_path=str(tmp_path))

    with patch.object(
        RPCClient, "contract_exists"
    ) as contract_exists_mock, patch.object(
        RPCClient, "get_all_blocks"
    ) as get_all_blocks_mock:
        get_all_blocks_mock.return_value = get_test_case(
            "testdata/ganache-all-blocks.json"
        )
        contract_exists_mock.return_value = False

        runner = CliRunner()
        result = runner.invoke(cli, ["run", f"{tmp_path}/contracts"])

    assert "Error: Unable to find a contract deployed" in result.output
    assert result.exit_code != 0


def test_faas_not_running(tmp_path, brownie_project):
    write_config(base_path=str(tmp_path))

    with patch.object(
        RPCClient, "contract_exists"
    ) as contract_exists_mock, patch.object(
        RPCClient, "get_all_blocks"
    ) as get_all_blocks_mock, patch.object(
        FaasClient, "start_faas_campaign"
    ) as start_faas_campaign_mock:
        get_all_blocks_mock.return_value = get_test_case(
            "testdata/ganache-all-blocks.json"
        )
        contract_exists_mock.return_value = True
        start_faas_campaign_mock.side_effect = RequestError(
            "Error starting FaaS campaign."
        )

        runner = CliRunner()
        result = runner.invoke(cli, ["run", f"{tmp_path}/contracts"])

    assert "RequestError: Error starting FaaS campaign" in result.output
    assert result.exit_code != 0


def test_faas_target_config_file(tmp_path, brownie_project):
    """Here we reuse the test_faas_not_running logic to check that the target is being read
    from the config file. This is possible because the faas-not-running error is triggered
    after the target check. If the target was not available, a different error would be
    thrown and the test would fail."""
    write_config(base_path=str(tmp_path))

    with patch.object(
        RPCClient, "contract_exists"
    ) as contract_exists_mock, patch.object(
        RPCClient, "get_all_blocks"
    ) as get_all_blocks_mock, patch.object(
        FaasClient, "start_faas_campaign"
    ) as start_faas_campaign_mock:
        get_all_blocks_mock.return_value = get_test_case(
            "testdata/ganache-all-blocks.json"
        )
        contract_exists_mock.return_value = True
        start_faas_campaign_mock.side_effect = RequestError(
            "Error starting FaaS campaign."
        )

        runner = CliRunner()
        # we call the run command without the target parameter.
        result = runner.invoke(cli, ["run"])

    assert "RequestError: Error starting FaaS campaign." in result.output
    assert result.exit_code != 0


def test_rpc_not_running(tmp_path):
    write_config(base_path=str(tmp_path))

    with patch.object(requests, "request") as requests_mock:
        requests_mock.side_effect = RequestException()

        runner = CliRunner()
        result = runner.invoke(cli, ["run", f"{tmp_path}/contracts"])

    assert "HTTP error calling RPC method eth_getCode with parameters" in result.output
    assert result.exit_code != 0
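The failure-path tests above all rely on `patch.object(..., side_effect=...)` to make a mocked method raise instead of return. Stripped of the CLI machinery, the mechanism looks like this — the `Client` class here is a stand-in for illustration, not part of the project:

```python
from unittest.mock import patch


class Client:
    def ping(self):
        return "pong"


with patch.object(Client, "ping") as ping_mock:
    # A side_effect that is an exception instance is raised on each call.
    ping_mock.side_effect = ConnectionError("service down")
    try:
        Client().ping()
    except ConnectionError as e:
        caught = str(e)

print(caught)           # service down
print(Client().ping())  # pong  -- the patch is undone outside the block
```

Using `patch.object` as a context manager guarantees the original attribute is restored even if the test body raises, which is why each test here keeps the invocation inside the `with` block.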
def test_fuzz_run(tmp_path, brownie_project):
    write_config(base_path=str(tmp_path))

    with patch.object(
        RPCClient, "contract_exists"
    ) as contract_exists_mock, patch.object(
        RPCClient, "get_all_blocks"
    ) as get_all_blocks_mock, patch.object(
        FaasClient, "start_faas_campaign"
    ) as start_faas_campaign_mock:
        get_all_blocks_mock.return_value = get_test_case(
            "testdata/ganache-all-blocks.json"
        )
        contract_exists_mock.return_value = True
        campaign_id = "560ba03a-8744-4da6-aeaa-a62568ccbf44"
        start_faas_campaign_mock.return_value = campaign_id

        runner = CliRunner()
        result = runner.invoke(cli, ["run", f"{tmp_path}/contracts"])

    contract_exists_mock.assert_called_with(
        "0x7277646075fa72737e1F6114654C5d9949a67dF2"
    )
    contract_exists_mock.assert_called_once()
    get_all_blocks_mock.assert_called_once()
    start_faas_campaign_mock.assert_called_once()
    called_with = start_faas_campaign_mock.call_args

    assert (
        f"You can view campaign here: {FAAS_URL}/campaigns/{campaign_id}"
        in result.output
    )

    request_payload = json.dumps(called_with[0])
    keywords = [
        "parameters",
        "name",
        "corpus",
        "sources",
        "contracts",
        "address-under-test",
        "source",
        "fileIndex",
        "sourcePaths",
        "deployedSourceMap",
        "mainSourceFile",
        "contractName",
        "bytecode",
        "deployedBytecode",
        "sourceMap",
        "deployedSourceMap",
    ]
    for keyword in keywords:
        assert keyword in request_payload
    assert result.exit_code == 0


def test_fuzz_run_map_to_original_source(tmp_path, brownie_project):
    write_config(base_path=str(tmp_path))

    with patch.object(
        RPCClient, "contract_exists"
    ) as contract_exists_mock, patch.object(
        RPCClient, "get_all_blocks"
    ) as get_all_blocks_mock, patch.object(
        FaasClient, "start_faas_campaign"
    ) as start_faas_campaign_mock:
        get_all_blocks_mock.return_value = get_test_case(
            "testdata/ganache-all-blocks.json"
        )
        contract_exists_mock.return_value = True
        campaign_id = "560ba03a-8744-4da6-aeaa-a62568ccbf44"
        start_faas_campaign_mock.return_value = campaign_id

        runner = CliRunner()
        result = runner.invoke(
            cli, ["run", "--map-to-original-source", f"{tmp_path}/contracts"]
        )

    contract_exists_mock.assert_called_with(
        "0x7277646075fa72737e1F6114654C5d9949a67dF2"
    )
    contract_exists_mock.assert_called_once()
    get_all_blocks_mock.assert_called_once()
    start_faas_campaign_mock.assert_called_once()
    called_with = start_faas_campaign_mock.call_args

    assert (
        f"You can view campaign here: {FAAS_URL}/campaigns/{campaign_id}"
        in result.output
    )

    request_payload = json.dumps(called_with[0])
    assert ORIGINAL_SOL_CODE in request_payload
    keywords = [
        "parameters",
        "name",
        "corpus",
        "sources",
        "contracts",
        "address-under-test",
        "source",
        "fileIndex",
        "sourcePaths",
        "deployedSourceMap",
        "mainSourceFile",
        "contractName",
        "bytecode",
        "deployedBytecode",
        "sourceMap",
        "deployedSourceMap",
    ]
    for keyword in keywords:
        assert keyword in request_payload
    assert result.exit_code == 0
@pytest.mark.parametrize("keyword", ("run", "disarm", "arm", "run"))
def test_fuzz_subcommands_present(keyword):
    runner = CliRunner()
    result = runner.invoke(cli, ["--help"])
    assert keyword in result.output


@patch("fuzzing_cli.fuzz.scribble.ScribbleMixin.instrument_solc_in_place")
def test_fuzz_arm(mock, tmp_path, brownie_project):
    runner = CliRunner()
    result = runner.invoke(cli, ["arm", f"{tmp_path}/contracts/sample.sol"])

    mock.assert_called()
    mock.assert_called_with(
        file_list=(f"{tmp_path}/contracts/sample.sol",),
        scribble_path="scribble",
        remappings=[],
        solc_version=None,
    )
    assert result.exit_code == 0


@patch("fuzzing_cli.fuzz.scribble.ScribbleMixin.disarm_solc_in_place")
def test_fuzz_disarm(mock, tmp_path, brownie_project):
    runner = CliRunner()
    result = runner.invoke(cli, ["disarm", f"{tmp_path}/contracts/sample.sol"])

    mock.assert_called()
    mock.assert_called_with(
        file_list=(f"{tmp_path}/contracts/sample.sol",),
        scribble_path="scribble",
        remappings=[],
        solc_version=None,
    )
    assert result.exit_code == 0
from .decode import DecodeOptions
# -*- coding: utf-8 -*-
"""
This module
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
#--- Property declaration ---
@attr.s
class JobDefinitionAuthorizationConfig(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.AuthorizationConfig"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html
Property Document:
- ``p_AccessPointId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html#cfn-batch-jobdefinition-authorizationconfig-accesspointid
- ``p_Iam``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html#cfn-batch-jobdefinition-authorizationconfig-iam
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.AuthorizationConfig"
p_AccessPointId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "AccessPointId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html#cfn-batch-jobdefinition-authorizationconfig-accesspointid"""
p_Iam: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Iam"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html#cfn-batch-jobdefinition-authorizationconfig-iam"""
@attr.s
class JobDefinitionResourceRequirement(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.ResourceRequirement"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html
Property Document:
- ``p_Type``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html#cfn-batch-jobdefinition-resourcerequirement-type
- ``p_Value``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html#cfn-batch-jobdefinition-resourcerequirement-value
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.ResourceRequirement"
p_Type: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Type"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html#cfn-batch-jobdefinition-resourcerequirement-type"""
p_Value: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Value"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html#cfn-batch-jobdefinition-resourcerequirement-value"""
@attr.s
class JobDefinitionEnvironment(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.Environment"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html
Property Document:
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html#cfn-batch-jobdefinition-environment-name
- ``p_Value``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html#cfn-batch-jobdefinition-environment-value
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.Environment"
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html#cfn-batch-jobdefinition-environment-name"""
p_Value: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Value"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html#cfn-batch-jobdefinition-environment-value"""
@attr.s
class JobDefinitionVolumesHost(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.VolumesHost"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumeshost.html
Property Document:
- ``p_SourcePath``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumeshost.html#cfn-batch-jobdefinition-volumeshost-sourcepath
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.VolumesHost"
p_SourcePath: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "SourcePath"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumeshost.html#cfn-batch-jobdefinition-volumeshost-sourcepath"""
@attr.s
class JobQueueComputeEnvironmentOrder(Property):
"""
AWS Object Type = "AWS::Batch::JobQueue.ComputeEnvironmentOrder"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html
Property Document:
- ``rp_ComputeEnvironment``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html#cfn-batch-jobqueue-computeenvironmentorder-computeenvironment
- ``rp_Order``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html#cfn-batch-jobqueue-computeenvironmentorder-order
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobQueue.ComputeEnvironmentOrder"
rp_ComputeEnvironment: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ComputeEnvironment"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html#cfn-batch-jobqueue-computeenvironmentorder-computeenvironment"""
rp_Order: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "Order"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html#cfn-batch-jobqueue-computeenvironmentorder-order"""
@attr.s
class JobDefinitionSecret(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.Secret"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html
Property Document:
- ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html#cfn-batch-jobdefinition-secret-name
- ``rp_ValueFrom``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html#cfn-batch-jobdefinition-secret-valuefrom
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.Secret"
rp_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html#cfn-batch-jobdefinition-secret-name"""
rp_ValueFrom: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ValueFrom"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html#cfn-batch-jobdefinition-secret-valuefrom"""
@attr.s
class JobDefinitionNetworkConfiguration(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.NetworkConfiguration"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-networkconfiguration.html
Property Document:
- ``p_AssignPublicIp``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-networkconfiguration.html#cfn-batch-jobdefinition-containerproperties-networkconfiguration-assignpublicip
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.NetworkConfiguration"
p_AssignPublicIp: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "AssignPublicIp"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-networkconfiguration.html#cfn-batch-jobdefinition-containerproperties-networkconfiguration-assignpublicip"""
@attr.s
class JobDefinitionLogConfiguration(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.LogConfiguration"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html
Property Document:
- ``rp_LogDriver``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html#cfn-batch-jobdefinition-containerproperties-logconfiguration-logdriver
- ``p_Options``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html#cfn-batch-jobdefinition-containerproperties-logconfiguration-options
- ``p_SecretOptions``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html#cfn-batch-jobdefinition-containerproperties-logconfiguration-secretoptions
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.LogConfiguration"
rp_LogDriver: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "LogDriver"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html#cfn-batch-jobdefinition-containerproperties-logconfiguration-logdriver"""
p_Options: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Options"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html#cfn-batch-jobdefinition-containerproperties-logconfiguration-options"""
p_SecretOptions: typing.List[typing.Union['JobDefinitionSecret', dict]] = attr.ib(
default=None,
converter=JobDefinitionSecret.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionSecret), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "SecretOptions"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-logconfiguration.html#cfn-batch-jobdefinition-containerproperties-logconfiguration-secretoptions"""
@attr.s
class ComputeEnvironmentLaunchTemplateSpecification(Property):
"""
AWS Object Type = "AWS::Batch::ComputeEnvironment.LaunchTemplateSpecification"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html
Property Document:
- ``p_LaunchTemplateId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-launchtemplateid
- ``p_LaunchTemplateName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-launchtemplatename
- ``p_Version``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-version
"""
AWS_OBJECT_TYPE = "AWS::Batch::ComputeEnvironment.LaunchTemplateSpecification"
p_LaunchTemplateId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "LaunchTemplateId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-launchtemplateid"""
p_LaunchTemplateName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "LaunchTemplateName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-launchtemplatename"""
p_Version: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Version"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-version"""
@attr.s
class JobDefinitionMountPoints(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.MountPoints"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html
Property Document:
- ``p_ContainerPath``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-containerpath
- ``p_ReadOnly``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-readonly
- ``p_SourceVolume``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-sourcevolume
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.MountPoints"
p_ContainerPath: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ContainerPath"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-containerpath"""
p_ReadOnly: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "ReadOnly"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-readonly"""
p_SourceVolume: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "SourceVolume"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-sourcevolume"""
@attr.s
class JobDefinitionEvaluateOnExit(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.EvaluateOnExit"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html
Property Document:
- ``rp_Action``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-action
- ``p_OnExitCode``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onexitcode
- ``p_OnReason``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onreason
- ``p_OnStatusReason``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onstatusreason
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.EvaluateOnExit"
rp_Action: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Action"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-action"""
p_OnExitCode: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "OnExitCode"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onexitcode"""
p_OnReason: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "OnReason"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onreason"""
p_OnStatusReason: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "OnStatusReason"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onstatusreason"""
@attr.s
class JobDefinitionUlimit(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.Ulimit"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html
Property Document:
- ``rp_HardLimit``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-hardlimit
- ``rp_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-name
- ``rp_SoftLimit``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-softlimit
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.Ulimit"
rp_HardLimit: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "HardLimit"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-hardlimit"""
rp_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-name"""
rp_SoftLimit: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "SoftLimit"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-softlimit"""
@attr.s
class JobDefinitionFargatePlatformConfiguration(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.FargatePlatformConfiguration"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-fargateplatformconfiguration.html
Property Document:
- ``p_PlatformVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-fargateplatformconfiguration.html#cfn-batch-jobdefinition-containerproperties-fargateplatformconfiguration-platformversion
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.FargatePlatformConfiguration"
p_PlatformVersion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PlatformVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-fargateplatformconfiguration.html#cfn-batch-jobdefinition-containerproperties-fargateplatformconfiguration-platformversion"""
@attr.s
class JobDefinitionTimeout(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.Timeout"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-timeout.html
Property Document:
- ``p_AttemptDurationSeconds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-timeout.html#cfn-batch-jobdefinition-timeout-attemptdurationseconds
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.Timeout"
p_AttemptDurationSeconds: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "AttemptDurationSeconds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-timeout.html#cfn-batch-jobdefinition-timeout-attemptdurationseconds"""
@attr.s
class JobDefinitionTmpfs(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.Tmpfs"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html
Property Document:
- ``rp_ContainerPath``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-containerpath
- ``rp_Size``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-size
- ``p_MountOptions``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-mountoptions
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.Tmpfs"
rp_ContainerPath: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ContainerPath"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-containerpath"""
rp_Size: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "Size"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-size"""
p_MountOptions: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "MountOptions"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-mountoptions"""
@attr.s
class JobDefinitionEfsVolumeConfiguration(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.EfsVolumeConfiguration"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html
Property Document:
- ``rp_FileSystemId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-filesystemid
- ``p_AuthorizationConfig``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-authorizationconfig
- ``p_RootDirectory``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-rootdirectory
- ``p_TransitEncryption``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-transitencryption
- ``p_TransitEncryptionPort``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-transitencryptionport
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.EfsVolumeConfiguration"
rp_FileSystemId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "FileSystemId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-filesystemid"""
p_AuthorizationConfig: typing.Union['JobDefinitionAuthorizationConfig', dict] = attr.ib(
default=None,
converter=JobDefinitionAuthorizationConfig.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionAuthorizationConfig)),
metadata={AttrMeta.PROPERTY_NAME: "AuthorizationConfig"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-authorizationconfig"""
p_RootDirectory: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RootDirectory"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-rootdirectory"""
p_TransitEncryption: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "TransitEncryption"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-transitencryption"""
p_TransitEncryptionPort: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "TransitEncryptionPort"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-transitencryptionport"""
@attr.s
class JobDefinitionDevice(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.Device"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html
Property Document:
- ``p_ContainerPath``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-containerpath
- ``p_HostPath``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-hostpath
- ``p_Permissions``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-permissions
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.Device"
p_ContainerPath: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ContainerPath"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-containerpath"""
p_HostPath: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "HostPath"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-hostpath"""
p_Permissions: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Permissions"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-permissions"""
@attr.s
class ComputeEnvironmentEc2ConfigurationObject(Property):
"""
AWS Object Type = "AWS::Batch::ComputeEnvironment.Ec2ConfigurationObject"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html
Property Document:
- ``rp_ImageType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imagetype
- ``p_ImageIdOverride``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imageidoverride
"""
AWS_OBJECT_TYPE = "AWS::Batch::ComputeEnvironment.Ec2ConfigurationObject"
rp_ImageType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "ImageType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imagetype"""
p_ImageIdOverride: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ImageIdOverride"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imageidoverride"""
@attr.s
class JobDefinitionVolumes(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.Volumes"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html
Property Document:
- ``p_EfsVolumeConfiguration``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-efsvolumeconfiguration
- ``p_Host``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-host
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-name
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.Volumes"
p_EfsVolumeConfiguration: typing.Union['JobDefinitionEfsVolumeConfiguration', dict] = attr.ib(
default=None,
converter=JobDefinitionEfsVolumeConfiguration.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionEfsVolumeConfiguration)),
metadata={AttrMeta.PROPERTY_NAME: "EfsVolumeConfiguration"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-efsvolumeconfiguration"""
p_Host: typing.Union['JobDefinitionVolumesHost', dict] = attr.ib(
default=None,
converter=JobDefinitionVolumesHost.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionVolumesHost)),
metadata={AttrMeta.PROPERTY_NAME: "Host"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-host"""
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-name"""
@attr.s
class ComputeEnvironmentComputeResources(Property):
"""
AWS Object Type = "AWS::Batch::ComputeEnvironment.ComputeResources"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html
Property Document:
- ``rp_MaxvCpus``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus
- ``rp_Subnets``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets
- ``rp_Type``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type
- ``p_AllocationStrategy``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy
- ``p_BidPercentage``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage
- ``p_DesiredvCpus``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-desiredvcpus
- ``p_Ec2Configuration``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration
- ``p_Ec2KeyPair``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair
- ``p_ImageId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid
- ``p_InstanceRole``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole
- ``p_InstanceTypes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes
- ``p_LaunchTemplate``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate
- ``p_MinvCpus``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus
- ``p_PlacementGroup``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup
- ``p_SecurityGroupIds``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids
- ``p_SpotIamFleetRole``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-spotiamfleetrole
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags
"""
AWS_OBJECT_TYPE = "AWS::Batch::ComputeEnvironment.ComputeResources"
rp_MaxvCpus: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "MaxvCpus"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus"""
rp_Subnets: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "Subnets"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets"""
rp_Type: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Type"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type"""
p_AllocationStrategy: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "AllocationStrategy"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy"""
p_BidPercentage: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "BidPercentage"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage"""
p_DesiredvCpus: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "DesiredvCpus"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-desiredvcpus"""
p_Ec2Configuration: typing.List[typing.Union['ComputeEnvironmentEc2ConfigurationObject', dict]] = attr.ib(
default=None,
converter=ComputeEnvironmentEc2ConfigurationObject.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(ComputeEnvironmentEc2ConfigurationObject), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Ec2Configuration"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration"""
p_Ec2KeyPair: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Ec2KeyPair"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair"""
p_ImageId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ImageId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid"""
p_InstanceRole: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "InstanceRole"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole"""
p_InstanceTypes: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "InstanceTypes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes"""
p_LaunchTemplate: typing.Union['ComputeEnvironmentLaunchTemplateSpecification', dict] = attr.ib(
default=None,
converter=ComputeEnvironmentLaunchTemplateSpecification.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(ComputeEnvironmentLaunchTemplateSpecification)),
metadata={AttrMeta.PROPERTY_NAME: "LaunchTemplate"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate"""
p_MinvCpus: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "MinvCpus"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus"""
p_PlacementGroup: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PlacementGroup"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup"""
p_SecurityGroupIds: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "SecurityGroupIds"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids"""
p_SpotIamFleetRole: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "SpotIamFleetRole"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-spotiamfleetrole"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags"""
@attr.s
class JobDefinitionRetryStrategy(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.RetryStrategy"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html
Property Document:
- ``p_Attempts``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html#cfn-batch-jobdefinition-retrystrategy-attempts
- ``p_EvaluateOnExit``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html#cfn-batch-jobdefinition-retrystrategy-evaluateonexit
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.RetryStrategy"
p_Attempts: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "Attempts"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html#cfn-batch-jobdefinition-retrystrategy-attempts"""
p_EvaluateOnExit: typing.List[typing.Union['JobDefinitionEvaluateOnExit', dict]] = attr.ib(
default=None,
converter=JobDefinitionEvaluateOnExit.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionEvaluateOnExit), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "EvaluateOnExit"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html#cfn-batch-jobdefinition-retrystrategy-evaluateonexit"""
@attr.s
class JobDefinitionLinuxParameters(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.LinuxParameters"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html
Property Document:
- ``p_Devices``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-devices
- ``p_InitProcessEnabled``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-initprocessenabled
- ``p_MaxSwap``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-maxswap
- ``p_SharedMemorySize``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-sharedmemorysize
- ``p_Swappiness``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-swappiness
- ``p_Tmpfs``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-tmpfs
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.LinuxParameters"
p_Devices: typing.List[typing.Union['JobDefinitionDevice', dict]] = attr.ib(
default=None,
converter=JobDefinitionDevice.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionDevice), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Devices"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-devices"""
p_InitProcessEnabled: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "InitProcessEnabled"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-initprocessenabled"""
p_MaxSwap: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "MaxSwap"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-maxswap"""
p_SharedMemorySize: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "SharedMemorySize"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-sharedmemorysize"""
p_Swappiness: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "Swappiness"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-swappiness"""
p_Tmpfs: typing.List[typing.Union['JobDefinitionTmpfs', dict]] = attr.ib(
default=None,
converter=JobDefinitionTmpfs.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionTmpfs), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tmpfs"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties-linuxparameters.html#cfn-batch-jobdefinition-containerproperties-linuxparameters-tmpfs"""
@attr.s
class JobDefinitionContainerProperties(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.ContainerProperties"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html
Property Document:
- ``rp_Image``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-image
- ``p_Command``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-command
- ``p_Environment``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-environment
- ``p_ExecutionRoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-executionrolearn
- ``p_FargatePlatformConfiguration``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-fargateplatformconfiguration
- ``p_InstanceType``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-instancetype
- ``p_JobRoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-jobrolearn
- ``p_LinuxParameters``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-linuxparameters
- ``p_LogConfiguration``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-logconfiguration
- ``p_Memory``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-memory
- ``p_MountPoints``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-mountpoints
- ``p_NetworkConfiguration``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-networkconfiguration
- ``p_Privileged``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-privileged
- ``p_ReadonlyRootFilesystem``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-readonlyrootfilesystem
- ``p_ResourceRequirements``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-resourcerequirements
- ``p_Secrets``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-secrets
- ``p_Ulimits``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-ulimits
- ``p_User``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-user
- ``p_Vcpus``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-vcpus
- ``p_Volumes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-volumes
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.ContainerProperties"
rp_Image: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Image"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-image"""
p_Command: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Command"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-command"""
p_Environment: typing.List[typing.Union['JobDefinitionEnvironment', dict]] = attr.ib(
default=None,
converter=JobDefinitionEnvironment.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionEnvironment), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Environment"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-environment"""
p_ExecutionRoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ExecutionRoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-executionrolearn"""
p_FargatePlatformConfiguration: typing.Union['JobDefinitionFargatePlatformConfiguration', dict] = attr.ib(
default=None,
converter=JobDefinitionFargatePlatformConfiguration.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionFargatePlatformConfiguration)),
metadata={AttrMeta.PROPERTY_NAME: "FargatePlatformConfiguration"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-fargateplatformconfiguration"""
p_InstanceType: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "InstanceType"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-instancetype"""
p_JobRoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "JobRoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-jobrolearn"""
p_LinuxParameters: typing.Union['JobDefinitionLinuxParameters', dict] = attr.ib(
default=None,
converter=JobDefinitionLinuxParameters.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionLinuxParameters)),
metadata={AttrMeta.PROPERTY_NAME: "LinuxParameters"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-linuxparameters"""
p_LogConfiguration: typing.Union['JobDefinitionLogConfiguration', dict] = attr.ib(
default=None,
converter=JobDefinitionLogConfiguration.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionLogConfiguration)),
metadata={AttrMeta.PROPERTY_NAME: "LogConfiguration"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-logconfiguration"""
p_Memory: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "Memory"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-memory"""
p_MountPoints: typing.List[typing.Union['JobDefinitionMountPoints', dict]] = attr.ib(
default=None,
converter=JobDefinitionMountPoints.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionMountPoints), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "MountPoints"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-mountpoints"""
p_NetworkConfiguration: typing.Union['JobDefinitionNetworkConfiguration', dict] = attr.ib(
default=None,
converter=JobDefinitionNetworkConfiguration.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionNetworkConfiguration)),
metadata={AttrMeta.PROPERTY_NAME: "NetworkConfiguration"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-networkconfiguration"""
p_Privileged: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "Privileged"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-privileged"""
p_ReadonlyRootFilesystem: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "ReadonlyRootFilesystem"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-readonlyrootfilesystem"""
p_ResourceRequirements: typing.List[typing.Union['JobDefinitionResourceRequirement', dict]] = attr.ib(
default=None,
converter=JobDefinitionResourceRequirement.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionResourceRequirement), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "ResourceRequirements"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-resourcerequirements"""
p_Secrets: typing.List[typing.Union['JobDefinitionSecret', dict]] = attr.ib(
default=None,
converter=JobDefinitionSecret.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionSecret), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Secrets"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-secrets"""
p_Ulimits: typing.List[typing.Union['JobDefinitionUlimit', dict]] = attr.ib(
default=None,
converter=JobDefinitionUlimit.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionUlimit), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Ulimits"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-ulimits"""
p_User: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "User"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-user"""
p_Vcpus: int = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(int)),
metadata={AttrMeta.PROPERTY_NAME: "Vcpus"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-vcpus"""
p_Volumes: typing.List[typing.Union['JobDefinitionVolumes', dict]] = attr.ib(
default=None,
converter=JobDefinitionVolumes.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionVolumes), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Volumes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-volumes"""
@attr.s
class JobDefinitionNodeRangeProperty(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.NodeRangeProperty"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html
Property Document:
- ``rp_TargetNodes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html#cfn-batch-jobdefinition-noderangeproperty-targetnodes
- ``p_Container``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html#cfn-batch-jobdefinition-noderangeproperty-container
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.NodeRangeProperty"
rp_TargetNodes: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "TargetNodes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html#cfn-batch-jobdefinition-noderangeproperty-targetnodes"""
p_Container: typing.Union['JobDefinitionContainerProperties', dict] = attr.ib(
default=None,
converter=JobDefinitionContainerProperties.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionContainerProperties)),
metadata={AttrMeta.PROPERTY_NAME: "Container"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html#cfn-batch-jobdefinition-noderangeproperty-container"""
@attr.s
class JobDefinitionNodeProperties(Property):
"""
AWS Object Type = "AWS::Batch::JobDefinition.NodeProperties"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html
Property Document:
- ``rp_MainNode``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-mainnode
- ``rp_NodeRangeProperties``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-noderangeproperties
- ``rp_NumNodes``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-numnodes
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition.NodeProperties"
rp_MainNode: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "MainNode"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-mainnode"""
rp_NodeRangeProperties: typing.List[typing.Union['JobDefinitionNodeRangeProperty', dict]] = attr.ib(
default=None,
converter=JobDefinitionNodeRangeProperty.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobDefinitionNodeRangeProperty), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "NodeRangeProperties"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-noderangeproperties"""
rp_NumNodes: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "NumNodes"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-numnodes"""
#--- Resource declaration ---
@attr.s
class JobDefinition(Resource):
"""
AWS Object Type = "AWS::Batch::JobDefinition"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html
Property Document:
- ``rp_Type``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-type
- ``p_ContainerProperties``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-containerproperties
- ``p_JobDefinitionName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-jobdefinitionname
- ``p_NodeProperties``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-nodeproperties
- ``p_Parameters``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-parameters
- ``p_PlatformCapabilities``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-platformcapabilities
- ``p_PropagateTags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-propagatetags
- ``p_RetryStrategy``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-retrystrategy
- ``p_Timeout``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-timeout
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-tags
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobDefinition"
rp_Type: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Type"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-type"""
p_ContainerProperties: typing.Union['JobDefinitionContainerProperties', dict] = attr.ib(
default=None,
converter=JobDefinitionContainerProperties.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionContainerProperties)),
metadata={AttrMeta.PROPERTY_NAME: "ContainerProperties"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-containerproperties"""
p_JobDefinitionName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "JobDefinitionName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-jobdefinitionname"""
p_NodeProperties: typing.Union['JobDefinitionNodeProperties', dict] = attr.ib(
default=None,
converter=JobDefinitionNodeProperties.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionNodeProperties)),
metadata={AttrMeta.PROPERTY_NAME: "NodeProperties"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-nodeproperties"""
p_Parameters: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Parameters"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-parameters"""
p_PlatformCapabilities: typing.List[TypeHint.intrinsic_str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "PlatformCapabilities"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-platformcapabilities"""
p_PropagateTags: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "PropagateTags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-propagatetags"""
p_RetryStrategy: typing.Union['JobDefinitionRetryStrategy', dict] = attr.ib(
default=None,
converter=JobDefinitionRetryStrategy.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionRetryStrategy)),
metadata={AttrMeta.PROPERTY_NAME: "RetryStrategy"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-retrystrategy"""
p_Timeout: typing.Union['JobDefinitionTimeout', dict] = attr.ib(
default=None,
converter=JobDefinitionTimeout.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(JobDefinitionTimeout)),
metadata={AttrMeta.PROPERTY_NAME: "Timeout"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-timeout"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-tags"""
@attr.s
class JobQueue(Resource):
"""
AWS Object Type = "AWS::Batch::JobQueue"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html
Property Document:
- ``rp_ComputeEnvironmentOrder``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-computeenvironmentorder
- ``rp_Priority``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-priority
- ``p_JobQueueName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-jobqueuename
- ``p_State``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-state
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-tags
"""
AWS_OBJECT_TYPE = "AWS::Batch::JobQueue"
rp_ComputeEnvironmentOrder: typing.List[typing.Union['JobQueueComputeEnvironmentOrder', dict]] = attr.ib(
default=None,
converter=JobQueueComputeEnvironmentOrder.from_list,
validator=attr.validators.deep_iterable(member_validator=attr.validators.instance_of(JobQueueComputeEnvironmentOrder), iterable_validator=attr.validators.instance_of(list)),
metadata={AttrMeta.PROPERTY_NAME: "ComputeEnvironmentOrder"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-computeenvironmentorder"""
rp_Priority: int = attr.ib(
default=None,
validator=attr.validators.instance_of(int),
metadata={AttrMeta.PROPERTY_NAME: "Priority"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-priority"""
p_JobQueueName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "JobQueueName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-jobqueuename"""
p_State: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "State"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-state"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-tags"""
@attr.s
class ComputeEnvironment(Resource):
"""
AWS Object Type = "AWS::Batch::ComputeEnvironment"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html
Property Document:
- ``rp_Type``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-type
- ``p_ComputeEnvironmentName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-computeenvironmentname
- ``p_ComputeResources``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-computeresources
- ``p_ServiceRole``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-servicerole
- ``p_State``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-state
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-tags
"""
AWS_OBJECT_TYPE = "AWS::Batch::ComputeEnvironment"
rp_Type: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "Type"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-type"""
p_ComputeEnvironmentName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ComputeEnvironmentName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-computeenvironmentname"""
p_ComputeResources: typing.Union['ComputeEnvironmentComputeResources', dict] = attr.ib(
default=None,
converter=ComputeEnvironmentComputeResources.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(ComputeEnvironmentComputeResources)),
metadata={AttrMeta.PROPERTY_NAME: "ComputeResources"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-computeresources"""
p_ServiceRole: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ServiceRole"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-servicerole"""
p_State: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "State"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-state"""
p_Tags: dict = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(dict)),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-tags"""
| 69.891761 | 268 | 0.779095 | 8,987 | 86,526 | 7.410816 | 0.022032 | 0.106485 | 0.043273 | 0.066876 | 0.909851 | 0.909851 | 0.896533 | 0.856008 | 0.856008 | 0.856008 | 0 | 0.000332 | 0.094943 | 86,526 | 1,237 | 269 | 69.948262 | 0.850136 | 0.332178 | 0 | 0.396011 | 0 | 0 | 0.089425 | 0.05296 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.005698 | 0 | 0.252137 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c0f917ba193447f49463cd3c0c45ac5f1da6c3d3 | 2,969 | py | Python | def_age_plots.py | anewmark/galaxy_dark_matter | b5261e4e413d3a18a45a19e92f7545adc408878a | [
"MIT"
] | null | null | null | def_age_plots.py | anewmark/galaxy_dark_matter | b5261e4e413d3a18a45a19e92f7545adc408878a | [
"MIT"
] | null | null | null | def_age_plots.py | anewmark/galaxy_dark_matter | b5261e4e413d3a18a45a19e92f7545adc408878a | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import math
outdir='/Users/amandanewmark/repositories/galaxy_dark_matter/lumprofplots/clumps/'
def age_plot(x,y, start, end, tag=[]):
#print('x spacing: ', xspace)
print('x_center values: ', x)
print('y values= ', y)
start=np.array(start)
end=np.array(end)
arange=end-start
arange=arange[1:]
bwidth=arange
print('bar width=', bwidth)
print(len(bwidth))
f=plt.figure()
labels=np.append(start, end[len(end)-1])
print('labels', labels)
xspace=np.linspace(np.min(labels), np.max(labels), num=len(labels))
#label=[str(n) for n in labels]
#print(label)
#plt.bar(x, y, width=bwidth, align='center', color='None')
plt.bar(x, y, width=bwidth, align='center', color='None')
plt.xlabel('Lookback Time (Gyr)')
	plt.xticks(labels, fontsize=8, rotation='vertical') #smushed; note: `label=` is not a valid xticks kwarg, tick values label themselves by default
#plt.xticks(xspace, label=labels, fontsize=8, rotation='vertical') #ticks dont align with bar graph
plt.ylabel('Stacked Mass Fractions')
plt.title('Age vs. Mass Fractions')
#plt.xlim(np.min(x)-bwidth[0], np.max(x)+bwidth[len(bwidth)-1]/2.0)
plt.xlim(np.min(start), np.max(end))
plt.ylim(0,1)
plt.plot(0,0, label=tag[1], c='k')
plt.plot(0,0, label=tag[2], c='k')
plt.legend(loc=2,prop={'size':6.0})
plt.show()
outdirs=outdir+tag[0]+'agebin.pdf'
f.savefig(outdirs)
print(outdirs)
def age_plot1(x,y, start, end, yerr, tag=''):
#print('x spacing: ', xspace)
print('x_center values: ', x)
print('y values= ', y)
print('y errors= ', yerr)
start=np.array(start)
end=np.array(end)
arange=end-start
bwidth=arange
print('bar width=', bwidth)
print(len(bwidth))
f=plt.figure()
labels=np.append(start, end[len(end)-1])
#print('labels', labels)
xspace=np.linspace(np.min(labels), np.max(labels), num=len(labels))
label=[str(n) for n in labels]
xsp=[math.log10(n) for n in labels]
#print(xsp)
#print(label)
#plt.bar(x, y, width=bwidth, align='center', color='None')
plt.bar(x, y, width=bwidth, align='center', color='None')
plt.errorbar(x, y, yerr=yerr,label='Standard Error on the Mean per Age Bin', fmt='.', color='b')
plt.xlabel('Lookback Time (Gyr)')
plt.xscale('log')
#plt.xticks(labels, label=labels, fontsize=8, rotation='vertical') #smushed
	plt.xticks(xsp, label, fontsize=8, rotation='vertical') #pass the string labels positionally; `label=` is not a valid xticks kwarg
#plt.xticks(xspace, label=labels, fontsize=8, rotation='vertical') #ticks dont align with bar graph
plt.ylabel('Stacked Mass Fractions')
plt.title('Age vs. Mass Fractions')
#plt.xlim(np.min(x)-bwidth[0], np.max(x)+bwidth[len(bwidth)-1]/2.0)
plt.xlim(np.min(start), np.max(end))
nmax=np.max(y)+yerr[len(yerr)-1]
if nmax<1:
nmax=1
plt.ylim(0,nmax)
plt.plot(0,0, label=tag[1], c='k')
plt.plot(0,0, label=tag[2], c='k', marker='*')
plt.plot(0,0, label=tag[3], c='k', marker=' ')
plt.legend(loc=2,prop={'size':7.0})
plt.show()
outdirs=outdir+tag[0]+'agebin.pdf'
f.savefig(outdirs)
print(outdirs)
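# The bar-width/label bookkeeping above can be isolated in plain Python; this is a
# hypothetical helper (not part of the original script) mirroring the numpy arithmetic:
# each bar spans one [start, end) bin, and the tick labels are all bin edges.
def bin_geometry(start, end):
	"""Return per-bin bar widths and the full list of bin-edge labels."""
	widths = [e - s for s, e in zip(start, end)]
	labels = list(start) + [end[-1]]
	return widths, labels
# e.g. bin_geometry([0, 1, 3], [1, 3, 7]) gives widths [1, 2, 4] and labels [0, 1, 3, 7]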
| 29.39604 | 100 | 0.67969 | 507 | 2,969 | 3.968442 | 0.220907 | 0.006958 | 0.047217 | 0.049702 | 0.813121 | 0.813121 | 0.733101 | 0.733101 | 0.733101 | 0.733101 | 0 | 0.018272 | 0.11519 | 2,969 | 100 | 101 | 29.69 | 0.747621 | 0.223308 | 0 | 0.608696 | 0 | 0 | 0.176213 | 0.03192 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028986 | false | 0 | 0.057971 | 0 | 0.086957 | 0.173913 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8d0474f0952dfddc61315423d976aca53f42cc4f | 161 | py | Python | sbaas/analysis/analysis_base/__init__.py | SBRG/sbaas | 9df76bbffdd620cf8566744a2b0503935998fbe0 | [
"Apache-2.0"
] | 1 | 2017-05-13T04:35:08.000Z | 2017-05-13T04:35:08.000Z | sbaas/analysis/analysis_base/__init__.py | SBRG/sbaas | 9df76bbffdd620cf8566744a2b0503935998fbe0 | [
"Apache-2.0"
] | null | null | null | sbaas/analysis/analysis_base/__init__.py | SBRG/sbaas | 9df76bbffdd620cf8566744a2b0503935998fbe0 | [
"Apache-2.0"
] | 2 | 2017-02-23T19:32:38.000Z | 2020-01-14T19:13:05.000Z | from .base_analysis import *
from .base_calculate import base_calculate
from .base_importData import base_importData
from .base_exportData import base_exportData | 40.25 | 44 | 0.875776 | 22 | 161 | 6.090909 | 0.318182 | 0.238806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093168 | 161 | 4 | 45 | 40.25 | 0.917808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
23c2c91d60c23e2eabbc45c811144133b7451121 | 1,877 | py | Python | api_snippets_v1/migrations/0010_auto_20170221_2156.py | gargrave/snippets-server | 06ab97450be7c62c4821c2fb010cd18678e19a0d | [
"MIT"
] | 12 | 2017-05-04T17:59:22.000Z | 2021-07-06T07:39:13.000Z | api_snippets_v1/migrations/0010_auto_20170221_2156.py | gargrave/snippets-server | 06ab97450be7c62c4821c2fb010cd18678e19a0d | [
"MIT"
] | null | null | null | api_snippets_v1/migrations/0010_auto_20170221_2156.py | gargrave/snippets-server | 06ab97450be7c62c4821c2fb010cd18678e19a0d | [
"MIT"
] | 2 | 2019-03-18T13:29:57.000Z | 2019-04-30T04:20:40.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2017-02-21 21:56
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api_snippets_v1', '0009_auto_20170204_1951'),
]
operations = [
migrations.AddField(
model_name='userprofile',
name='category_name_blue',
field=models.CharField(blank=True, max_length=16),
),
migrations.AddField(
model_name='userprofile',
name='category_name_gray',
field=models.CharField(blank=True, max_length=16),
),
migrations.AddField(
model_name='userprofile',
name='category_name_green',
field=models.CharField(blank=True, max_length=16),
),
migrations.AddField(
model_name='userprofile',
name='category_name_orange',
field=models.CharField(blank=True, max_length=16),
),
migrations.AddField(
model_name='userprofile',
name='category_name_red',
field=models.CharField(blank=True, max_length=16),
),
migrations.AddField(
model_name='userprofile',
name='category_name_teal',
field=models.CharField(blank=True, max_length=16),
),
migrations.AddField(
model_name='userprofile',
name='category_name_white',
field=models.CharField(blank=True, max_length=16),
),
migrations.AddField(
model_name='userprofile',
name='category_name_yellow',
field=models.CharField(blank=True, max_length=16),
),
migrations.AlterUniqueTogether(
name='tagsnippetrelation',
unique_together=set([]),
),
]
| 31.283333 | 62 | 0.584443 | 182 | 1,877 | 5.791209 | 0.318681 | 0.136622 | 0.174573 | 0.204934 | 0.72296 | 0.72296 | 0.72296 | 0.72296 | 0.671727 | 0.624288 | 0 | 0.037519 | 0.304209 | 1,877 | 59 | 63 | 31.813559 | 0.769525 | 0.035695 | 0 | 0.634615 | 1 | 0 | 0.162147 | 0.012728 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.096154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
23e739962e1def6cdce30f83974c51b6b5c37aa7 | 1,338 | py | Python | git_issue_pusher.py | aecobb53/notebook | 4ec0478b7c526fd57f4dbe9b2b3835b0e426cccd | [
"MIT"
] | null | null | null | git_issue_pusher.py | aecobb53/notebook | 4ec0478b7c526fd57f4dbe9b2b3835b0e426cccd | [
"MIT"
] | 2 | 2020-12-01T21:00:53.000Z | 2021-01-25T20:44:43.000Z | git_issue_pusher.py | aecobb53/notebook | 4ec0478b7c526fd57f4dbe9b2b3835b0e426cccd | [
"MIT"
] | null | null | null | import requests
owner = 'aecobb53'
repo = 'notebook'
url = f"https://api.github.com/repos/{owner}/{repo}/issues/1"
arguments = {
'title':'example issue for testing'
}
arguments = {
"title": "Found a bug",
"body": "I'm having a problem with this.",
"assignees": [
"octocat"
],
"milestone": 1,
"labels": [
"bug"
]
}
arguments = {}
print(f"url: {url}, args:{arguments}")
if arguments == {}:
print('running without arguments')
response = requests.get(url)
else:
print('running with arguments')
response = requests.get(url, arguments)
print(response)
print(response.status_code)
exit()
try:
print(response.json())
except ValueError:  # response.json() raises a ValueError subclass when the body is not JSON
print('there is no json data received')
# arguments = {
# 'title':'example issue for testing'
# }
# arguments = {
# "title": "Found a bug",
# "body": "I'm having a problem with this.",
# "assignees": [
# "octocat"
# ],
# "milestone": 1,
# "labels": [
# "bug"
# ]
# }
# print(f"url: {url}, args:{arguments}")
# if arguments == {}:
# print('running without arguments')
# response = requests.post(url)
# else:
# print('running with arguments')
# response = requests.post(url, arguments)
# print(response)
# print(response.status_code)
# exit()
# try:
# print(response.json())
# except:
# print('there is no json data received')
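# Creating an issue (the commented-out POST above) requires authentication. A minimal
# sketch of assembling the request; the GITHUB_TOKEN environment variable and the
# helper name are assumptions, not part of the original script. The result could be
# sent with requests.post(url, json=payload, headers=headers).
import os

def build_issue_request(owner, repo, title, body):
    """Assemble URL, headers, and JSON payload for POST /repos/{owner}/{repo}/issues."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    headers = {
        "Authorization": f"token {os.environ.get('GITHUB_TOKEN', '')}",
        "Accept": "application/vnd.github.v3+json",
    }
    payload = {"title": title, "body": body}
    return url, headers, payload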
| 18.583333 | 46 | 0.604634 | 155 | 1,338 | 5.206452 | 0.335484 | 0.096654 | 0.123916 | 0.064436 | 0.904585 | 0.887237 | 0.887237 | 0.887237 | 0.768278 | 0.768278 | 0 | 0.004753 | 0.213752 | 1,338 | 71 | 47 | 18.84507 | 0.762357 | 0.432735 | 0 | 0.060606 | 0 | 0 | 0.373973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030303 | 0 | 0.030303 | 0.212121 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9b0ec2a6cb58400abd5080cb4be8884b991dc6ab | 274 | bzl | Python | third_party/protobuf/deps.bzl | msfschaffner/opentitan-bak | de4cb1bb9e7b707a3ca2a6882d83af7ed2aa1ab8 | [
"Apache-2.0"
] | 1 | 2022-01-05T16:53:34.000Z | 2022-01-05T16:53:34.000Z | third_party/protobuf/deps.bzl | msfschaffner/opentitan-bak | de4cb1bb9e7b707a3ca2a6882d83af7ed2aa1ab8 | [
"Apache-2.0"
] | 2 | 2021-11-01T15:02:37.000Z | 2022-01-17T14:34:36.000Z | third_party/protobuf/deps.bzl | msfschaffner/opentitan-bak | de4cb1bb9e7b707a3ca2a6882d83af7ed2aa1ab8 | [
"Apache-2.0"
] | null | null | null | # Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
load("@com_google_protobuf//:protobuf_deps.bzl", _protobuf_deps = "protobuf_deps")
def protobuf_deps():
_protobuf_deps()
| 30.444444 | 82 | 0.770073 | 37 | 274 | 5.459459 | 0.621622 | 0.29703 | 0.19802 | 0.237624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016529 | 0.116788 | 274 | 8 | 83 | 34.25 | 0.818182 | 0.510949 | 0 | 0 | 0 | 0 | 0.407692 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f1b3a8dede2af750de20e7e1c9a50f797751b033 | 120 | py | Python | src/autogroceries/shopper/__init__.py | dzhang32/autogroceries | c705ab5918fd9ceaed56326aa7bfdbc4057d007c | [
"MIT"
] | null | null | null | src/autogroceries/shopper/__init__.py | dzhang32/autogroceries | c705ab5918fd9ceaed56326aa7bfdbc4057d007c | [
"MIT"
] | null | null | null | src/autogroceries/shopper/__init__.py | dzhang32/autogroceries | c705ab5918fd9ceaed56326aa7bfdbc4057d007c | [
"MIT"
] | null | null | null | from autogroceries.shopper.Shopper import Shopper
from autogroceries.shopper.SainsburysShopper import SainsburysShopper
| 40 | 69 | 0.9 | 12 | 120 | 9 | 0.416667 | 0.314815 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 120 | 2 | 70 | 60 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f1c03bec3bf11d68217c7b5de3497a6b1ed34afa | 76 | py | Python | dpmhm/datasets/dcase/dcase2021_task2/__init__.py | yanncalec/dpmhm | 0a242bc8add0ba1463bb2b63b2c15abb80b83fa7 | [
"MIT"
] | null | null | null | dpmhm/datasets/dcase/dcase2021_task2/__init__.py | yanncalec/dpmhm | 0a242bc8add0ba1463bb2b63b2c15abb80b83fa7 | [
"MIT"
] | null | null | null | dpmhm/datasets/dcase/dcase2021_task2/__init__.py | yanncalec/dpmhm | 0a242bc8add0ba1463bb2b63b2c15abb80b83fa7 | [
"MIT"
] | null | null | null | """dcase2021_task2 dataset."""
from .dcase2021_task2 import Dcase2021Task2
| 19 | 43 | 0.802632 | 8 | 76 | 7.375 | 0.75 | 0.474576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 0.092105 | 76 | 3 | 44 | 25.333333 | 0.637681 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
9e558e70d7f1a8c40d50f006d3706917c5ec4ea5 | 69 | py | Python | collection_modules/camCollectionPoint/__init__.py | maxakuru/SimpleSensor | 655d10ebed5eddb892d036012cb12ccd6b460d2d | [
"Apache-2.0"
] | null | null | null | collection_modules/camCollectionPoint/__init__.py | maxakuru/SimpleSensor | 655d10ebed5eddb892d036012cb12ccd6b460d2d | [
"Apache-2.0"
] | null | null | null | collection_modules/camCollectionPoint/__init__.py | maxakuru/SimpleSensor | 655d10ebed5eddb892d036012cb12ccd6b460d2d | [
"Apache-2.0"
] | null | null | null | from camCollectionPoint import CamCollectionPoint as CollectionMethod | 69 | 69 | 0.927536 | 6 | 69 | 10.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072464 | 69 | 1 | 69 | 69 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9e7f38f8f3b0bbab9165a83dc8ed100f81134518 | 30,194 | py | Python | sdk/python/pulumi_azure/hpc/_inputs.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/hpc/_inputs.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/hpc/_inputs.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = [
'CacheAccessPolicyAccessRuleArgs',
'CacheDefaultAccessPolicyArgs',
'CacheDefaultAccessPolicyAccessRuleArgs',
'CacheDirectoryActiveDirectoryArgs',
'CacheDirectoryFlatFileArgs',
'CacheDirectoryLdapArgs',
'CacheDirectoryLdapBindArgs',
'CacheDnsArgs',
'CacheNfsTargetNamespaceJunctionArgs',
]
@pulumi.input_type
class CacheAccessPolicyAccessRuleArgs:
def __init__(__self__, *,
access: pulumi.Input[str],
scope: pulumi.Input[str],
anonymous_gid: Optional[pulumi.Input[int]] = None,
anonymous_uid: Optional[pulumi.Input[int]] = None,
filter: Optional[pulumi.Input[str]] = None,
root_squash_enabled: Optional[pulumi.Input[bool]] = None,
submount_access_enabled: Optional[pulumi.Input[bool]] = None,
suid_enabled: Optional[pulumi.Input[bool]] = None):
"""
:param pulumi.Input[str] access: The access level for this rule. Possible values are: `rw`, `ro`, `no`.
:param pulumi.Input[str] scope: The scope of this rule. The `scope` and (potentially) the `filter` determine which clients match the rule. Possible values are: `default`, `network`, `host`.
:param pulumi.Input[int] anonymous_gid: The anonymous GID used when `root_squash_enabled` is `true`.
:param pulumi.Input[int] anonymous_uid: The anonymous UID used when `root_squash_enabled` is `true`.
:param pulumi.Input[str] filter: The filter applied to the `scope` for this rule. The filter's format depends on its scope: `default` scope matches all clients and has no filter value; `network` scope takes a CIDR format; `host` takes an IP address or fully qualified domain name. If a client does not match any filter rule and there is no default rule, access is denied.
:param pulumi.Input[bool] root_squash_enabled: Whether to enable [root squash](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#root-squash). Defaults to `false`.
:param pulumi.Input[bool] submount_access_enabled: Whether to allow access to subdirectories under the root export. Defaults to `false`.
:param pulumi.Input[bool] suid_enabled: Whether [SUID](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#suid) is allowed. Defaults to `false`.
"""
pulumi.set(__self__, "access", access)
pulumi.set(__self__, "scope", scope)
if anonymous_gid is not None:
pulumi.set(__self__, "anonymous_gid", anonymous_gid)
if anonymous_uid is not None:
pulumi.set(__self__, "anonymous_uid", anonymous_uid)
if filter is not None:
pulumi.set(__self__, "filter", filter)
if root_squash_enabled is not None:
pulumi.set(__self__, "root_squash_enabled", root_squash_enabled)
if submount_access_enabled is not None:
pulumi.set(__self__, "submount_access_enabled", submount_access_enabled)
if suid_enabled is not None:
pulumi.set(__self__, "suid_enabled", suid_enabled)
@property
@pulumi.getter
def access(self) -> pulumi.Input[str]:
"""
The access level for this rule. Possible values are: `rw`, `ro`, `no`.
"""
return pulumi.get(self, "access")
@access.setter
def access(self, value: pulumi.Input[str]):
pulumi.set(self, "access", value)
@property
@pulumi.getter
def scope(self) -> pulumi.Input[str]:
"""
The scope of this rule. The `scope` and (potentially) the `filter` determine which clients match the rule. Possible values are: `default`, `network`, `host`.
"""
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: pulumi.Input[str]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter(name="anonymousGid")
def anonymous_gid(self) -> Optional[pulumi.Input[int]]:
"""
The anonymous GID used when `root_squash_enabled` is `true`.
"""
return pulumi.get(self, "anonymous_gid")
@anonymous_gid.setter
def anonymous_gid(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "anonymous_gid", value)
@property
@pulumi.getter(name="anonymousUid")
def anonymous_uid(self) -> Optional[pulumi.Input[int]]:
"""
The anonymous UID used when `root_squash_enabled` is `true`.
"""
return pulumi.get(self, "anonymous_uid")
@anonymous_uid.setter
def anonymous_uid(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "anonymous_uid", value)
@property
@pulumi.getter
def filter(self) -> Optional[pulumi.Input[str]]:
"""
The filter applied to the `scope` for this rule. The filter's format depends on its scope: `default` scope matches all clients and has no filter value; `network` scope takes a CIDR format; `host` takes an IP address or fully qualified domain name. If a client does not match any filter rule and there is no default rule, access is denied.
"""
return pulumi.get(self, "filter")
@filter.setter
def filter(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "filter", value)
@property
@pulumi.getter(name="rootSquashEnabled")
def root_squash_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to enable [root squash](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#root-squash). Defaults to `false`.
"""
return pulumi.get(self, "root_squash_enabled")
@root_squash_enabled.setter
def root_squash_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "root_squash_enabled", value)
@property
@pulumi.getter(name="submountAccessEnabled")
def submount_access_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to allow access to subdirectories under the root export. Defaults to `false`.
"""
return pulumi.get(self, "submount_access_enabled")
@submount_access_enabled.setter
def submount_access_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "submount_access_enabled", value)
@property
@pulumi.getter(name="suidEnabled")
def suid_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether [SUID](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#suid) is allowed. Defaults to `false`.
"""
return pulumi.get(self, "suid_enabled")
@suid_enabled.setter
def suid_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "suid_enabled", value)
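The `scope`/`filter` pair documented above determines which clients a rule matches: `default` matches everyone, `network` takes a CIDR block, and `host` takes a single IP or FQDN. A minimal sketch of that matching logic using only the standard library (the helper name and the client-side check are hypothetical; the real matching is performed by the HPC Cache service, not this SDK):

```python
import ipaddress
from typing import Optional


def client_matches_rule(client_ip: str, scope: str, filter_value: Optional[str]) -> bool:
    """Sketch of how an access rule's scope/filter pair selects clients.

    Hypothetical helper for illustration only.
    """
    if scope == "default":
        # The default scope matches every client and carries no filter value.
        return True
    if scope == "network":
        # The network scope interprets the filter as a CIDR block.
        return ipaddress.ip_address(client_ip) in ipaddress.ip_network(filter_value)
    if scope == "host":
        # The host scope compares against a single IP address (or FQDN).
        return client_ip == filter_value
    return False
```

If a client matches no rule and no `default` rule exists, access is denied, so a `default` rule usually acts as the fallback.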
@pulumi.input_type
class CacheDefaultAccessPolicyArgs:
def __init__(__self__, *,
access_rules: pulumi.Input[Sequence[pulumi.Input['CacheDefaultAccessPolicyAccessRuleArgs']]]):
"""
:param pulumi.Input[Sequence[pulumi.Input['CacheDefaultAccessPolicyAccessRuleArgs']]] access_rules: One to three `access_rule` blocks as defined above.
"""
pulumi.set(__self__, "access_rules", access_rules)
@property
@pulumi.getter(name="accessRules")
def access_rules(self) -> pulumi.Input[Sequence[pulumi.Input['CacheDefaultAccessPolicyAccessRuleArgs']]]:
"""
One to three `access_rule` blocks as defined above.
"""
return pulumi.get(self, "access_rules")
@access_rules.setter
def access_rules(self, value: pulumi.Input[Sequence[pulumi.Input['CacheDefaultAccessPolicyAccessRuleArgs']]]):
pulumi.set(self, "access_rules", value)
@pulumi.input_type
class CacheDefaultAccessPolicyAccessRuleArgs:
def __init__(__self__, *,
access: pulumi.Input[str],
scope: pulumi.Input[str],
anonymous_gid: Optional[pulumi.Input[int]] = None,
anonymous_uid: Optional[pulumi.Input[int]] = None,
filter: Optional[pulumi.Input[str]] = None,
root_squash_enabled: Optional[pulumi.Input[bool]] = None,
submount_access_enabled: Optional[pulumi.Input[bool]] = None,
suid_enabled: Optional[pulumi.Input[bool]] = None):
"""
:param pulumi.Input[str] access: The access level for this rule. Possible values are: `rw`, `ro`, `no`.
:param pulumi.Input[str] scope: The scope of this rule. The `scope` and (potentially) the `filter` determine which clients match the rule. Possible values are: `default`, `network`, `host`.
:param pulumi.Input[int] anonymous_gid: The anonymous GID used when `root_squash_enabled` is `true`.
:param pulumi.Input[int] anonymous_uid: The anonymous UID used when `root_squash_enabled` is `true`.
:param pulumi.Input[str] filter: The filter applied to the `scope` for this rule. The filter's format depends on its scope: `default` scope matches all clients and has no filter value; `network` scope takes a CIDR format; `host` takes an IP address or fully qualified domain name. If a client does not match any filter rule and there is no default rule, access is denied.
:param pulumi.Input[bool] root_squash_enabled: Whether to enable [root squash](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#root-squash). Defaults to `false`.
:param pulumi.Input[bool] submount_access_enabled: Whether to allow access to subdirectories under the root export. Defaults to `false`.
:param pulumi.Input[bool] suid_enabled: Whether [SUID](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#suid) is allowed. Defaults to `false`.
"""
pulumi.set(__self__, "access", access)
pulumi.set(__self__, "scope", scope)
if anonymous_gid is not None:
pulumi.set(__self__, "anonymous_gid", anonymous_gid)
if anonymous_uid is not None:
pulumi.set(__self__, "anonymous_uid", anonymous_uid)
if filter is not None:
pulumi.set(__self__, "filter", filter)
if root_squash_enabled is not None:
pulumi.set(__self__, "root_squash_enabled", root_squash_enabled)
if submount_access_enabled is not None:
pulumi.set(__self__, "submount_access_enabled", submount_access_enabled)
if suid_enabled is not None:
pulumi.set(__self__, "suid_enabled", suid_enabled)
@property
@pulumi.getter
def access(self) -> pulumi.Input[str]:
"""
The access level for this rule. Possible values are: `rw`, `ro`, `no`.
"""
return pulumi.get(self, "access")
@access.setter
def access(self, value: pulumi.Input[str]):
pulumi.set(self, "access", value)
@property
@pulumi.getter
def scope(self) -> pulumi.Input[str]:
"""
The scope of this rule. The `scope` and (potentially) the `filter` determine which clients match the rule. Possible values are: `default`, `network`, `host`.
"""
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: pulumi.Input[str]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter(name="anonymousGid")
def anonymous_gid(self) -> Optional[pulumi.Input[int]]:
"""
The anonymous GID used when `root_squash_enabled` is `true`.
"""
return pulumi.get(self, "anonymous_gid")
@anonymous_gid.setter
def anonymous_gid(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "anonymous_gid", value)
@property
@pulumi.getter(name="anonymousUid")
def anonymous_uid(self) -> Optional[pulumi.Input[int]]:
"""
The anonymous UID used when `root_squash_enabled` is `true`.
"""
return pulumi.get(self, "anonymous_uid")
@anonymous_uid.setter
def anonymous_uid(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "anonymous_uid", value)
@property
@pulumi.getter
def filter(self) -> Optional[pulumi.Input[str]]:
"""
The filter applied to the `scope` for this rule. The filter's format depends on its scope: `default` scope matches all clients and has no filter value; `network` scope takes a CIDR format; `host` takes an IP address or fully qualified domain name. If a client does not match any filter rule and there is no default rule, access is denied.
"""
return pulumi.get(self, "filter")
@filter.setter
def filter(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "filter", value)
@property
@pulumi.getter(name="rootSquashEnabled")
def root_squash_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to enable [root squash](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#root-squash). Defaults to `false`.
"""
return pulumi.get(self, "root_squash_enabled")
@root_squash_enabled.setter
def root_squash_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "root_squash_enabled", value)
@property
@pulumi.getter(name="submountAccessEnabled")
def submount_access_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to allow access to subdirectories under the root export. Defaults to `false`.
"""
return pulumi.get(self, "submount_access_enabled")
@submount_access_enabled.setter
def submount_access_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "submount_access_enabled", value)
@property
@pulumi.getter(name="suidEnabled")
def suid_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Whether [SUID](https://docs.microsoft.com/en-us/azure/hpc-cache/access-policies#suid) is allowed. Defaults to `false`.
"""
return pulumi.get(self, "suid_enabled")
@suid_enabled.setter
def suid_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "suid_enabled", value)
@pulumi.input_type
class CacheDirectoryActiveDirectoryArgs:
def __init__(__self__, *,
cache_netbios_name: pulumi.Input[str],
dns_primary_ip: pulumi.Input[str],
domain_name: pulumi.Input[str],
domain_netbios_name: pulumi.Input[str],
password: pulumi.Input[str],
username: pulumi.Input[str],
dns_secondary_ip: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] cache_netbios_name: The NetBIOS name to assign to the HPC Cache when it joins the Active Directory domain as a server.
:param pulumi.Input[str] dns_primary_ip: The primary DNS IP address used to resolve the Active Directory domain controller's FQDN.
:param pulumi.Input[str] domain_name: The fully qualified domain name of the Active Directory domain controller.
:param pulumi.Input[str] domain_netbios_name: The Active Directory domain's NetBIOS name.
:param pulumi.Input[str] password: The password of the Active Directory domain administrator.
:param pulumi.Input[str] username: The username of the Active Directory domain administrator.
:param pulumi.Input[str] dns_secondary_ip: The secondary DNS IP address used to resolve the Active Directory domain controller's FQDN.
"""
pulumi.set(__self__, "cache_netbios_name", cache_netbios_name)
pulumi.set(__self__, "dns_primary_ip", dns_primary_ip)
pulumi.set(__self__, "domain_name", domain_name)
pulumi.set(__self__, "domain_netbios_name", domain_netbios_name)
pulumi.set(__self__, "password", password)
pulumi.set(__self__, "username", username)
if dns_secondary_ip is not None:
pulumi.set(__self__, "dns_secondary_ip", dns_secondary_ip)
@property
@pulumi.getter(name="cacheNetbiosName")
def cache_netbios_name(self) -> pulumi.Input[str]:
"""
The NetBIOS name to assign to the HPC Cache when it joins the Active Directory domain as a server.
"""
return pulumi.get(self, "cache_netbios_name")
@cache_netbios_name.setter
def cache_netbios_name(self, value: pulumi.Input[str]):
pulumi.set(self, "cache_netbios_name", value)
@property
@pulumi.getter(name="dnsPrimaryIp")
def dns_primary_ip(self) -> pulumi.Input[str]:
"""
The primary DNS IP address used to resolve the Active Directory domain controller's FQDN.
"""
return pulumi.get(self, "dns_primary_ip")
@dns_primary_ip.setter
def dns_primary_ip(self, value: pulumi.Input[str]):
pulumi.set(self, "dns_primary_ip", value)
@property
@pulumi.getter(name="domainName")
def domain_name(self) -> pulumi.Input[str]:
"""
The fully qualified domain name of the Active Directory domain controller.
"""
return pulumi.get(self, "domain_name")
@domain_name.setter
def domain_name(self, value: pulumi.Input[str]):
pulumi.set(self, "domain_name", value)
@property
@pulumi.getter(name="domainNetbiosName")
def domain_netbios_name(self) -> pulumi.Input[str]:
"""
The Active Directory domain's NetBIOS name.
"""
return pulumi.get(self, "domain_netbios_name")
@domain_netbios_name.setter
def domain_netbios_name(self, value: pulumi.Input[str]):
pulumi.set(self, "domain_netbios_name", value)
@property
@pulumi.getter
def password(self) -> pulumi.Input[str]:
"""
The password of the Active Directory domain administrator.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: pulumi.Input[str]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def username(self) -> pulumi.Input[str]:
"""
The username of the Active Directory domain administrator.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: pulumi.Input[str]):
pulumi.set(self, "username", value)
@property
@pulumi.getter(name="dnsSecondaryIp")
def dns_secondary_ip(self) -> Optional[pulumi.Input[str]]:
"""
The secondary DNS IP address used to resolve the Active Directory domain controller's FQDN.
"""
return pulumi.get(self, "dns_secondary_ip")
@dns_secondary_ip.setter
def dns_secondary_ip(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "dns_secondary_ip", value)
@pulumi.input_type
class CacheDirectoryFlatFileArgs:
def __init__(__self__, *,
group_file_uri: pulumi.Input[str],
password_file_uri: pulumi.Input[str]):
"""
:param pulumi.Input[str] group_file_uri: The URI of the file containing group information (`/etc/group` file format in Unix-like OS).
:param pulumi.Input[str] password_file_uri: The URI of the file containing user information (`/etc/passwd` file format in Unix-like OS).
"""
pulumi.set(__self__, "group_file_uri", group_file_uri)
pulumi.set(__self__, "password_file_uri", password_file_uri)
@property
@pulumi.getter(name="groupFileUri")
def group_file_uri(self) -> pulumi.Input[str]:
"""
The URI of the file containing group information (`/etc/group` file format in Unix-like OS).
"""
return pulumi.get(self, "group_file_uri")
@group_file_uri.setter
def group_file_uri(self, value: pulumi.Input[str]):
pulumi.set(self, "group_file_uri", value)
@property
@pulumi.getter(name="passwordFileUri")
def password_file_uri(self) -> pulumi.Input[str]:
"""
The URI of the file containing user information (`/etc/passwd` file format in Unix-like OS).
"""
return pulumi.get(self, "password_file_uri")
@password_file_uri.setter
def password_file_uri(self, value: pulumi.Input[str]):
pulumi.set(self, "password_file_uri", value)
@pulumi.input_type
class CacheDirectoryLdapArgs:
def __init__(__self__, *,
base_dn: pulumi.Input[str],
server: pulumi.Input[str],
bind: Optional[pulumi.Input['CacheDirectoryLdapBindArgs']] = None,
certificate_validation_uri: Optional[pulumi.Input[str]] = None,
download_certificate_automatically: Optional[pulumi.Input[bool]] = None,
encrypted: Optional[pulumi.Input[bool]] = None):
"""
:param pulumi.Input[str] base_dn: The base distinguished name (DN) for the LDAP domain.
:param pulumi.Input[str] server: The FQDN or IP address of the LDAP server.
:param pulumi.Input['CacheDirectoryLdapBindArgs'] bind: A `bind` block as defined above.
:param pulumi.Input[str] certificate_validation_uri: The URI of the CA certificate to validate the LDAP secure connection.
:param pulumi.Input[bool] download_certificate_automatically: Whether the certificate should be automatically downloaded. This can be set to `true` only when `certificate_validation_uri` is provided. Defaults to `false`.
:param pulumi.Input[bool] encrypted: Whether the LDAP connection should be encrypted. Defaults to `false`.
"""
pulumi.set(__self__, "base_dn", base_dn)
pulumi.set(__self__, "server", server)
if bind is not None:
pulumi.set(__self__, "bind", bind)
if certificate_validation_uri is not None:
pulumi.set(__self__, "certificate_validation_uri", certificate_validation_uri)
if download_certificate_automatically is not None:
pulumi.set(__self__, "download_certificate_automatically", download_certificate_automatically)
if encrypted is not None:
pulumi.set(__self__, "encrypted", encrypted)
@property
@pulumi.getter(name="baseDn")
def base_dn(self) -> pulumi.Input[str]:
"""
The base distinguished name (DN) for the LDAP domain.
"""
return pulumi.get(self, "base_dn")
@base_dn.setter
def base_dn(self, value: pulumi.Input[str]):
pulumi.set(self, "base_dn", value)
@property
@pulumi.getter
def server(self) -> pulumi.Input[str]:
"""
The FQDN or IP address of the LDAP server.
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: pulumi.Input[str]):
pulumi.set(self, "server", value)
@property
@pulumi.getter
def bind(self) -> Optional[pulumi.Input['CacheDirectoryLdapBindArgs']]:
"""
A `bind` block as defined above.
"""
return pulumi.get(self, "bind")
@bind.setter
def bind(self, value: Optional[pulumi.Input['CacheDirectoryLdapBindArgs']]):
pulumi.set(self, "bind", value)
@property
@pulumi.getter(name="certificateValidationUri")
def certificate_validation_uri(self) -> Optional[pulumi.Input[str]]:
"""
The URI of the CA certificate to validate the LDAP secure connection.
"""
return pulumi.get(self, "certificate_validation_uri")
@certificate_validation_uri.setter
def certificate_validation_uri(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "certificate_validation_uri", value)
@property
@pulumi.getter(name="downloadCertificateAutomatically")
def download_certificate_automatically(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the certificate should be automatically downloaded. This can be set to `true` only when `certificate_validation_uri` is provided. Defaults to `false`.
"""
return pulumi.get(self, "download_certificate_automatically")
@download_certificate_automatically.setter
def download_certificate_automatically(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "download_certificate_automatically", value)
@property
@pulumi.getter
def encrypted(self) -> Optional[pulumi.Input[bool]]:
"""
Whether the LDAP connection should be encrypted. Defaults to `false`.
"""
return pulumi.get(self, "encrypted")
@encrypted.setter
def encrypted(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "encrypted", value)
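The docstring for `download_certificate_automatically` notes it may only be `true` when `certificate_validation_uri` is provided. A small sketch of that cross-field check (hypothetical helper; the provider enforces the actual constraint at plan time):

```python
from typing import Optional


def check_ldap_cert_options(
    certificate_validation_uri: Optional[str],
    download_certificate_automatically: bool,
) -> None:
    """Mirror the documented constraint: auto-download requires a validation URI.

    Hypothetical illustration only; not part of the generated SDK.
    """
    if download_certificate_automatically and not certificate_validation_uri:
        raise ValueError(
            "download_certificate_automatically requires certificate_validation_uri"
        )
```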
@pulumi.input_type
class CacheDirectoryLdapBindArgs:
def __init__(__self__, *,
dn: pulumi.Input[str],
password: pulumi.Input[str]):
"""
:param pulumi.Input[str] dn: The Bind Distinguished Name (DN) identity to be used in the secure LDAP connection.
:param pulumi.Input[str] password: The Bind password to be used in the secure LDAP connection.
"""
pulumi.set(__self__, "dn", dn)
pulumi.set(__self__, "password", password)
@property
@pulumi.getter
def dn(self) -> pulumi.Input[str]:
"""
The Bind Distinguished Name (DN) identity to be used in the secure LDAP connection.
"""
return pulumi.get(self, "dn")
@dn.setter
def dn(self, value: pulumi.Input[str]):
pulumi.set(self, "dn", value)
@property
@pulumi.getter
def password(self) -> pulumi.Input[str]:
"""
The Bind password to be used in the secure LDAP connection.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: pulumi.Input[str]):
pulumi.set(self, "password", value)
@pulumi.input_type
class CacheDnsArgs:
def __init__(__self__, *,
servers: pulumi.Input[Sequence[pulumi.Input[str]]],
search_domain: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[Sequence[pulumi.Input[str]]] servers: A list of DNS servers for the HPC Cache. At most three IPs may be specified.
:param pulumi.Input[str] search_domain: The DNS search domain for the HPC Cache.
"""
pulumi.set(__self__, "servers", servers)
if search_domain is not None:
pulumi.set(__self__, "search_domain", search_domain)
@property
@pulumi.getter
def servers(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
A list of DNS servers for the HPC Cache. At most three IPs may be specified.
"""
return pulumi.get(self, "servers")
@servers.setter
def servers(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "servers", value)
@property
@pulumi.getter(name="searchDomain")
def search_domain(self) -> Optional[pulumi.Input[str]]:
"""
The DNS search domain for the HPC Cache.
"""
return pulumi.get(self, "search_domain")
@search_domain.setter
def search_domain(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "search_domain", value)
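The `servers` docstring caps the list at three entries. A quick sketch of client-side validation under that assumption (hypothetical helper; the Azure API performs the authoritative check):

```python
from typing import List, Sequence


def validate_dns_servers(servers: Sequence[str]) -> List[str]:
    """Reject server lists outside the documented one-to-three range.

    Hypothetical illustration only; not part of the generated SDK.
    """
    if not 1 <= len(servers) <= 3:
        raise ValueError("between one and three DNS servers are allowed")
    return list(servers)
```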
@pulumi.input_type
class CacheNfsTargetNamespaceJunctionArgs:
def __init__(__self__, *,
namespace_path: pulumi.Input[str],
nfs_export: pulumi.Input[str],
access_policy_name: Optional[pulumi.Input[str]] = None,
target_path: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] namespace_path: The client-facing file path of this NFS target within the HPC Cache NFS Target.
:param pulumi.Input[str] nfs_export: The NFS export of this NFS target within the HPC Cache NFS Target.
:param pulumi.Input[str] access_policy_name: The name of the access policy applied to this target. Defaults to `default`.
:param pulumi.Input[str] target_path: The relative subdirectory path from the `nfs_export` to map to the `namespace_path`. Defaults to `""`, in which case the whole `nfs_export` is exported.
"""
pulumi.set(__self__, "namespace_path", namespace_path)
pulumi.set(__self__, "nfs_export", nfs_export)
if access_policy_name is not None:
pulumi.set(__self__, "access_policy_name", access_policy_name)
if target_path is not None:
pulumi.set(__self__, "target_path", target_path)
@property
@pulumi.getter(name="namespacePath")
def namespace_path(self) -> pulumi.Input[str]:
"""
The client-facing file path of this NFS target within the HPC Cache NFS Target.
"""
return pulumi.get(self, "namespace_path")
@namespace_path.setter
def namespace_path(self, value: pulumi.Input[str]):
pulumi.set(self, "namespace_path", value)
@property
@pulumi.getter(name="nfsExport")
def nfs_export(self) -> pulumi.Input[str]:
"""
The NFS export of this NFS target within the HPC Cache NFS Target.
"""
return pulumi.get(self, "nfs_export")
@nfs_export.setter
def nfs_export(self, value: pulumi.Input[str]):
pulumi.set(self, "nfs_export", value)
@property
@pulumi.getter(name="accessPolicyName")
def access_policy_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the access policy applied to this target. Defaults to `default`.
"""
return pulumi.get(self, "access_policy_name")
@access_policy_name.setter
def access_policy_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "access_policy_name", value)
@property
@pulumi.getter(name="targetPath")
def target_path(self) -> Optional[pulumi.Input[str]]:
"""
The relative subdirectory path from the `nfs_export` to map to the `namespace_path`. Defaults to `""`, in which case the whole `nfs_export` is exported.
"""
return pulumi.get(self, "target_path")
@target_path.setter
def target_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "target_path", value)
| 42.767705 | 379 | 0.663576 | 3,734 | 30,194 | 5.192019 | 0.064006 | 0.100428 | 0.075102 | 0.039202 | 0.86331 | 0.779337 | 0.706659 | 0.644762 | 0.627173 | 0.587971 | 0 | 0.000043 | 0.224515 | 30,194 | 705 | 380 | 42.828369 | 0.827931 | 0.318871 | 0 | 0.515366 | 1 | 0 | 0.120817 | 0.044147 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210402 | false | 0.052009 | 0.01182 | 0 | 0.338061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
9e9d18f5c80b72f9753e446d60b697ce4f9b7c3b | 98 | py | Python | mindsdb/api/mongo/functions/__init__.py | yarenty/mindsdb | 9164bca6f45fd0f5ec329babe973f286ffe59709 | [
"MIT"
] | 261 | 2018-09-28T02:32:17.000Z | 2018-12-10T06:30:54.000Z | mindsdb/api/mongo/functions/__init__.py | yarenty/mindsdb | 9164bca6f45fd0f5ec329babe973f286ffe59709 | [
"MIT"
] | 27 | 2018-09-26T08:49:11.000Z | 2018-12-10T14:42:52.000Z | mindsdb/api/mongo/functions/__init__.py | yarenty/mindsdb | 9164bca6f45fd0f5ec329babe973f286ffe59709 | [
"MIT"
] | 46 | 2018-10-06T10:11:18.000Z | 2018-12-10T04:02:17.000Z | def is_true(val):
return bool(val) is True
def is_false(val):
return bool(val) is False
| 14 | 29 | 0.673469 | 18 | 98 | 3.555556 | 0.388889 | 0.15625 | 0.40625 | 0.5 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22449 | 98 | 6 | 30 | 16.333333 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
7b622f19cc39a5956c211ff27ef8af6c5d18a3b9 | 32 | py | Python | binreconfiguration/simulator/__init__.py | vialette/binreconfiguration | 57cc024fdcf9b083a830270176ade185b65a85d0 | [
"MIT"
] | null | null | null | binreconfiguration/simulator/__init__.py | vialette/binreconfiguration | 57cc024fdcf9b083a830270176ade185b65a85d0 | [
"MIT"
] | null | null | null | binreconfiguration/simulator/__init__.py | vialette/binreconfiguration | 57cc024fdcf9b083a830270176ade185b65a85d0 | [
"MIT"
] | null | null | null | from .simulator import Simulator | 32 | 32 | 0.875 | 4 | 32 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b77ddc8daf2cdf13b6b2513a9e5bf2f25bd123e | 100 | py | Python | keyboard.py | StepaTa/vkbottle | 3b04a5343380cbabe782151e7cb1c1645a9fa9ce | [
"MIT"
] | null | null | null | keyboard.py | StepaTa/vkbottle | 3b04a5343380cbabe782151e7cb1c1645a9fa9ce | [
"MIT"
] | null | null | null | keyboard.py | StepaTa/vkbottle | 3b04a5343380cbabe782151e7cb1c1645a9fa9ce | [
"MIT"
] | null | null | null | from .api.keyboard import Keyboard, KeyboardButton, keyboard_gen
from .api.keyboard.action import *
| 33.333333 | 64 | 0.82 | 13 | 100 | 6.230769 | 0.538462 | 0.17284 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 100 | 2 | 65 | 50 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7ba0c72e993dbdcf9d69c9eed42c6a8478bc9c43 | 47 | py | Python | algo_trader/clients/__init__.py | dignitas123/algo_trader | a7f85bc063eb23e492a06187efb5f7af87d3cd2e | [
"MIT"
] | 4 | 2020-06-12T08:59:06.000Z | 2022-03-18T18:52:33.000Z | algo_trader/clients/__init__.py | dignitas123/algo_trader | a7f85bc063eb23e492a06187efb5f7af87d3cd2e | [
"MIT"
] | 5 | 2022-01-13T16:29:50.000Z | 2022-01-17T05:54:27.000Z | algo_trader/clients/__init__.py | dignitas123/algo_trader | a7f85bc063eb23e492a06187efb5f7af87d3cd2e | [
"MIT"
] | 1 | 2022-03-31T07:04:51.000Z | 2022-03-31T07:04:51.000Z | from algo_trader.clients.bitmex_client import * | 47 | 47 | 0.87234 | 7 | 47 | 5.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 47 | 1 | 47 | 47 | 0.886364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c86e3a096bf4cb63d26093e9e12241d750a913fe | 172 | py | Python | example.py | shobeiry/Sudoku | 5f79251d9c0faff8478d43f1c6ed92a6b740bb77 | [
"MIT"
] | null | null | null | example.py | shobeiry/Sudoku | 5f79251d9c0faff8478d43f1c6ed92a6b740bb77 | [
"MIT"
] | null | null | null | example.py | shobeiry/Sudoku | 5f79251d9c0faff8478d43f1c6ed92a6b740bb77 | [
"MIT"
] | null | null | null | from src.sudoku import Sudoku
import numpy as np
if __name__ == '__main__':
s = Sudoku(25)
print(np.array(s.new_game()))
print(np.array(s.board))
# s.get()
| 21.5 | 33 | 0.639535 | 28 | 172 | 3.607143 | 0.642857 | 0.237624 | 0.237624 | 0.257426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014599 | 0.203488 | 172 | 7 | 34 | 24.571429 | 0.722628 | 0.040698 | 0 | 0 | 0 | 0 | 0.04908 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c8802fb6fc9df2cda148b074e9887f422145f8c4 | 30 | py | Python | DisnakePaginator/__init__.py | MahediZaber51/DisnakePaginator | f66b1b07b112821551a60c35a22cb61c26585fbf | [
"MIT"
] | null | null | null | DisnakePaginator/__init__.py | MahediZaber51/DisnakePaginator | f66b1b07b112821551a60c35a22cb61c26585fbf | [
"MIT"
] | null | null | null | DisnakePaginator/__init__.py | MahediZaber51/DisnakePaginator | f66b1b07b112821551a60c35a22cb61c26585fbf | [
"MIT"
] | null | null | null | from .Paginator import Create
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8db4cc45cce64c04b245b56fac59f2f23c68523 | 14,276 | py | Python | tests/app/clients/test_cbc_proxy.py | tlwr/notifications-api | 88a6b7729edb9be41ce3e7c027f1452b7b6d00d2 | [
"MIT"
] | null | null | null | tests/app/clients/test_cbc_proxy.py | tlwr/notifications-api | 88a6b7729edb9be41ce3e7c027f1452b7b6d00d2 | [
"MIT"
] | null | null | null | tests/app/clients/test_cbc_proxy.py | tlwr/notifications-api | 88a6b7729edb9be41ce3e7c027f1452b7b6d00d2 | [
"MIT"
] | null | null | null | import json
import uuid
from collections import namedtuple
from datetime import datetime
from unittest.mock import Mock
import pytest
from app.clients.cbc_proxy import CBCProxyClient, CBCProxyException, CBCProxyEE, CBCProxyCanary
from app.utils import DATETIME_FORMAT
@pytest.fixture(scope='function')
def cbc_proxy_client(client, mocker):
    # the `client` fixture is requested only for its app-context setup;
    # use a distinct local name rather than shadowing it
    proxy_client = CBCProxyClient()
    current_app = mocker.Mock(config={
        'CBC_PROXY_AWS_ACCESS_KEY_ID': 'cbc-proxy-aws-access-key-id',
        'CBC_PROXY_AWS_SECRET_ACCESS_KEY': 'cbc-proxy-aws-secret-access-key',
        'CBC_PROXY_ENABLED': True,
    })
    proxy_client.init_app(current_app)
    return proxy_client
@pytest.fixture
def cbc_proxy_ee(cbc_proxy_client):
return cbc_proxy_client.get_proxy('ee')
@pytest.fixture
def cbc_proxy_vodafone(cbc_proxy_client):
return cbc_proxy_client.get_proxy('vodafone')
@pytest.mark.parametrize('provider_name, expected_provider_class', [
('ee', CBCProxyEE),
('canary', CBCProxyCanary),
])
def test_cbc_proxy_client_returns_correct_client(provider_name, expected_provider_class):
mock_lambda = Mock()
cbc_proxy_client = CBCProxyClient()
cbc_proxy_client._lambda_client = mock_lambda
ret = cbc_proxy_client.get_proxy(provider_name)
    assert type(ret) is expected_provider_class
assert ret._lambda_client == mock_lambda
def test_cbc_proxy_lambda_client_has_correct_region(cbc_proxy_ee):
assert cbc_proxy_ee._lambda_client._client_config.region_name == 'eu-west-2'
def test_cbc_proxy_lambda_client_has_correct_keys(cbc_proxy_ee):
key = cbc_proxy_ee._lambda_client._request_signer._credentials.access_key
secret = cbc_proxy_ee._lambda_client._request_signer._credentials.secret_key
assert key == 'cbc-proxy-aws-access-key-id'
assert secret == 'cbc-proxy-aws-secret-access-key'
@pytest.mark.parametrize('description, expected_language', (
('my-description', 'en-GB'),
('mŷ-description', 'cy-GB'),
))
def test_cbc_proxy_ee_create_and_send_invokes_function(
mocker,
cbc_proxy_ee,
description,
expected_language,
):
identifier = 'my-identifier'
headline = 'my-headline'
sent = 'a-passed-through-sent-value'
expires = 'a-passed-through-expires-value'
# a single area which is a square including london
areas = [{
'description': 'london',
'polygon': [
[51.12, -1.2],
[51.12, 1.2],
[51.74, 1.2],
[51.74, -1.2],
[51.12, -1.2],
],
}]
ld_client_mock = mocker.patch.object(
cbc_proxy_ee,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
}
cbc_proxy_ee.create_and_send_broadcast(
identifier=identifier,
message_number='0000007b',
headline=headline,
description=description,
areas=areas,
        sent=sent,
        expires=expires,
)
ld_client_mock.invoke.assert_called_once_with(
FunctionName='bt-ee-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
kwargs = ld_client_mock.invoke.mock_calls[0][-1]
payload_bytes = kwargs['Payload']
payload = json.loads(payload_bytes)
assert payload['identifier'] == identifier
assert 'message_number' not in payload
assert payload['message_format'] == 'cap'
assert payload['message_type'] == 'alert'
assert payload['headline'] == headline
assert payload['description'] == description
assert payload['areas'] == areas
assert payload['sent'] == sent
assert payload['expires'] == expires
assert payload['language'] == expected_language
def test_cbc_proxy_ee_cancel_invokes_function(mocker, cbc_proxy_ee):
identifier = 'my-identifier'
MockProviderMessage = namedtuple(
'BroadcastProviderMessage', ['id', 'message_number', 'created_at']
)
provider_messages = [
MockProviderMessage(uuid.uuid4(), '0000007b', datetime(2020, 12, 16)),
MockProviderMessage(uuid.uuid4(), '0000004e', datetime(2020, 12, 17))
]
sent = '2020-12-17 14:19:44.130585'
ld_client_mock = mocker.patch.object(
cbc_proxy_ee,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
}
cbc_proxy_ee.cancel_broadcast(
identifier=identifier,
message_number='00000050',
previous_provider_messages=provider_messages,
sent=sent
)
ld_client_mock.invoke.assert_called_once_with(
FunctionName='bt-ee-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
kwargs = ld_client_mock.invoke.mock_calls[0][-1]
payload_bytes = kwargs['Payload']
payload = json.loads(payload_bytes)
assert payload['identifier'] == identifier
assert 'message_number' not in payload
assert payload['message_format'] == 'cap'
assert payload['message_type'] == 'cancel'
assert payload['references'] == [
{
"message_id": str(provider_messages[0].id),
"sent": provider_messages[0].created_at.strftime(DATETIME_FORMAT)
},
{
"message_id": str(provider_messages[1].id),
"sent": provider_messages[1].created_at.strftime(DATETIME_FORMAT)
},
]
assert payload['sent'] == sent
@pytest.mark.parametrize('description, expected_language', (
('my-description', 'English'),
('mŷ-description', 'Welsh'),
))
def test_cbc_proxy_vodafone_create_and_send_invokes_function(
mocker,
cbc_proxy_vodafone,
description,
expected_language,
):
identifier = 'my-identifier'
headline = 'my-headline'
sent = 'a-passed-through-sent-value'
expires = 'a-passed-through-expires-value'
# a single area which is a square including london
areas = [{
'description': 'london',
'polygon': [
[51.12, -1.2],
[51.12, 1.2],
[51.74, 1.2],
[51.74, -1.2],
[51.12, -1.2],
],
}]
ld_client_mock = mocker.patch.object(
cbc_proxy_vodafone,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
}
cbc_proxy_vodafone.create_and_send_broadcast(
identifier=identifier,
message_number='0000007b',
headline=headline,
description=description,
areas=areas,
        sent=sent,
        expires=expires,
)
ld_client_mock.invoke.assert_called_once_with(
FunctionName='vodafone-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
kwargs = ld_client_mock.invoke.mock_calls[0][-1]
payload_bytes = kwargs['Payload']
payload = json.loads(payload_bytes)
assert payload['identifier'] == identifier
assert payload['message_number'] == '0000007b'
assert payload['message_format'] == 'ibag'
assert payload['message_type'] == 'alert'
assert payload['headline'] == headline
assert payload['description'] == description
assert payload['areas'] == areas
assert payload['sent'] == sent
assert payload['expires'] == expires
assert payload['language'] == expected_language
def test_cbc_proxy_vodafone_cancel_invokes_function(mocker, cbc_proxy_vodafone):
identifier = 'my-identifier'
MockProviderMessage = namedtuple(
'BroadcastProviderMessage',
['id', 'message_number', 'created_at']
)
provider_messages = [
MockProviderMessage(uuid.uuid4(), 78, datetime(2020, 12, 16)),
MockProviderMessage(uuid.uuid4(), 123, datetime(2020, 12, 17))
]
sent = '2020-12-18 14:19:44.130585'
ld_client_mock = mocker.patch.object(
cbc_proxy_vodafone,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
}
cbc_proxy_vodafone.cancel_broadcast(
identifier=identifier,
message_number='00000050',
previous_provider_messages=provider_messages,
sent=sent
)
ld_client_mock.invoke.assert_called_once_with(
FunctionName='vodafone-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
kwargs = ld_client_mock.invoke.mock_calls[0][-1]
payload_bytes = kwargs['Payload']
payload = json.loads(payload_bytes)
assert payload['identifier'] == identifier
assert payload['message_number'] == '00000050'
assert payload['message_format'] == 'ibag'
assert payload['message_type'] == 'cancel'
assert payload['references'] == [
{
"message_id": str(provider_messages[0].id),
"message_number": '0000004e',
"sent": provider_messages[0].created_at.strftime(DATETIME_FORMAT)
},
{
"message_id": str(provider_messages[1].id),
"message_number": '0000007b',
"sent": provider_messages[1].created_at.strftime(DATETIME_FORMAT)
},
]
assert payload['sent'] == sent
def test_cbc_proxy_create_and_send_handles_invoke_error(mocker, cbc_proxy_ee):
identifier = 'my-identifier'
headline = 'my-headline'
description = 'my-description'
sent = 'a-passed-through-sent-value'
expires = 'a-passed-through-expires-value'
# a single area which is a square including london
areas = [{
'description': 'london',
'polygon': [
[51.12, -1.2],
[51.12, 1.2],
[51.74, 1.2],
[51.74, -1.2],
[51.12, -1.2],
],
}]
ld_client_mock = mocker.patch.object(
cbc_proxy_ee,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 400,
}
with pytest.raises(CBCProxyException) as e:
cbc_proxy_ee.create_and_send_broadcast(
identifier=identifier,
message_number='0000007b',
headline=headline,
description=description,
areas=areas,
            sent=sent,
            expires=expires,
)
assert e.match('Could not invoke lambda')
ld_client_mock.invoke.assert_called_once_with(
FunctionName='bt-ee-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
def test_cbc_proxy_create_and_send_handles_function_error(mocker, cbc_proxy_ee):
identifier = 'my-identifier'
headline = 'my-headline'
description = 'my-description'
sent = 'a-passed-through-sent-value'
expires = 'a-passed-through-expires-value'
# a single area which is a square including london
areas = [{
'description': 'london',
'polygon': [
[51.12, -1.2],
[51.12, 1.2],
[51.74, 1.2],
[51.74, -1.2],
[51.12, -1.2],
],
}]
ld_client_mock = mocker.patch.object(
cbc_proxy_ee,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
'FunctionError': 'something',
}
with pytest.raises(CBCProxyException) as e:
cbc_proxy_ee.create_and_send_broadcast(
identifier=identifier,
message_number='0000007b',
headline=headline,
description=description,
areas=areas,
            sent=sent,
            expires=expires,
)
assert e.match('Function exited with unhandled exception')
ld_client_mock.invoke.assert_called_once_with(
FunctionName='bt-ee-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
def test_cbc_proxy_send_canary_invokes_function(mocker, cbc_proxy_client):
identifier = str(uuid.uuid4())
canary_client = cbc_proxy_client.get_proxy('canary')
ld_client_mock = mocker.patch.object(
canary_client,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
}
canary_client.send_canary(
identifier=identifier,
)
ld_client_mock.invoke.assert_called_once_with(
FunctionName='canary',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
kwargs = ld_client_mock.invoke.mock_calls[0][-1]
payload_bytes = kwargs['Payload']
payload = json.loads(payload_bytes)
assert payload['identifier'] == identifier
def test_cbc_proxy_ee_send_link_test_invokes_function(mocker, cbc_proxy_ee):
identifier = str(uuid.uuid4())
ld_client_mock = mocker.patch.object(
cbc_proxy_ee,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
}
cbc_proxy_ee.send_link_test(
identifier=identifier,
sequential_number='0000007b',
)
ld_client_mock.invoke.assert_called_once_with(
FunctionName='bt-ee-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
kwargs = ld_client_mock.invoke.mock_calls[0][-1]
payload_bytes = kwargs['Payload']
payload = json.loads(payload_bytes)
assert payload['identifier'] == identifier
assert payload['message_type'] == 'test'
assert 'message_number' not in payload
assert payload['message_format'] == 'cap'
def test_cbc_proxy_vodafone_send_link_test_invokes_function(mocker, cbc_proxy_vodafone):
identifier = str(uuid.uuid4())
ld_client_mock = mocker.patch.object(
cbc_proxy_vodafone,
'_lambda_client',
create=True,
)
ld_client_mock.invoke.return_value = {
'StatusCode': 200,
}
cbc_proxy_vodafone.send_link_test(
identifier=identifier,
sequential_number='0000007b',
)
ld_client_mock.invoke.assert_called_once_with(
FunctionName='vodafone-1-proxy',
InvocationType='RequestResponse',
Payload=mocker.ANY,
)
kwargs = ld_client_mock.invoke.mock_calls[0][-1]
payload_bytes = kwargs['Payload']
payload = json.loads(payload_bytes)
assert payload['identifier'] == identifier
assert payload['message_type'] == 'test'
assert payload['message_number'] == '0000007b'
assert payload['message_format'] == 'ibag'
| 27.882813 | 95 | 0.648571 | 1,613 | 14,276 | 5.459392 | 0.101674 | 0.055417 | 0.046332 | 0.051102 | 0.875767 | 0.847717 | 0.83057 | 0.805814 | 0.742448 | 0.717579 | 0 | 0.033855 | 0.234449 | 14,276 | 511 | 96 | 27.937378 | 0.771891 | 0.013659 | 0 | 0.687042 | 0 | 0 | 0.164606 | 0.033603 | 0 | 0 | 0 | 0 | 0.139364 | 1 | 0.036675 | false | 0.01956 | 0.01956 | 0.00489 | 0.06357 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a80b99ac24aa2a5171d52c77b5bf51e2f217dcea | 3,144 | py | Python | graphs/models/Mnist_dropout.py | kkontras/Sleep_net | a6a83d4624989cc8a79238e491da06dc22d562b8 | [
"MIT"
] | 1 | 2022-02-22T02:40:41.000Z | 2022-02-22T02:40:41.000Z | graphs/models/Mnist_dropout.py | kkontras/Sleep_net | a6a83d4624989cc8a79238e491da06dc22d562b8 | [
"MIT"
] | null | null | null | graphs/models/Mnist_dropout.py | kkontras/Sleep_net | a6a83d4624989cc8a79238e491da06dc22d562b8 | [
"MIT"
] | null | null | null | import torch.nn as nn
import torch.nn.functional as F
from ..weights_initializer import init_model_weights
class Mnist_Dropout(nn.Module):
def __init__(self,):
super(Mnist_Dropout, self).__init__()
self.conv1 = nn.Conv2d(1, 32, kernel_size=4, stride=1, padding=0)
self.conv2 = nn.Conv2d(32, 32, kernel_size=4, stride=1, padding=0)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(11*11*32, 128)
self.fc2 = nn.Linear(128, 10)
self.train_p1 = 0.25
self.train_p2 = 0.5
self.test_p1 = 0.25
self.test_p2 = 0.5
# self.apply(init_model_weights)
    def forward(self, x, training_time, traineval):
        # pick the train-time or test-time dropout rates; `traineval`
        # controls whether dropout is actually applied, so it can be kept
        # active at evaluation time for stochastic (Monte Carlo) sampling
        if training_time:
            drop_prob1, drop_prob2 = self.train_p1, self.train_p2
        else:
            drop_prob1, drop_prob2 = self.test_p1, self.test_p2
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = F.dropout(x, p=drop_prob1, training=traineval)
        x = x.view(-1, 11*11*32)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, p=drop_prob2, training=traineval)
        x = self.fc2(x)
        return F.softmax(x, dim=1)
# def test_drop(self, x, traineval):
# x = F.relu(self.conv1(x))
# x = F.relu(self.conv2(x))
#
# x = F.max_pool2d(x, 5)
# x = F.dropout(x, p=self.test_p1, training= traineval)
# # print(x.size())
#
# x = x.view(-1, 1024)
#
# x = F.relu(self.fc1(x))
# x = F.dropout(x, p=self.test_p2, training= traineval)
# x = self.fc2(x)
# return F.softmax(x, dim=1)
# class Mnist_Dropout(nn.Module):
# def __init__(self,):
# super(Mnist_Dropout, self).__init__()
# self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=0)
# self.conv2 = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=0)
# self.conv2_drop = nn.Dropout2d()
# self.fc1 = nn.Linear(512, 128)
# self.fc2 = nn.Linear(128, 10)
#
# self.train_p1 = 0.2
# self.train_p2 = 0.4
# self.test_p1 = 0.01
# self.test_p2 = 0.05
#
# self.apply(init_model_weights)
#
# def forward(self, x, traineval):
# # print(x.size())
# x = F.relu(self.conv1(x))
# # print(x.size())
#
# x = F.relu(self.conv2(x))
# # print(x.size())
#
# x = F.max_pool2d(x, 5)
# x = F.dropout(x, p=self.train_p1, training= traineval)
# # print(x.size())
#
# x = x.view(-1, 512)
#
# x = F.relu(self.fc1(x))
# x = F.dropout(x, p=self.train_p2, training= traineval)
# x = self.fc2(x)
# return F.softmax(x, dim=1)
#
# def test_drop(self, x, traineval):
# x = F.relu(self.conv1(x))
# x = F.relu(self.conv2(x))
#
# x = F.max_pool2d(x, 4)
# x = F.dropout(x, p=self.test_p1, training= traineval)
#
# x = x.view(-1, 512)
#
# x = F.relu(self.fc1(x))
# x = F.dropout(x, p=self.test_p2, training= traineval)
# x = self.fc2(x)
# return F.softmax(x, dim=1) | 31.44 | 76 | 0.540394 | 481 | 3,144 | 3.397089 | 0.141372 | 0.029376 | 0.044064 | 0.073439 | 0.834149 | 0.801714 | 0.761934 | 0.731334 | 0.731334 | 0.682375 | 0 | 0.072011 | 0.29771 | 3,144 | 100 | 77 | 31.44 | 0.668025 | 0.56584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.103448 | 0 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b5261a9bcb93c5c0a3db787782f81327a335b465 | 25 | py | Python | src/foremast/awslambda/sns_event/__init__.py | gitter-badger/foremast | 33530438ba5893a1d5cf822a63e03d7ab49dfcd7 | [
"Apache-2.0"
] | null | null | null | src/foremast/awslambda/sns_event/__init__.py | gitter-badger/foremast | 33530438ba5893a1d5cf822a63e03d7ab49dfcd7 | [
"Apache-2.0"
] | null | null | null | src/foremast/awslambda/sns_event/__init__.py | gitter-badger/foremast | 33530438ba5893a1d5cf822a63e03d7ab49dfcd7 | [
"Apache-2.0"
] | null | null | null | from .sns_event import *
| 12.5 | 24 | 0.76 | 4 | 25 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b53df202c9a9f104624bbe66936c0385c8b8df76 | 50 | py | Python | electricityLoadForecasting/forecasting/models/afm/lbfgs/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null | electricityLoadForecasting/forecasting/models/afm/lbfgs/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null | electricityLoadForecasting/forecasting/models/afm/lbfgs/__init__.py | BCD65/electricityLoadForecasting | 07a6ed060afaf7cc2906c0389b5c9e9b0fede193 | [
"MIT"
] | null | null | null |
from .lbfgs import *
from .lbfgs_tools import *
| 10 | 26 | 0.72 | 7 | 50 | 5 | 0.571429 | 0.514286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 50 | 4 | 27 | 12.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b58c7f26a6a72e8491fe4bd8ceb1262cc26a776c | 318 | py | Python | app/models/encounters.py | neuralhalation/flask-call-of-cthulhu | 59e1afc9ac1244106fe3439f9a37f16dcfc15d52 | [
"MIT"
] | null | null | null | app/models/encounters.py | neuralhalation/flask-call-of-cthulhu | 59e1afc9ac1244106fe3439f9a37f16dcfc15d52 | [
"MIT"
] | null | null | null | app/models/encounters.py | neuralhalation/flask-call-of-cthulhu | 59e1afc9ac1244106fe3439f9a37f16dcfc15d52 | [
"MIT"
] | null | null | null | import app.functions.add_remove as ar
def encounter(description):
return ar.new("encounter", description=description)
def add_encounter(encounter, encounters=None):
    # use None instead of a mutable default list, which would be shared
    # across calls and accumulate encounters between them
    if encounters is None:
        encounters = []
    return ar.add_to_list(encounter, encounters)
def remove_encounter(encounter, encounters):
return ar.remove_by_obj(encounter, encounters)
| 22.714286 | 55 | 0.779874 | 40 | 318 | 6.025 | 0.425 | 0.315353 | 0.232365 | 0.282158 | 0.298755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122642 | 318 | 13 | 56 | 24.461538 | 0.863799 | 0 | 0 | 0 | 0 | 0 | 0.028302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0.142857 | 0.428571 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b5992214c51415d93c4ec38a2a191288f88fd6dc | 134 | py | Python | src/apis/__init__.py | beratakuzum/spam-detector-api | 2c2dd809363f931e5e6e21b53f8335c59219036c | [
"MIT"
] | 3 | 2020-12-20T18:26:32.000Z | 2021-11-15T19:40:00.000Z | src/apis/__init__.py | beratakuzum/spam-detector-api | 2c2dd809363f931e5e6e21b53f8335c59219036c | [
"MIT"
] | null | null | null | src/apis/__init__.py | beratakuzum/spam-detector-api | 2c2dd809363f931e5e6e21b53f8335c59219036c | [
"MIT"
] | null | null | null |
def init_apis(app):
    # alias the identically named init functions so neither shadows the other
    from .auth import init_api as init_auth_api
    init_auth_api(app=app)
    from .prediction import init_api as init_prediction_api
    init_prediction_api(app=app)
| 16.75 | 36 | 0.69403 | 22 | 134 | 4 | 0.409091 | 0.318182 | 0.295455 | 0.386364 | 0.590909 | 0.590909 | 0.590909 | 0 | 0 | 0 | 0 | 0 | 0.223881 | 134 | 7 | 37 | 19.142857 | 0.846154 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a9726bfeb1b947b8cc9d9c4f383520baf6b01a77 | 56 | py | Python | darklim/constants/__init__.py | slwatkins/DarkLim | 22a0f8ea7dd609075d55c413b598e42da8ef348f | [
"MIT"
] | 1 | 2022-01-21T16:56:36.000Z | 2022-01-21T16:56:36.000Z | darklim/constants/__init__.py | slwatkins/DarkLim | 22a0f8ea7dd609075d55c413b598e42da8ef348f | [
"MIT"
] | null | null | null | darklim/constants/__init__.py | slwatkins/DarkLim | 22a0f8ea7dd609075d55c413b598e42da8ef348f | [
"MIT"
] | null | null | null | from scipy.constants import *
from ._constants import *
| 18.666667 | 29 | 0.785714 | 7 | 56 | 6.142857 | 0.571429 | 0.697674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 2 | 30 | 28 | 0.895833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a9a2712634bba5800aa4aec0b44d017b17f4a777 | 25 | py | Python | src/baselines/__init__.py | shaliniiit/CVDD-PyTorch | c07e1bd24fad81c1a1c51a70d90474b333d19f57 | [
"MIT"
] | 48 | 2019-07-30T12:34:41.000Z | 2022-02-23T10:56:42.000Z | src/baselines/__init__.py | Wuliyuanulb/CVDD-PyTorch | aa2b033ed8216ce132ef6977da1e4fae665fb0c0 | [
"MIT"
] | 4 | 2019-11-28T14:26:38.000Z | 2021-11-16T14:53:17.000Z | src/baselines/__init__.py | Wuliyuanulb/CVDD-PyTorch | aa2b033ed8216ce132ef6977da1e4fae665fb0c0 | [
"MIT"
] | 19 | 2019-07-30T02:44:57.000Z | 2022-02-02T00:39:13.000Z | from .ocsvm import OCSVM
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d0e6a07f23831fe297f1d790fba699997b1f902 | 24 | py | Python | keras_lr_finder/__init__.py | sbarman-mi9/keras_lr_finder | 4ecf6493c2835beb1745c75919cae8e4c96e8c14 | [
"MIT"
] | null | null | null | keras_lr_finder/__init__.py | sbarman-mi9/keras_lr_finder | 4ecf6493c2835beb1745c75919cae8e4c96e8c14 | [
"MIT"
] | null | null | null | keras_lr_finder/__init__.py | sbarman-mi9/keras_lr_finder | 4ecf6493c2835beb1745c75919cae8e4c96e8c14 | [
"MIT"
] | null | null | null | from .lr_finder import * | 24 | 24 | 0.791667 | 4 | 24 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d1be5e912e095c1f58c5f233f0d966890c01de2 | 113 | py | Python | peacemakr/exception/failed_to_download_key.py | peacemakr-io/peacemakr-python-sdk | 180bbc2e480ea855dddf0e28c2f27e83a17bfb84 | [
"Apache-2.0"
] | 3 | 2020-01-27T10:07:29.000Z | 2021-05-17T16:45:59.000Z | peacemakr/exception/failed_to_download_key.py | peacemakr-io/peacemakr-python-sdk | 180bbc2e480ea855dddf0e28c2f27e83a17bfb84 | [
"Apache-2.0"
] | 7 | 2020-06-24T03:55:36.000Z | 2021-03-30T00:43:51.000Z | peacemakr/exception/failed_to_download_key.py | peacemakr-io/peacemakr-python-sdk | 180bbc2e480ea855dddf0e28c2f27e83a17bfb84 | [
"Apache-2.0"
] | 1 | 2021-04-27T04:12:30.000Z | 2021-04-27T04:12:30.000Z | from peacemakr.exception.peacemakr import PeacemakrError
class FailedToDownloadKeyError(PeacemakrError):
pass
| 22.6 | 56 | 0.867257 | 10 | 113 | 9.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088496 | 113 | 4 | 57 | 28.25 | 0.951456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
8d35967f22397e4c9770c74efc3ce94e973e5b1c | 94 | py | Python | project-euler/completed/euler28.py | davidxmoody/kata | d88569584c2390da0e8127258d8751f8a47b7d83 | [
"MIT"
] | null | null | null | project-euler/completed/euler28.py | davidxmoody/kata | d88569584c2390da0e8127258d8751f8a47b7d83 | [
"MIT"
] | null | null | null | project-euler/completed/euler28.py | davidxmoody/kata | d88569584c2390da0e8127258d8751f8a47b7d83 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
def f(n):
    # closed-form sum of the numbers on the diagonals of a
    # (2n - 1) x (2n - 1) number spiral
    return (16*n**3 - 18*n**2 + 14*n)//3 - 3

print(f(501))
| 13.428571 | 44 | 0.521277 | 21 | 94 | 2.333333 | 0.714286 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186667 | 0.202128 | 94 | 6 | 45 | 15.666667 | 0.466667 | 0.223404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a5d8431b6074219fbbe121f5c4a2bd885656c9e0 | 236 | py | Python | voyager/client/apis/__init__.py | voyager-client/python | d30de935c9cf30fa9e9e4c90714f3868767f4065 | [
"Apache-2.0"
] | null | null | null | voyager/client/apis/__init__.py | voyager-client/python | d30de935c9cf30fa9e9e4c90714f3868767f4065 | [
"Apache-2.0"
] | 1 | 2018-06-24T20:33:11.000Z | 2018-06-24T20:33:11.000Z | voyager/client/apis/__init__.py | voyager-client/python | d30de935c9cf30fa9e9e4c90714f3868767f4065 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
# import apis into api package
from .apis_api import ApisApi
from .voyager_appscode_com_api import VoyagerAppscodeComApi
from .voyager_appscode_com_v1beta1_api import VoyagerAppscodeComV1beta1Api
| 33.714286 | 74 | 0.885593 | 30 | 236 | 6.533333 | 0.5 | 0.137755 | 0.193878 | 0.22449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018779 | 0.097458 | 236 | 6 | 75 | 39.333333 | 0.901408 | 0.118644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a5ea9dd42f0f8b46ebfaa05a8372c4abdd48d434 | 39 | py | Python | binding/python/tasks/__init__.py | SaeidSamadi/Tasks | cb3f29a5545a96df83a7d49730799c90bfb0b6f7 | [
"BSD-2-Clause"
] | 60 | 2016-04-08T05:48:58.000Z | 2022-02-18T16:54:31.000Z | binding/python/tasks/__init__.py | SaeidSamadi/Tasks | cb3f29a5545a96df83a7d49730799c90bfb0b6f7 | [
"BSD-2-Clause"
] | 48 | 2016-04-01T09:50:12.000Z | 2021-08-05T02:12:27.000Z | binding/python/tasks/__init__.py | SaeidSamadi/Tasks | cb3f29a5545a96df83a7d49730799c90bfb0b6f7 | [
"BSD-2-Clause"
] | 21 | 2017-01-10T16:23:34.000Z | 2021-11-18T08:40:20.000Z | from . tasks import *
from . import qp
| 13 | 21 | 0.692308 | 6 | 39 | 4.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 39 | 2 | 22 | 19.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
570d00a6d210f0cf63a281110e9c90f54d4defda | 54 | py | Python | pymarlin/plugins/hf_seq_classification/__init__.py | nifarn/PyMarlin | ea1f5f927aa85112ecebc206d53b5c3ee65704fa | [
"MIT"
] | 20 | 2021-06-09T18:46:45.000Z | 2022-02-09T01:08:13.000Z | pymarlin/plugins/hf_seq_classification/__init__.py | nifarn/PyMarlin | ea1f5f927aa85112ecebc206d53b5c3ee65704fa | [
"MIT"
] | 50 | 2021-06-09T17:50:35.000Z | 2022-02-07T23:02:30.000Z | pymarlin/plugins/hf_seq_classification/__init__.py | nifarn/PyMarlin | ea1f5f927aa85112ecebc206d53b5c3ee65704fa | [
"MIT"
] | 5 | 2021-06-21T22:24:30.000Z | 2021-12-21T17:08:21.000Z | from .implementation import HfSeqClassificationPlugin
| 27 | 53 | 0.907407 | 4 | 54 | 12.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 54 | 1 | 54 | 54 | 0.98 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
570d94cfde8239ed4ad29189ec3149dfe4479b12 | 31 | py | Python | 16_Standard_Library/A_Modules/_this.py | Oscar-Oliveira/Python3 | fa791225a6810b75890d24407b73c5e1b514acbe | [
"MIT"
] | null | null | null | 16_Standard_Library/A_Modules/_this.py | Oscar-Oliveira/Python3 | fa791225a6810b75890d24407b73c5e1b514acbe | [
"MIT"
] | null | null | null | 16_Standard_Library/A_Modules/_this.py | Oscar-Oliveira/Python3 | fa791225a6810b75890d24407b73c5e1b514acbe | [
"MIT"
] | null | null | null | """
this
"""
import this
| 5.166667 | 12 | 0.451613 | 3 | 31 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.354839 | 31 | 5 | 13 | 6.2 | 0.7 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57305ace60f2b0ff94272df5b50ecd62a3681b49 | 412 | py | Python | webhelpers2/html/__init__.py | einSelbst/WebHelpers2 | 1675e2f7e53f296e7750499176be1fabca0454f3 | [
"BSD-3-Clause"
] | null | null | null | webhelpers2/html/__init__.py | einSelbst/WebHelpers2 | 1675e2f7e53f296e7750499176be1fabca0454f3 | [
"BSD-3-Clause"
] | null | null | null | webhelpers2/html/__init__.py | einSelbst/WebHelpers2 | 1675e2f7e53f296e7750499176be1fabca0454f3 | [
"BSD-3-Clause"
] | null | null | null | """HTML generation helpers.
All public objects in the ``webhelpers2.html.builder`` subpackage are also
available in the ``webhelpers2.html`` namespace. Most programs will want
to put this line in their code::
from webhelpers2.html import *
Or you can import the most frequently-used objects explicitly::
from webhelpers2.html import HTML, escape, literal
"""
from webhelpers2.html.builder import *
| 27.466667 | 74 | 0.762136 | 57 | 412 | 5.508772 | 0.614035 | 0.238854 | 0.181529 | 0.127389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014451 | 0.160194 | 412 | 14 | 75 | 29.428571 | 0.893064 | 0.883495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57337b694162528235cc0732ec4fceb620501cd9 | 96 | py | Python | venv/lib/python3.8/site-packages/pkg_resources/tests/data/my-test-package-source/setup.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pkg_resources/tests/data/my-test-package-source/setup.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pkg_resources/tests/data/my-test-package-source/setup.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/32/b7/b3/9779eac646248c26292319a3861838011f21822e1065d1189a4f88ed1f | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.510417 | 0 | 96 | 1 | 96 | 96 | 0.385417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
93a6af7827508aaa109e73400042e24c02afe2c8 | 32 | py | Python | line_overlap/line_overlap/__init__.py | uppinder/uppinder_chugh_test | ead56c8d79e64c528a383d7241b5d458aecda41a | [
"MIT"
] | null | null | null | line_overlap/line_overlap/__init__.py | uppinder/uppinder_chugh_test | ead56c8d79e64c528a383d7241b5d458aecda41a | [
"MIT"
] | null | null | null | line_overlap/line_overlap/__init__.py | uppinder/uppinder_chugh_test | ead56c8d79e64c528a383d7241b5d458aecda41a | [
"MIT"
] | null | null | null | from .core import check_overlap
| 16 | 31 | 0.84375 | 5 | 32 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93ffc34ea684fd991bdd5a9d5322ad54bca71d01 | 1,156 | py | Python | container_service_extension/pksclient/models/__init__.py | tschoergez/container-service-extension | e1fbaf7e9c242a416d3f580880c1051286847cfd | [
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | container_service_extension/pksclient/models/__init__.py | tschoergez/container-service-extension | e1fbaf7e9c242a416d3f580880c1051286847cfd | [
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | container_service_extension/pksclient/models/__init__.py | tschoergez/container-service-extension | e1fbaf7e9c242a416d3f580880c1051286847cfd | [
"BSD-2-Clause",
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
# flake8: noqa
"""
PKS
PKS API # noqa: E501
OpenAPI spec version: 1.1.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import models into model package
from container_service_extension.pksclient.models.cluster import Cluster
from container_service_extension.pksclient.models.cluster_parameters import ClusterParameters
from container_service_extension.pksclient.models.cluster_request import ClusterRequest
from container_service_extension.pksclient.models.compute_profile import ComputeProfile
from container_service_extension.pksclient.models.compute_profile_request import ComputeProfileRequest
from container_service_extension.pksclient.models.error_response import ErrorResponse
from container_service_extension.pksclient.models.network_profile import NetworkProfile
from container_service_extension.pksclient.models.network_profile_request import NetworkProfileRequest
from container_service_extension.pksclient.models.plan import Plan
from container_service_extension.pksclient.models.update_cluster_parameters import UpdateClusterParameters
| 41.285714 | 106 | 0.865052 | 137 | 1,156 | 7.036496 | 0.372263 | 0.134855 | 0.207469 | 0.30083 | 0.536307 | 0.536307 | 0.399378 | 0.240664 | 0 | 0 | 0 | 0.007569 | 0.08564 | 1,156 | 27 | 107 | 42.814815 | 0.904447 | 0.157439 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f53493c87dced752c3e0082e0324838a166997b6 | 89 | py | Python | source/ports/scala_port/src/test/scala/scripts/s1.py | Tabzz98/core | 02ddfe5e0f7ecaa833a8c36dbc059a968479d8ce | [
"Apache-2.0"
] | 5 | 2021-03-08T11:08:23.000Z | 2021-03-11T13:19:23.000Z | source/ports/scala_port/src/test/scala/scripts/s1.py | Tabzz98/core | 02ddfe5e0f7ecaa833a8c36dbc059a968479d8ce | [
"Apache-2.0"
] | null | null | null | source/ports/scala_port/src/test/scala/scripts/s1.py | Tabzz98/core | 02ddfe5e0f7ecaa833a8c36dbc059a968479d8ce | [
"Apache-2.0"
] | null | null | null |
def fn_in_s1():
return 'Hello from s1'
def other_fn_in_s1(x, y):
return x + y
| 11.125 | 26 | 0.617978 | 18 | 89 | 2.777778 | 0.555556 | 0.16 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046154 | 0.269663 | 89 | 7 | 27 | 12.714286 | 0.723077 | 0 | 0 | 0 | 0 | 0 | 0.147727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f54d37c6202a973a739ec8b8cfd315b8d789ad98 | 108 | py | Python | TelegramBot/__init__.py | TomerGoldfeder/teltebot | 8712bc6d929ea05cc83e2ea0f7d1315f686cfce5 | [
"MIT"
] | 2 | 2021-01-24T18:02:22.000Z | 2021-01-26T16:26:30.000Z | TelegramBot/__init__.py | TomerGoldfeder/teltebot | 8712bc6d929ea05cc83e2ea0f7d1315f686cfce5 | [
"MIT"
] | null | null | null | TelegramBot/__init__.py | TomerGoldfeder/teltebot | 8712bc6d929ea05cc83e2ea0f7d1315f686cfce5 | [
"MIT"
] | 1 | 2021-01-24T17:22:51.000Z | 2021-01-24T17:22:51.000Z | from teltebot.TelegramNotifierBot import TelegramBot
from teltebot.TelegramCallback import TelegramNotifier
| 36 | 54 | 0.907407 | 10 | 108 | 9.8 | 0.7 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 108 | 2 | 55 | 54 | 0.98 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1910d724a84254f087970632158cb168f49d7119 | 21 | py | Python | pyrobud/__main__.py | x0x8x/pyrobud | a03fb2c492d4ee5fe2e9f4e6d2b13614b09452e7 | [
"MIT"
] | null | null | null | pyrobud/__main__.py | x0x8x/pyrobud | a03fb2c492d4ee5fe2e9f4e6d2b13614b09452e7 | [
"MIT"
] | null | null | null | pyrobud/__main__.py | x0x8x/pyrobud | a03fb2c492d4ee5fe2e9f4e6d2b13614b09452e7 | [
"MIT"
] | 1 | 2021-09-26T14:10:28.000Z | 2021-09-26T14:10:28.000Z | from . import launch
| 10.5 | 20 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |