hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
06f0db028f642ae4780997de45b1641ca0132cd6 | 23 | py | Python | src/python/__init__.py | mesnardo/petibm-flapping | 0a96126ec89bd22de9065ea2922eecd9d4cc110e | [
"BSD-3-Clause"
] | null | null | null | src/python/__init__.py | mesnardo/petibm-flapping | 0a96126ec89bd22de9065ea2922eecd9d4cc110e | [
"BSD-3-Clause"
] | null | null | null | src/python/__init__.py | mesnardo/petibm-flapping | 0a96126ec89bd22de9065ea2922eecd9d4cc110e | [
"BSD-3-Clause"
] | null | null | null | from .flapping import * | 23 | 23 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b0a99b7770237b772213f4c77073bf8bca6266ca | 151 | py | Python | plotly/graph_objs/layout/yaxis/__init__.py | mprostock/plotly.py | 3471c3dfbf783927c203c676422260586514b341 | [
"MIT"
] | 12 | 2020-04-18T18:10:22.000Z | 2021-12-06T10:11:15.000Z | plotly/graph_objs/layout/yaxis/__init__.py | Vesauza/plotly.py | e53e626d59495d440341751f60aeff73ff365c28 | [
"MIT"
] | 27 | 2020-04-28T21:23:12.000Z | 2021-06-25T15:36:38.000Z | plotly/graph_objs/layout/yaxis/__init__.py | Vesauza/plotly.py | e53e626d59495d440341751f60aeff73ff365c28 | [
"MIT"
] | 6 | 2020-04-18T23:07:08.000Z | 2021-11-18T07:53:06.000Z | from ._title import Title
from plotly.graph_objs.layout.yaxis import title
from ._tickformatstop import Tickformatstop
from ._tickfont import Tickfont
| 30.2 | 48 | 0.854305 | 20 | 151 | 6.25 | 0.5 | 0.176 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10596 | 151 | 4 | 49 | 37.75 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7c07e832bd0c0cd10d051e51f33029eccaf823de | 240 | py | Python | src/core/toga/sources/__init__.py | luizoti/toga | 3c49e685f325f1aba2ce048b253402d7e4519f97 | [
"BSD-3-Clause"
] | 1,261 | 2019-03-31T16:28:47.000Z | 2022-03-31T09:01:23.000Z | src/core/toga/sources/__init__.py | luizoti/toga | 3c49e685f325f1aba2ce048b253402d7e4519f97 | [
"BSD-3-Clause"
] | 597 | 2019-04-02T20:02:42.000Z | 2022-03-30T10:28:47.000Z | src/core/toga/sources/__init__.py | luizoti/toga | 3c49e685f325f1aba2ce048b253402d7e4519f97 | [
"BSD-3-Clause"
] | 318 | 2019-03-31T18:32:00.000Z | 2022-03-30T18:07:13.000Z | from .accessors import to_accessor # noqa: F401
from .base import Source # noqa: F401
from .list_source import ListSource # noqa: F401
from .tree_source import TreeSource # noqa: F401
from .value_source import ValueSource # noqa: F401
| 40 | 51 | 0.770833 | 34 | 240 | 5.323529 | 0.441176 | 0.220994 | 0.265193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 0.166667 | 240 | 5 | 52 | 48 | 0.83 | 0.225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9fee7556736cfdbfd2dfc56f7a9eb646078a8d9a | 152 | py | Python | mfrc522reader/__init__.py | bcurnow/mfrc522-reader | fc9293ef162f7a3223482c1bd3ea39ea0f1170fa | [
"Apache-2.0"
] | 1 | 2021-01-06T16:47:22.000Z | 2021-01-06T16:47:22.000Z | mfrc522reader/__init__.py | bcurnow/mfrc522-reader | fc9293ef162f7a3223482c1bd3ea39ea0f1170fa | [
"Apache-2.0"
] | null | null | null | mfrc522reader/__init__.py | bcurnow/mfrc522-reader | fc9293ef162f7a3223482c1bd3ea39ea0f1170fa | [
"Apache-2.0"
] | null | null | null | # Bring the MFRC class up the top level to avoid having to import mfrc522reader.mfrc522.MFRC522
from mfrc522reader.mfrc522 import MFRC522 # noqa: F401
| 50.666667 | 95 | 0.809211 | 23 | 152 | 5.347826 | 0.695652 | 0.325203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 0.151316 | 152 | 2 | 96 | 76 | 0.790698 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9ff3599cff2fbc7200e5b32db7f775087195a51d | 42 | py | Python | scripts/figures_for_paper/__init__.py | fang-ren/on_the_fly_assessment | 102a7985d1765b11e6a7fdc1a11ac973cbc5fe3d | [
"BSD-3-Clause-LBNL"
] | 1 | 2017-03-02T23:42:19.000Z | 2017-03-02T23:42:19.000Z | scripts/on_the_fly_assessment/__init__.py | fang-ren/on_the_fly_assessment | 102a7985d1765b11e6a7fdc1a11ac973cbc5fe3d | [
"BSD-3-Clause-LBNL"
] | null | null | null | scripts/on_the_fly_assessment/__init__.py | fang-ren/on_the_fly_assessment | 102a7985d1765b11e6a7fdc1a11ac973cbc5fe3d | [
"BSD-3-Clause-LBNL"
] | 4 | 2017-08-07T15:12:17.000Z | 2019-12-24T13:08:10.000Z | """
author: Fang Ren (SSRL)
4/27/2017
""" | 8.4 | 23 | 0.571429 | 7 | 42 | 3.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.166667 | 42 | 5 | 24 | 8.4 | 0.485714 | 0.809524 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c661855013101673976e18f73fd20d900249042e | 1,677 | py | Python | src/anaplanConnector/endpoints.py | matt-budd/anaplan-connector | 885a9efc81973129dc76d86962e943aa3ec8b570 | [
"MIT"
] | null | null | null | src/anaplanConnector/endpoints.py | matt-budd/anaplan-connector | 885a9efc81973129dc76d86962e943aa3ec8b570 | [
"MIT"
] | null | null | null | src/anaplanConnector/endpoints.py | matt-budd/anaplan-connector | 885a9efc81973129dc76d86962e943aa3ec8b570 | [
"MIT"
] | null | null | null | class Endpoints:
def __init__(self,workspaceId=None,modelId=None):
self.workspaceId = workspaceId
self.modelId = modelId
self.fileId = None
self.auth = 'https://auth.anaplan.com'
self.api = 'https://api.anaplan.com/2/0'
self.token = f'{self.auth}/token/authenticate'
self.workspaces = f'{self.api}/workspaces'
def models(self):
return f'{self.api}/workspaces/{self.workspaceId}/models'
def files(self):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/files'
def file(self):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/files/{self.fileId}'
def processes(self):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/processes'
def runProcess(self, processId):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/processes/{processId}/tasks'
def chunk(self, fileId, chunkNum):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/files/{fileId}/chunks/{chunkNum}'
def exports(self):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/exports'
def startExport(self,exportId):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/exports/{exportId}/tasks'
def taskStatus(self, exportId, taskId):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/exports/{exportId}/tasks/{taskId}'
def getNumChunks(self, fileId):
return f'{self.api}/workspaces/{self.workspaceId}/models/{self.modelId}/files/{fileId}/chunks'
| 40.902439 | 114 | 0.673822 | 203 | 1,677 | 5.546798 | 0.182266 | 0.159858 | 0.078153 | 0.175844 | 0.602131 | 0.602131 | 0.602131 | 0.602131 | 0.602131 | 0.558615 | 0 | 0.001426 | 0.163387 | 1,677 | 41 | 115 | 40.902439 | 0.80114 | 0 | 0 | 0 | 0 | 0.206897 | 0.532181 | 0.501788 | 0 | 0 | 0 | 0 | 0 | 1 | 0.37931 | false | 0 | 0 | 0.344828 | 0.758621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c675f662e9f0ac049c0c05214976d0ed4b9df2a1 | 3,574 | py | Python | deepqmc/wavefunction/radial_functions.py | NLESC-JCER/DeepQMC | 1e1ed24bf8e0de43be68bcd966425c119359c8b8 | [
"Apache-2.0"
] | 6 | 2019-12-10T22:49:51.000Z | 2020-06-19T08:23:32.000Z | deepqmc/wavefunction/radial_functions.py | NLESC-JCER/QMC | 1e1ed24bf8e0de43be68bcd966425c119359c8b8 | [
"Apache-2.0"
] | 10 | 2019-08-19T08:01:44.000Z | 2020-01-07T12:09:51.000Z | deepqmc/wavefunction/radial_functions.py | NLESC-JCER/QMC | 1e1ed24bf8e0de43be68bcd966425c119359c8b8 | [
"Apache-2.0"
] | 2 | 2019-09-30T22:48:15.000Z | 2020-06-19T08:23:39.000Z | import torch
def radial_slater(
R,
bas_n,
bas_exp,
xyz=None,
derivative=0,
jacobian=True):
"""Compute the radial part of STOs (or its derivative).
Arguments:
R {torch.tensor} -- distance between each electron and each atom
bas_n {torch.tensor} -- principal quantum number
bas_exp {torch.tensor} -- exponents of the exponential
Keyword Arguments:
xyz {torch.tensor} -- positions of the electrons (needed for derivative) (default: {None})
derivative {int} -- degree of the derivative (default: {0})
jacobian {bool} -- return the jacobian, i.e the sum of the gradients (default: {True})
Returns:
torch.tensor -- values of each orbital radial part at each position
"""
if derivative == 0:
return R**bas_n * torch.exp(-bas_exp * R)
elif derivative > 0:
rn = R**(bas_n)
nabla_rn = (bas_n * R**(bas_n - 2)).unsqueeze(-1) * xyz
er = torch.exp(-bas_exp * R)
nabla_er = -(bas_exp * er).unsqueeze(-1) * \
xyz / R.unsqueeze(-1)
if derivative == 1:
if jacobian:
nabla_rn = nabla_rn.sum(3)
nabla_er = nabla_er.sum(3)
return nabla_rn * er + rn * nabla_er
else:
return nabla_rn * \
er.unsqueeze(-1) + rn.unsqueeze(-1) * nabla_er
elif derivative == 2:
sum_xyz2 = (xyz**2).sum(3)
lap_rn = bas_n * (3 * R**(bas_n - 2) +
sum_xyz2 * (bas_n - 2) * R**(bas_n - 4))
lap_er = bas_exp**2 * er * sum_xyz2 / R**2 \
- 2 * bas_exp * er * sum_xyz2 / R**3
return lap_rn * er + 2 * \
(nabla_rn * nabla_er).sum(3) + rn * lap_er
def radial_gaussian(
R,
bas_n,
bas_exp,
xyz=None,
derivative=0,
jacobian=True):
"""Compute the radial part of GTOs (or its derivative).
Arguments:
R {torch.tensor} -- distance between each electron and each atom
bas_n {torch.tensor} -- principal quantum number
bas_exp {torch.tensor} -- exponents of the exponential
Keyword Arguments:
xyz {torch.tensor} -- positions of the electrons (needed for derivative) (default: {None})
derivative {int} -- degree of the derivative (default: {0})
jacobian {bool} -- return the jacobian, i.e the sum of the gradients (default: {True})
Returns:
torch.tensor -- values of each orbital radial part at each position
"""
if derivative == 0:
return R**bas_n * torch.exp(-bas_exp * R**2)
elif derivative > 0:
rn = R**(bas_n)
nabla_rn = (bas_n * R**(bas_n - 2)).unsqueeze(-1) * xyz
er = torch.exp(-bas_exp * R**2)
nabla_er = -2 * (bas_exp * er).unsqueeze(-1) * xyz
if derivative == 1:
if jacobian:
nabla_rn = nabla_rn.sum(3)
nabla_er = nabla_er.sum(3)
return nabla_rn * er + rn * nabla_er
else:
return nabla_rn * \
er.unsqueeze(-1) + rn.unsqueeze(-1) * nabla_er
elif derivative == 2:
lap_rn = bas_n * (3 * R**(bas_n - 2)
+ (xyz**2).sum(3) * (bas_n - 2) * R**(bas_n - 4))
lap_er = 4 * bas_exp**2 * (xyz**2).sum(3) * er \
- 6 * bas_exp * er
return lap_rn * er + 2 * \
(nabla_rn * nabla_er).sum(3) + rn * lap_er
| 31.078261 | 98 | 0.521544 | 473 | 3,574 | 3.788584 | 0.147992 | 0.044643 | 0.033482 | 0.03125 | 0.914621 | 0.905134 | 0.88058 | 0.88058 | 0.88058 | 0.844866 | 0 | 0.02573 | 0.358422 | 3,574 | 114 | 99 | 31.350877 | 0.755778 | 0.334359 | 0 | 0.688525 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0 | 0.016393 | 0 | 0.180328 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c686c15b9e3b0ce20f4fcceacb7366bca04f08d2 | 270 | py | Python | blocks/kafka/__init__.py | severstal-digital/typed-blocks | 276e65d22772057ba58198332406274d06b87788 | [
"Apache-2.0"
] | null | null | null | blocks/kafka/__init__.py | severstal-digital/typed-blocks | 276e65d22772057ba58198332406274d06b87788 | [
"Apache-2.0"
] | null | null | null | blocks/kafka/__init__.py | severstal-digital/typed-blocks | 276e65d22772057ba58198332406274d06b87788 | [
"Apache-2.0"
] | null | null | null | from blocks.kafka.app import KafkaApp
from blocks.kafka.events import Batch, CommitEvent, NoNewEvents
from blocks.kafka.topics import InputTopic, OutputTopic
from blocks.kafka.sources import KafkaSource
from blocks.kafka.processors import KafkaProducer, OffsetCommitter
| 45 | 66 | 0.859259 | 34 | 270 | 6.823529 | 0.529412 | 0.215517 | 0.323276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 270 | 5 | 67 | 54 | 0.943089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c6984f6951db319db071cb9f8624bcd686f7ebc3 | 976 | py | Python | scrabble_player/utils/board_configurations.py | CodyWoolaver/scrabble-player | 1e81030e68b783b4842b345fd3b7db20a4f99891 | [
"MIT"
] | null | null | null | scrabble_player/utils/board_configurations.py | CodyWoolaver/scrabble-player | 1e81030e68b783b4842b345fd3b7db20a4f99891 | [
"MIT"
] | 1 | 2019-12-21T00:23:39.000Z | 2019-12-21T00:23:39.000Z | scrabble_player/utils/board_configurations.py | CodyWoolaver/ScrabblePlayer | 1e81030e68b783b4842b345fd3b7db20a4f99891 | [
"MIT"
] | null | null | null | # - DefaultTile
# 1 TrippleWord
# 2 TrippleLetter
# 3 DoubleWord
# 4 DoubleLetter
# 5 Center
DATA = {
"Scrabble": (
15, 15,
"1--4---1---4--1"
"-3---2---2---3-"
"--3---4-4---3--"
"4--3---4---3---"
"----3-----3----"
"-2---2---2---2-"
"--4---4-4---4--"
"1--4---5---4--1"
"--4---4-4---4--"
"-2---2---2---2-"
"----3-----3----"
"4--3---4---3---"
"--3---4-4---3--"
"-3---2---2---3-"
"1--4---1---4--1"
),
"Words With Friends": (
15, 15,
"---1--2-2--1---"
"--4--3---3--4--"
"-4--4-----4--4-"
"1--2---3---2--1"
"--4---4-4---4--"
"-3---2---2---3-"
"2---4-----4---2"
"---3-------3---"
"2---4-----4---2"
"-3---2---2---3-"
"--4---4-4---4--"
"1--2---3---2--1"
"-4--4-----4--4-"
"--4--3---3--4--"
"---1--2-2--1---"
)
}
| 21.217391 | 27 | 0.228484 | 139 | 976 | 1.604317 | 0.122302 | 0.215247 | 0.188341 | 0.143498 | 0.403587 | 0.224215 | 0.116592 | 0.116592 | 0.116592 | 0.116592 | 0 | 0.206573 | 0.345287 | 976 | 45 | 28 | 21.688889 | 0.14241 | 0.081967 | 0 | 0.789474 | 0 | 0 | 0.535433 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c69d559012d0683664f4e35cc02818f3b1a40a87 | 199 | py | Python | rosys/communication/__init__.py | zauberzeug/rosys | 10271c88ffd5dcc4fb8eec93d46fe4144a9e40d8 | [
"MIT"
] | 1 | 2022-02-20T08:21:07.000Z | 2022-02-20T08:21:07.000Z | rosys/communication/__init__.py | zauberzeug/rosys | 10271c88ffd5dcc4fb8eec93d46fe4144a9e40d8 | [
"MIT"
] | 1 | 2022-03-08T12:46:09.000Z | 2022-03-08T12:46:09.000Z | rosys/communication/__init__.py | zauberzeug/rosys | 10271c88ffd5dcc4fb8eec93d46fe4144a9e40d8 | [
"MIT"
] | null | null | null | from .communication import Communication
from .communication_factory import CommunicationFactory
from .serial_communication import SerialCommunication
from .web_communication import WebCommunication
| 39.8 | 55 | 0.899497 | 19 | 199 | 9.263158 | 0.473684 | 0.323864 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080402 | 199 | 4 | 56 | 49.75 | 0.961749 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c6b5d0c6fa4f472b0570b72bd7c52e59fa20d381 | 229 | py | Python | src/datalabs/operations/aggregate/__init__.py | xcfcode/DataLab | d1a310de4986cb704b1fe3dea859452b8c14fc71 | [
"Apache-2.0"
] | null | null | null | src/datalabs/operations/aggregate/__init__.py | xcfcode/DataLab | d1a310de4986cb704b1fe3dea859452b8c14fc71 | [
"Apache-2.0"
] | null | null | null | src/datalabs/operations/aggregate/__init__.py | xcfcode/DataLab | d1a310de4986cb704b1fe3dea859452b8c14fc71 | [
"Apache-2.0"
] | null | null | null | from .general import *
from .aggregating import aggregating
# from .text_classification import *
# from .sequence_labeling import *
# from .summarization import *
# from .text_matching import *
# from .kg_link_prediction import * | 32.714286 | 36 | 0.781659 | 27 | 229 | 6.444444 | 0.481481 | 0.287356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139738 | 229 | 7 | 37 | 32.714286 | 0.883249 | 0.694323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
05a14d872479130cb7320d6733aa4c5a9cf86919 | 44 | py | Python | Helloworld.py | Ehsan746/Network-1 | 2587134ee4e6be585f37d588c276ca5acf29d6fd | [
"MIT"
] | null | null | null | Helloworld.py | Ehsan746/Network-1 | 2587134ee4e6be585f37d588c276ca5acf29d6fd | [
"MIT"
] | null | null | null | Helloworld.py | Ehsan746/Network-1 | 2587134ee4e6be585f37d588c276ca5acf29d6fd | [
"MIT"
] | null | null | null | import time
print(time.strftime("%H : %M"))
| 14.666667 | 31 | 0.659091 | 7 | 44 | 4.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 2 | 32 | 22 | 0.74359 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
05b4636d7c1f721f3e42727ce1c917798670e05f | 859 | py | Python | nfv/nfv-common/nfv_common/debug/__init__.py | SidneyAn/nfv | 5f0262a5b6ea4be59f977b9c587c483cbe0e373d | [
"Apache-2.0"
] | 2 | 2020-02-07T19:01:36.000Z | 2022-02-23T01:41:46.000Z | nfv/nfv-common/nfv_common/debug/__init__.py | SidneyAn/nfv | 5f0262a5b6ea4be59f977b9c587c483cbe0e373d | [
"Apache-2.0"
] | 1 | 2021-01-14T12:02:25.000Z | 2021-01-14T12:02:25.000Z | nfv/nfv-common/nfv_common/debug/__init__.py | SidneyAn/nfv | 5f0262a5b6ea4be59f977b9c587c483cbe0e373d | [
"Apache-2.0"
] | 2 | 2021-01-13T08:39:21.000Z | 2022-02-09T00:21:55.000Z | # Copyright (c) 2015-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from nfv_common.debug._debug_defs import DEBUG_LEVEL # noqa: F401
from nfv_common.debug._debug_log import debug_dump_loggers # noqa: F401
from nfv_common.debug._debug_log import debug_get_logger # noqa: F401
from nfv_common.debug._debug_log import debug_trace # noqa: F401
from nfv_common.debug._debug_module import debug_deregister_config_change_callback # noqa: F401
from nfv_common.debug._debug_module import debug_finalize # noqa: F401
from nfv_common.debug._debug_module import debug_get_config # noqa: F401
from nfv_common.debug._debug_module import debug_initialize # noqa: F401
from nfv_common.debug._debug_module import debug_register_config_change_callback # noqa: F401
from nfv_common.debug._debug_module import debug_reload_config # noqa: F401
| 57.266667 | 96 | 0.828871 | 134 | 859 | 4.940299 | 0.283582 | 0.10574 | 0.196375 | 0.271903 | 0.734139 | 0.699396 | 0.699396 | 0.699396 | 0.699396 | 0.699396 | 0 | 0.052219 | 0.108265 | 859 | 14 | 97 | 61.357143 | 0.81201 | 0.225844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
05dcad0a4251c714e16f9f9946953cb0b8e8c088 | 30 | py | Python | autoedakit.py | ankitshaw/auto-eda | 0836f64390e5c5268034c4808c6825a5aeffb227 | [
"Apache-2.0"
] | null | null | null | autoedakit.py | ankitshaw/auto-eda | 0836f64390e5c5268034c4808c6825a5aeffb227 | [
"Apache-2.0"
] | null | null | null | autoedakit.py | ankitshaw/auto-eda | 0836f64390e5c5268034c4808c6825a5aeffb227 | [
"Apache-2.0"
] | null | null | null | print("Welcome to AutoEdaKit") | 30 | 30 | 0.8 | 4 | 30 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 30 | 1 | 30 | 30 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.677419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
af15542db96df1f41a5761fe715fe0073513c3d9 | 2,492 | py | Python | core/migrations/0037_remove_facet_models.py | kingsdigitallab/egomedia-django | 7347fc96ca52ac195b388e43204d0a0faab0c88f | [
"MIT"
] | null | null | null | core/migrations/0037_remove_facet_models.py | kingsdigitallab/egomedia-django | 7347fc96ca52ac195b388e43204d0a0faab0c88f | [
"MIT"
] | 10 | 2021-04-06T18:17:44.000Z | 2022-03-01T12:21:40.000Z | core/migrations/0037_remove_facet_models.py | kingsdigitallab/egomedia-django | 7347fc96ca52ac195b388e43204d0a0faab0c88f | [
"MIT"
] | null | null | null | # Generated by Django 2.2.2 on 2019-08-07 10:46
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('core', '0036_add_facettype_to_basefacet'),
]
operations = [
migrations.AlterUniqueTogether(
name='focus',
unique_together=None,
),
migrations.RemoveField(
model_name='focus',
name='facet_type',
),
migrations.AlterUniqueTogether(
name='keyword',
unique_together=None,
),
migrations.RemoveField(
model_name='keyword',
name='facet_type',
),
migrations.AlterUniqueTogether(
name='method',
unique_together=None,
),
migrations.RemoveField(
model_name='method',
name='facet_type',
),
migrations.RemoveField(
model_name='projectpage',
name='disciplines',
),
migrations.RemoveField(
model_name='projectpage',
name='focus',
),
migrations.RemoveField(
model_name='projectpage',
name='keywords',
),
migrations.RemoveField(
model_name='projectpage',
name='methods',
),
migrations.RemoveField(
model_name='researcherpage',
name='disciplines',
),
migrations.RemoveField(
model_name='researcherpage',
name='focus',
),
migrations.RemoveField(
model_name='researcherpage',
name='keywords',
),
migrations.RemoveField(
model_name='researcherpage',
name='methods',
),
migrations.RemoveField(
model_name='themepage',
name='disciplines',
),
migrations.RemoveField(
model_name='themepage',
name='focus',
),
migrations.RemoveField(
model_name='themepage',
name='keywords',
),
migrations.RemoveField(
model_name='themepage',
name='methods',
),
migrations.DeleteModel(
name='Discipline',
),
migrations.DeleteModel(
name='Focus',
),
migrations.DeleteModel(
name='Keyword',
),
migrations.DeleteModel(
name='Method',
),
]
| 25.428571 | 52 | 0.504013 | 174 | 2,492 | 7.074713 | 0.252874 | 0.25589 | 0.316816 | 0.365556 | 0.703493 | 0.703493 | 0.116978 | 0 | 0 | 0 | 0 | 0.012475 | 0.388844 | 2,492 | 97 | 53 | 25.690722 | 0.795798 | 0.018058 | 0 | 0.824176 | 1 | 0 | 0.146421 | 0.012679 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.010989 | 0 | 0.043956 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
af1bcd524f8d79099d180bc5b17ccac2ef92f8b7 | 39 | py | Python | wifi_password/__main__.py | topdefaultuser/wifi-password | 3ac73b8f48cb520158ca1eec93ed2261f4a6b61a | [
"MIT"
] | 2,552 | 2021-01-25T21:43:11.000Z | 2022-03-30T10:45:00.000Z | wifi_password/__main__.py | topdefaultuser/wifi-password | 3ac73b8f48cb520158ca1eec93ed2261f4a6b61a | [
"MIT"
] | 69 | 2021-01-25T22:15:58.000Z | 2022-01-22T16:01:21.000Z | wifi_password/__main__.py | topdefaultuser/wifi-password | 3ac73b8f48cb520158ca1eec93ed2261f4a6b61a | [
"MIT"
] | 293 | 2021-01-26T12:44:54.000Z | 2022-03-30T01:50:04.000Z | from wifi_password import main
main()
| 9.75 | 30 | 0.794872 | 6 | 39 | 5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 39 | 3 | 31 | 13 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
af5d1af13ddda9e452c400b539ca07bd6b2b3da4 | 33,130 | py | Python | jogo_de_cartas_21.py | danilodelucio/Jogo_de_Cartas_21 | 28159c4967db03041830c23b555884db1830d8bf | [
"MIT"
] | null | null | null | jogo_de_cartas_21.py | danilodelucio/Jogo_de_Cartas_21 | 28159c4967db03041830c23b555884db1830d8bf | [
"MIT"
] | null | null | null | jogo_de_cartas_21.py | danilodelucio/Jogo_de_Cartas_21 | 28159c4967db03041830c23b555884db1830d8bf | [
"MIT"
] | null | null | null | from funcoes import *
from random import randint
from time import sleep
# IDIOMA DO JOGO
language = 0
while True:
try:
language = int(input('[1] ENGLISH\n[2] PORTUGUÊS\n-> '))
except:
linha()
print('PLEASE SELECT A LANGUAGE! / POR FAVOR, SELECIONE UM IDIOMA!')
linha()
continue
if language == 1:
linha()
print('<<< English language selected >>>')
linha()
break
elif language == 2:
linha()
print('<<< Idioma em Português selecionado >>>')
linha()
break
else:
linha()
print('PLEASE SELECT A LANGUAGE! / POR FAVOR, SELECIONE UM IDIOMA!')
linha()
idiom(language,
' WELCOME TO 21 CARD GAME \n',
' BEM VINDO AO JOGO DE CARTAS 21 \n')
# SINGLE PLAYER OR MULTIPLAYER MODE
playerMode = 0
while True:
try:
if language == 1:
playerMode = int(input('[1] SINGLE PLAYER\n[2] 1 VS 1\n[3] GAME RULES\n-> '))
elif language == 2:
playerMode = int(input('[1] UM JOGADOR\n[2] 1 CONTRA 1\n[3] REGRAS DO JOGO\n-> '))
except:
msgERROR(language)
continue
if playerMode == 1:
linha()
idiom(language,
'<<< Single Player mode selected >>>',
'<<< Modo de UM JOGADOR selecionado >>>')
linha()
break
elif playerMode == 2:
linha()
idiom(language,
'<<< 1 vs 1 mode selected >>>',
'<<< Modo de 1 CONTRA 1 selecionado >>>')
linha()
break
elif playerMode == 3:
regras(language)
continue
else:
msgERROR(language)
continue
# NAMES
nome_Player1 = ''
nome_Player2 = ''
if playerMode == 1:
    while True:
        try:
            if language == 1:
                nome_Player1 = str(input('Type your name: ')).title().strip()
            elif language == 2:
                nome_Player1 = str(input('Digite seu nome: ')).title().strip()
        except:
            msgERROR(language)
            continue
        if nome_Player1.isnumeric():
            msgERROR(language)
            continue
        elif nome_Player1 == '':
            msgERROR(language)
            continue
        else:
            break
if playerMode == 2:
    # PLAYER 1
    while True:
        try:
            if language == 1:
                nome_Player1 = str(input('Player1 name: ')).upper().strip()
            elif language == 2:
                nome_Player1 = str(input('Nome do Jogador1: ')).upper().strip()
        except:
            msgERROR(language)
            continue
        if nome_Player1 == '':
            msgERROR(language)
            continue
        elif nome_Player1.isnumeric():
            msgERROR(language)
            continue
        else:
            break
    # PLAYER 2
    while True:
        try:
            if language == 1:
                nome_Player2 = str(input('Player2 name: ')).upper().strip()
            elif language == 2:
                nome_Player2 = str(input('Nome do Jogador2: ')).upper().strip()
        except:
            msgERROR(language)
            continue
        if nome_Player2 == '':
            msgERROR(language)
            continue
        elif nome_Player2.isnumeric():
            msgERROR(language)
            continue
        else:
            break
print()
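In the match loops that follow, every pair of draws re-rolls in a `while` loop until the two results differ. `random.sample` returns distinct picks in a single call; this is a sketch of an equivalent draw, and the function name `sortear_cartas_distintas` is invented for the example.

```python
from random import sample

def sortear_cartas_distintas(valor):
    """Draw two distinct face values and two distinct suit indexes, no re-roll loop."""
    v1, v2 = sample(valor, 2)     # distinct faces in one call
    n1, n2 = sample(range(4), 2)  # distinct suit indexes 0..3
    return v1, v2, n1, n2
```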
# ACUMULADORES 01
vitoriasP1 = vitoriasP2 = derrotas = empates = partidas = 0
# PLAYER 1 MODE
if playerMode == 1:
    while True:
        valor = ['A', 2, 3, 4, 5, 6, 7, 8, 9, 10, 'J', 'Q', 'K']
        naipe = []
        if language == 1:
            naipe = ['Spades', 'Hearts', 'Clubs', 'Diamonds']
        elif language == 2:
            naipe = ['Espadas', 'Copas', 'Paus', 'Ouros']
        # SORTEIO DE 2 CARTAS (VALORES E NAIPES) / CARTAS DE ENTRADA
        sorteio_valor1 = valor[randint(0, 12)]
        sorteio_valor2 = valor[randint(0, 12)]
        while True:
            if sorteio_valor2 == sorteio_valor1:
                sorteio_valor2 = valor[randint(0, 12)]
            elif sorteio_valor2 != sorteio_valor1:
                break
        sorteio_valor1_BOT = valor[randint(0, 12)]
        sorteio_valor2_BOT = valor[randint(0, 12)]
        while True:
            if sorteio_valor2_BOT == sorteio_valor1_BOT:
                sorteio_valor2_BOT = valor[randint(0, 12)]
            elif sorteio_valor2_BOT != sorteio_valor1_BOT:
                break
        sorteio_naipe1 = randint(0, 3)
        sorteio_naipe2 = randint(0, 3)
        while True:
            if sorteio_naipe2 == sorteio_naipe1:
                sorteio_naipe2 = randint(0, 3)
            elif sorteio_naipe2 != sorteio_naipe1:
                break
        sorteio_naipe1_BOT = randint(0, 3)
        sorteio_naipe2_BOT = randint(0, 3)
        while True:
            if sorteio_naipe2_BOT == sorteio_naipe1_BOT:
                sorteio_naipe2_BOT = randint(0, 3)
            elif sorteio_naipe2_BOT != sorteio_naipe1_BOT:
                break
        # MOSTRANDO O VALOR DA CARTA E O NAIPE
        idiom(language,
              'SHUFFLING THE CARDS...',
              'EMBARALHANDO AS CARTAS...')
        print()
        sleep(1)
        idiom(language,
              f'Cards from the player {nome_Player1}:',
              f'Cartas do jogador {nome_Player1}:')
        sleep(1)
        de = ''
        if language == 1:
            de = 'of'
        elif language == 2:
            de = 'de'
        carta1 = f'{sorteio_valor1} {de} {naipe[sorteio_naipe1]}'
        print(carta1)
        sleep(1)
        carta2 = f'{sorteio_valor2} {de} {naipe[sorteio_naipe2]}'
        print(carta2)
        sleep(1)
        print()
        cartas_Player1 = [carta1, carta2]
        idiom(language,
              'Cards from BOT:',
              'Cartas do BOT:')
        sleep(1)
        carta3 = f'{sorteio_valor1_BOT} {de} {naipe[sorteio_naipe1_BOT]}'
        print(carta3)
        sleep(1)
        carta4 = f'{sorteio_valor2_BOT} {de} {naipe[sorteio_naipe2_BOT]}'
        print(carta4)
        sleep(1)
        cartas_BOT = [carta3, carta4]
        # SOMA DAS CARTAS
        print()
        linha()
        soma1 = validacaoLetras(sorteio_valor1) + validacaoLetras(sorteio_valor2)
        soma_BOT = validacaoLetras(sorteio_valor1_BOT) + validacaoLetras(sorteio_valor2_BOT)
        if language == 1:
            print(f'The total sum of the two cards from {nome_Player1}: {soma1}')
            print()
            print(f'The total sum of the two cards from BOT: {soma_BOT}')
        elif language == 2:
            print(f'Soma total das duas cartas do {nome_Player1}: {soma1}')
            print()
            print(f'Soma total das duas cartas do BOT: {soma_BOT}')
        linha()
        # ACUMULADORES 02
        somaFinal_Player1 = soma1 + 0
        somaFinal_BOT = soma_BOT + 0
        jogadaParada_BOT = 0
        while True:
            comprar = ''
            while True:
                try:
                    if language == 1:
                        comprar = str(input(f'Do you wish to take another card {nome_Player1}? [Y/N] ')).upper().strip()[0]
                        if comprar == 'Y':
                            comprar = 'S'
                    elif language == 2:
                        comprar = str(input(f'Deseja comprar mais uma carta {nome_Player1}? [S/N] ')).upper().strip()[0]
                except IndexError:
                    msgERROR(language)
                    continue
                if comprar.isnumeric():
                    msgERROR(language)
                    continue
                elif comprar == 'N' or comprar == 'S':
                    break
                else:
                    msgERROR(language)
                    continue
            if comprar == 'S':
                # COMPRAR CARTA EXTRA
                sorteio_valorExtra = valor[randint(0, 12)]
                sorteio_naipeExtra = randint(0, 3)
                carta_Extra = f'{sorteio_valorExtra} {de} {naipe[sorteio_naipeExtra]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_Extra}.',
                      f'Carta Extra Sorteada: {carta_Extra}.')
                print()
                cartas_Player1.append(carta_Extra)
                somaFinal_Player1 += validacaoLetras(sorteio_valorExtra)
                sleep(1)
                idiom(language,
                      f'Total sum: {somaFinal_Player1}.',
                      f'Soma total: {somaFinal_Player1}.')
                linha()
                if somaFinal_Player1 == 21:
                    vitoria(language)
                    vitoriasP1 += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if somaFinal_Player1 > 21:
                    Player1_Estourou(language)
                    derrotas += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if somaFinal_Player1 == somaFinal_BOT and jogadaParada_BOT == 1:
                    empate(language)
                    empates += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
            elif comprar == 'N':
                paradaPlayer1(language, nome_Player1, somaFinal_Player1)
                if jogadaParada_BOT == 1:
                    idiom(language,
                          f'The BOT stopped at {somaFinal_BOT}.',
                          f'O BOT tinha parado no valor {somaFinal_BOT}.')
                    linha()
                    if somaFinal_BOT < somaFinal_Player1 < 21:
                        break
                    else:
                        continue
            # JOGADA DO BOT
            sleep(1)
            idiom(language,
                  "Now it's BOT's turn...",
                  'Agora é a vez do BOT...')
            sleep(1)
            if somaFinal_BOT == 21:
                BOT_venceu(language)
                derrotas += 1
                status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                break
            if somaFinal_BOT > 21:
                BOT_Estourou(language)
                vitoriasP1 += 1
                status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                break
            if comprar == 'N' and somaFinal_BOT > somaFinal_Player1 and jogadaParada_BOT == 1:
                break
            if comprar == 'S' and somaFinal_BOT == somaFinal_Player1 and somaFinal_BOT >= 18:
                paradaBOT(language, somaFinal_BOT)
                jogadaParada_BOT = 1
                print()
                linha()
                continue
            if comprar == 'S' and somaFinal_BOT == somaFinal_Player1 and somaFinal_BOT < 18:
                # --------------------- / / ----------------------- #
                idiom(language,
                      'The BOT is taking a card... ',
                      'O BOT está comprando mais uma carta... ')
                sorteio_valor_extra_bot = valor[randint(0, 12)]
                sorteio_naipe_extra_bot = randint(0, 3)
                carta_extra_bot = f'{sorteio_valor_extra_bot} {de} {naipe[sorteio_naipe_extra_bot]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_extra_bot}',
                      f'Carta Extra Sorteada: {carta_extra_bot}')
                print()
                cartas_BOT.append(carta_extra_bot)
                somaFinal_BOT += validacaoLetras(sorteio_valor_extra_bot)
                sleep(1)
                idiom(language,
                      f'Total sum of the cards from BOT: {somaFinal_BOT}.',
                      f'Soma total das cartas do BOT: {somaFinal_BOT}.')
                linha()
                # --------------------- / / ----------------------- #
                if somaFinal_BOT == 21:
                    BOT_venceu(language)
                    derrotas += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if somaFinal_BOT > 21:
                    BOT_Estourou(language)
                    vitoriasP1 += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                elif somaFinal_BOT > somaFinal_Player1:
                    continue
            while comprar == 'N' and somaFinal_BOT == somaFinal_Player1 and somaFinal_BOT < 12:
                # --------------------- / / ----------------------- #
                idiom(language,
                      'The BOT is taking a card... ',
                      'O BOT está comprando mais uma carta... ')
                sorteio_valor_extra_bot = valor[randint(0, 12)]
                sorteio_naipe_extra_bot = randint(0, 3)
                carta_extra_bot = f'{sorteio_valor_extra_bot} {de} {naipe[sorteio_naipe_extra_bot]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_extra_bot}',
                      f'Carta Extra Sorteada: {carta_extra_bot}')
                print()
                cartas_BOT.append(carta_extra_bot)
                somaFinal_BOT += validacaoLetras(sorteio_valor_extra_bot)
                sleep(1)
                idiom(language,
                      f'Total sum of the cards from BOT: {somaFinal_BOT}.',
                      f'Soma total das cartas do BOT: {somaFinal_BOT}.')
                linha()
                # --------------------- / / ----------------------- #
                if somaFinal_BOT == 21:
                    BOT_venceu(language)
                    derrotas += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if somaFinal_BOT > 21:
                    BOT_Estourou(language)
                    vitoriasP1 += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                elif somaFinal_BOT > somaFinal_Player1:
                    break
            if comprar == 'N' and somaFinal_BOT == somaFinal_Player1:
                empate(language)
                empates += 1
                status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                break
            while comprar == 'N' and somaFinal_BOT < somaFinal_Player1:
                # --------------------- / / ----------------------- #
                idiom(language,
                      'The BOT is taking a card... ',
                      'O BOT está comprando mais uma carta... ')
                sorteio_valor_extra_bot = valor[randint(0, 12)]
                sorteio_naipe_extra_bot = randint(0, 3)
                carta_extra_bot = f'{sorteio_valor_extra_bot} {de} {naipe[sorteio_naipe_extra_bot]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_extra_bot}',
                      f'Carta Extra Sorteada: {carta_extra_bot}')
                print()
                cartas_BOT.append(carta_extra_bot)
                somaFinal_BOT += validacaoLetras(sorteio_valor_extra_bot)
                sleep(1)
                idiom(language,
                      f'Total sum of the cards from BOT: {somaFinal_BOT}.',
                      f'Soma total das cartas do BOT: {somaFinal_BOT}.')
                linha()
                # --------------------- / / ----------------------- #
                if somaFinal_BOT == 21:
                    BOT_venceu(language)
                    derrotas += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if somaFinal_BOT > 21:
                    BOT_Estourou(language)
                    vitoriasP1 += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                elif somaFinal_BOT > somaFinal_Player1:
                    break
                elif somaFinal_BOT == somaFinal_Player1:
                    empate(language)
                    empates += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
            if comprar == 'N' and somaFinal_BOT == somaFinal_Player1:
                break
            if comprar == 'S' and somaFinal_BOT < somaFinal_Player1 and jogadaParada_BOT == 0:
                # --------------------- / / ----------------------- #
                idiom(language,
                      'The BOT is taking a card... ',
                      'O BOT está comprando mais uma carta... ')
                sorteio_valor_extra_bot = valor[randint(0, 12)]
                sorteio_naipe_extra_bot = randint(0, 3)
                carta_extra_bot = f'{sorteio_valor_extra_bot} {de} {naipe[sorteio_naipe_extra_bot]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_extra_bot}',
                      f'Carta Extra Sorteada: {carta_extra_bot}')
                print()
                cartas_BOT.append(carta_extra_bot)
                somaFinal_BOT += validacaoLetras(sorteio_valor_extra_bot)
                sleep(1)
                idiom(language,
                      f'Total sum of the cards from BOT: {somaFinal_BOT}.',
                      f'Soma total das cartas do BOT: {somaFinal_BOT}.')
                linha()
                # --------------------- / / ----------------------- #
                if somaFinal_BOT == 21:
                    BOT_venceu(language)
                    derrotas += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if somaFinal_BOT > 21:
                    BOT_Estourou(language)
                    vitoriasP1 += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if comprar == 'N' and somaFinal_BOT == somaFinal_Player1:
                    empate(language)
                    empates += 1
                    status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                    break
                if somaFinal_BOT < somaFinal_Player1:
                    continue
            if comprar == 'N' and somaFinal_BOT > somaFinal_Player1:
                print()
                break
            if comprar == 'S' and somaFinal_BOT > somaFinal_Player1:
                paradaBOT(language, somaFinal_BOT)
                jogadaParada_BOT = 1
                print()
                linha()
                continue
            if comprar == 'S' and somaFinal_BOT > 21:
                BOT_Estourou(language)
                vitoriasP1 += 1
                statusFinal(vitoriasP1, derrotas, empates, partidas)
                continue
            if somaFinal_BOT == somaFinal_Player1 and somaFinal_BOT < 15:
                # --------------------- / / ----------------------- #
                idiom(language,
                      'The BOT is taking a card... ',
                      'O BOT está comprando mais uma carta... ')
                sorteio_valor_extra_bot = valor[randint(0, 12)]
                sorteio_naipe_extra_bot = randint(0, 3)
                carta_extra_bot = f'{sorteio_valor_extra_bot} {de} {naipe[sorteio_naipe_extra_bot]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_extra_bot}',
                      f'Carta Extra Sorteada: {carta_extra_bot}')
                print()
                cartas_BOT.append(carta_extra_bot)
                somaFinal_BOT += validacaoLetras(sorteio_valor_extra_bot)
                sleep(1)
                idiom(language,
                      f'Total sum of the cards from BOT: {somaFinal_BOT}.',
                      f'Soma total das cartas do BOT: {somaFinal_BOT}.')
                linha()
                # --------------------- / / ----------------------- #
                continue
        # DEFININDO VENCEDOR
        if somaFinal_BOT < somaFinal_Player1 < 21 and jogadaParada_BOT == 1:
            vitoria(language)
            vitoriasP1 += 1
            print()
            linha()
        if somaFinal_Player1 < somaFinal_BOT < 21:
            BOT_venceu(language)
            derrotas += 1
            status(language, cartas_BOT, cartas_Player1, vitoriasP1)
        # CONTINUAR
        sleep(1)
        continuar = ''
        while True:
            try:
                if language == 1:
                    continuar = str(input('Do you wish to play again? [Y/N] ')).upper().strip()[0]
                    if continuar == 'Y':
                        continuar = 'S'
                elif language == 2:
                    continuar = str(input('Deseja jogar de novo? [S/N] ')).upper().strip()[0]
            except IndexError:
                msgERROR(language)
                continue
            if continuar.isnumeric():
                msgERROR(language)
                continue
            if continuar == 'S' or continuar == 'N':
                partidas += 1
                break
            else:
                msgERROR(language)
                continue
        if continuar == 'N':
            break
        elif continuar == 'S':
            linha()
            print()
            continue
        # else:
        #     print()
        #     msgERROR(language)
        #     continue
# 1 VS 1 MODE
elif playerMode == 2:
    while True:
        valor = ['A', 2, 3, 4, 5, 6, 7, 8, 9, 10, 'J', 'Q', 'K']
        naipe = []
        if language == 1:
            naipe = ['Spades', 'Hearts', 'Clubs', 'Diamonds']
        elif language == 2:
            naipe = ['Espadas', 'Copas', 'Paus', 'Ouros']
        # SORTEIO DE 2 CARTAS (VALORES E NAIPES) / CARTAS DE ENTRADA
        sorteio_valor1 = valor[randint(0, 12)]
        sorteio_valor2 = valor[randint(0, 12)]
        while True:
            if sorteio_valor2 == sorteio_valor1:
                sorteio_valor2 = valor[randint(0, 12)]
            elif sorteio_valor2 != sorteio_valor1:
                break
        sorteio_valor3 = valor[randint(0, 12)]
        sorteio_valor4 = valor[randint(0, 12)]
        while True:
            if sorteio_valor3 == sorteio_valor4:
                sorteio_valor3 = valor[randint(0, 12)]
            elif sorteio_valor3 != sorteio_valor4:
                break
        sorteio_naipe1 = randint(0, 3)
        sorteio_naipe2 = randint(0, 3)
        while True:
            if sorteio_naipe2 == sorteio_naipe1:
                sorteio_naipe2 = randint(0, 3)
            elif sorteio_naipe2 != sorteio_naipe1:
                break
        sorteio_naipe3 = randint(0, 3)
        sorteio_naipe4 = randint(0, 3)
        while True:
            if sorteio_naipe3 == sorteio_naipe4:
                sorteio_naipe3 = randint(0, 3)
            elif sorteio_naipe3 != sorteio_naipe4:
                break
        # MOSTRANDO O VALOR DA CARTA E O NAIPE
        idiom(language,
              'SHUFFLING THE CARDS...',
              'EMBARALHANDO AS CARTAS...')
        print()
        sleep(1)
        idiom(language,
              f'Cards from the player {nome_Player1}:',
              f'Cartas do jogador {nome_Player1}:')
        sleep(1)
        de = ''
        if language == 1:
            de = 'of'
        elif language == 2:
            de = 'de'
        carta1 = f'{sorteio_valor1} {de} {naipe[sorteio_naipe1]}'
        print(carta1)
        sleep(1)
        carta2 = f'{sorteio_valor2} {de} {naipe[sorteio_naipe2]}'
        print(carta2)
        sleep(1)
        print()
        cartas_Player1 = [carta1, carta2]
        idiom(language,
              f'Cards from the player {nome_Player2}:',
              f'Cartas do jogador {nome_Player2}:')
        sleep(1)
        carta3 = f'{sorteio_valor3} {de} {naipe[sorteio_naipe3]}'
        print(carta3)
        sleep(1)
        carta4 = f'{sorteio_valor4} {de} {naipe[sorteio_naipe4]}'
        print(carta4)
        sleep(1)
        cartas_Player2 = [carta3, carta4]
        # SOMA DAS CARTAS
        print()
        linha()
        soma1 = validacaoLetras(sorteio_valor1) + validacaoLetras(sorteio_valor2)
        soma2 = validacaoLetras(sorteio_valor3) + validacaoLetras(sorteio_valor4)
        if language == 1:
            print(f'The total sum of the two cards from {nome_Player1}: {soma1}')
            print()
            print(f'The total sum of the two cards from {nome_Player2}: {soma2}')
        elif language == 2:
            print(f'Soma total das duas cartas do {nome_Player1}: {soma1}')
            print()
            print(f'Soma total das duas cartas do {nome_Player2}: {soma2}')
        linha()
        # ACUMULADORES 02
        somaFinal_Player1 = soma1 + 0
        somaFinal_Player2 = soma2 + 0
        Player1_Parou = Player2_Parou = 0
        while True:
            comprar_Player1 = ''
            comprar_Player2 = ''
            # -------------------------- PLAYER 1 -------------------------- #
            while Player1_Parou == 0:
                try:
                    if language == 1:
                        comprar_Player1 = str(input(f'Do you wish to take another card {nome_Player1}? [Y/N] ')).upper().strip()[0]
                        if comprar_Player1 == 'Y':
                            comprar_Player1 = 'S'
                    elif language == 2:
                        comprar_Player1 = str(input(f'Deseja comprar mais uma carta {nome_Player1}? [S/N] ')).upper().strip()[0]
                except IndexError:
                    msgERROR(language)
                if comprar_Player1.isnumeric():
                    continue
                elif comprar_Player1 == 'N' or comprar_Player1 == 'S':
                    break
            if comprar_Player1 == 'N' and somaFinal_Player1 < somaFinal_Player2:
                break
            if comprar_Player1 == 'S':
                # COMPRAR CARTA EXTRA
                sorteio_valorExtra = valor[randint(0, 12)]
                sorteio_naipeExtra = randint(0, 3)
                carta_Extra = f'{sorteio_valorExtra} {de} {naipe[sorteio_naipeExtra]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_Extra}.',
                      f'Carta Extra Sorteada: {carta_Extra}.')
                print()
                cartas_Player1.append(carta_Extra)
                somaFinal_Player1 += validacaoLetras(sorteio_valorExtra)
                sleep(1)
                idiom(language,
                      f'Total sum: {somaFinal_Player1}.',
                      f'Soma total: {somaFinal_Player1}.')
                linha()
                if somaFinal_Player1 == 21:
                    vitoriaPlayer1(language, nome_Player1)
                    vitoriasP1 += 1
                    statusPlayers(language, nome_Player1, cartas_Player1, nome_Player2,
                                  cartas_Player2, vitoriasP1, vitoriasP2)
                    break
                if somaFinal_Player1 > 21:
                    Player1_Estourou(language)
                    vitoriasP2 += 1
                    statusPlayers(language, nome_Player1, cartas_Player1, nome_Player2,
                                  cartas_Player2, vitoriasP1, vitoriasP2)
                    break
                if Player2_Parou == 1 and somaFinal_Player1 > somaFinal_Player2:
                    break
                # if somaFinal_Player1 == somaFinal_Player2:
                #     empate(language)
                #     empates += 1
                #     status(language, cartas_BOT, cartas_Player1, vitoriasP1)
                #     break
            elif comprar_Player1 == 'N':
                paradaPlayer1(language, nome_Player1, somaFinal_Player1)
                Player1_Parou = 1
                continue
            if Player1_Parou == 1 and Player2_Parou == 1:
                break
            # -------------------------- PLAYER 2 -------------------------- #
            while Player2_Parou == 0:
                try:
                    if language == 1:
                        comprar_Player2 = \
                            str(input(f'Do you wish to take another card {nome_Player2}? [Y/N] ')).upper().strip()[0]
                        if comprar_Player2 == 'Y':
                            comprar_Player2 = 'S'
                    elif language == 2:
                        comprar_Player2 = \
                            str(input(f'Deseja comprar mais uma carta {nome_Player2}? [S/N] ')).upper().strip()[0]
                except IndexError:
                    msgERROR(language)
                if comprar_Player2.isnumeric():
                    continue
                elif comprar_Player2 == 'N' or comprar_Player2 == 'S':
                    break
            if comprar_Player2 == 'N' and somaFinal_Player1 > somaFinal_Player2:
                break
            if comprar_Player2 == 'S':
                # COMPRAR CARTA EXTRA
                sorteio_valorExtra = valor[randint(0, 12)]
                sorteio_naipeExtra = randint(0, 3)
                carta_Extra = f'{sorteio_valorExtra} {de} {naipe[sorteio_naipeExtra]}'
                sleep(1)
                print()
                idiom(language,
                      f'Extra card: {carta_Extra}.',
                      f'Carta Extra Sorteada: {carta_Extra}.')
                print()
                cartas_Player2.append(carta_Extra)
                somaFinal_Player2 += validacaoLetras(sorteio_valorExtra)
                sleep(1)
                idiom(language,
                      f'Total sum: {somaFinal_Player2}.',
                      f'Soma total: {somaFinal_Player2}.')
                linha()
                if somaFinal_Player2 == 21:
                    vitoriaPlayer2(language, nome_Player2)
                    vitoriasP2 += 1
                    statusPlayers(language, nome_Player1, cartas_Player1, nome_Player2,
                                  cartas_Player2, vitoriasP1, vitoriasP2)
                    break
                if somaFinal_Player2 > 21:
                    Player2_Estourou(language)
                    vitoriasP1 += 1
                    statusPlayers(language, nome_Player1, cartas_Player1, nome_Player2,
                                  cartas_Player2, vitoriasP1, vitoriasP2)
                    break
                if Player1_Parou == 1 and somaFinal_Player2 > somaFinal_Player1:
                    break
                if comprar_Player1 == 'N' and comprar_Player2 == 'N':
                    break
            elif comprar_Player2 == 'N':
                paradaPlayer2(language, nome_Player2, somaFinal_Player2)
                Player2_Parou = 1
                continue
            if Player1_Parou == 1 and Player2_Parou == 1:
                break
        # DECIDINDO VENCEDOR
        if somaFinal_Player2 < somaFinal_Player1 < 21:
            vitoriaPlayer1(language, nome_Player1)
            vitoriasP1 += 1
            statusPlayers(language, nome_Player1, cartas_Player1, nome_Player2,
                          cartas_Player2, vitoriasP1, vitoriasP2)
        elif somaFinal_Player1 < somaFinal_Player2 < 21:
            vitoriaPlayer2(language, nome_Player2)
            vitoriasP2 += 1
            statusPlayers(language, nome_Player1, cartas_Player1, nome_Player2,
                          cartas_Player2, vitoriasP1, vitoriasP2)
        elif somaFinal_Player1 == somaFinal_Player2:
            empate(language)
            empates += 1
            statusPlayers(language, nome_Player1, cartas_Player1, nome_Player2,
                          cartas_Player2, vitoriasP1, vitoriasP2)
        # CONTINUAR
        sleep(1)
        continuar = ''
        while True:
            try:
                if language == 1:
                    continuar = str(input('Do you wish to play another match? [Y/N] ')).upper().strip()[0]
                    if continuar == 'Y':
                        continuar = 'S'
                elif language == 2:
                    continuar = str(input('Deseja jogar outra partida? [S/N] ')).upper().strip()[0]
            except IndexError:
                msgERROR(language)
                continue
            if continuar == 'N' or continuar == 'S':
                partidas += 1
                break
            else:
                msgERROR(language)
                continue
        if continuar == 'N':
            break
        elif continuar == 'S':
            linha()
            print()
            continue
        else:
            print()
            msgERROR(language)
            continue
# ENCERRAMENTO DO PROGRAMA
linha()
sleep(1)
idiom(language,
      'FINALIZING SYSTEM...',
      'ENCERRANDO O PROGRAMA...')
sleep(1)
print()
if playerMode == 1:
    statusFinal(vitoriasP1, derrotas, empates, partidas)
elif playerMode == 2:
    statusFinalPlayers(language, nome_Player1, vitoriasP1, nome_Player2, vitoriasP2, empates, partidas)
sleep(1)
print()
assinatura(language)
sleep(20)
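The single-player loop above repeats the same "BOT takes a card" block five times. A helper like the hypothetical `bot_compra_carta` below shows how those repeated lines could be factored out; `valor`, `naipe`, and `de` mirror the script's variables, `rand_fn` is an injected randomness source for testability, and the face-to-points mapping (A counts 1, court cards count 10) is an assumption about what `validacaoLetras` does, since that function lives in the unshown `funcoes` module.

```python
def bot_compra_carta(valor, naipe, de, rand_fn):
    """Draw one extra card for the BOT; return its label and numeric value.

    rand_fn(n) must return an int in [0, n], like randint(0, n).
    The point values below are an assumed 21-style convention.
    """
    face = valor[rand_fn(12)]
    suit = naipe[rand_fn(3)]
    if face == 'A':
        points = 1                     # assumption: ace counts as 1
    elif face in ('J', 'Q', 'K'):
        points = 10                    # assumption: court cards count as 10
    else:
        points = int(face)             # 2..10 keep their face value
    return f'{face} {de} {suit}', points
```

Each of the five duplicated blocks would then reduce to one call plus the shared printing code.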
# run.py (Xitog/jyx, MIT)
import jyx
jyx.Jyx()
# tests/api/test_mixin_with_tier_configuration_helper.py (JosePaniagua7/connect-processors-toolkit, Apache-2.0)
import pytest
import os
from connect.client import ConnectClient, ClientError
from connect.devops_testing import asserts
from connect.processors_toolkit.requests.tier_configurations import TierConfigurationBuilder
from connect.processors_toolkit.requests import RequestBuilder
from connect.processors_toolkit.api.mixins import WithTierConfigurationHelper
class Helper(WithTierConfigurationHelper):
    def __init__(self, client: ConnectClient):
        self.client = client
BAD_REQUEST_400 = "400 Bad Request"
ASSET_REQUEST_FILE = '/request_asset.json'
TIER_CONFIG_REQUEST_FILE = '/request_tier_configuration.json'
def test_helper_should_retrieve_a_tier_configuration_by_id(sync_client_factory, response_factory):
    tier_on_server = TierConfigurationBuilder()
    tier_on_server.with_tier_configuration_id('TC-9091-4850-9712')
    client = sync_client_factory([
        response_factory(value=tier_on_server.raw(), status=200)
    ])
    tc = Helper(client).find_tier_configuration('TC-9091-4850-9712')
    assert isinstance(tc, TierConfigurationBuilder)
    assert tc.tier_configuration_id() == 'TC-9091-4850-9712'


def test_helper_should_match_all_tier_configurations(sync_client_factory, response_factory):
    content = [
        TierConfigurationBuilder({'id': 'TC-000-000-001'}).raw(),
        TierConfigurationBuilder({'id': 'TC-000-000-002'}).raw()
    ]
    client = sync_client_factory([
        response_factory(count=len(content), value=content)
    ])
    templates = Helper(client).match_tier_configuration({})
    assert len(templates) == 2


def test_helper_should_match_tier_configurations(sync_client_factory, response_factory):
    content = [
        TierConfigurationBuilder({'id': 'TC-000-000-001'}).raw(),
    ]
    client = sync_client_factory([
        response_factory(count=len(content), value=content)
    ])
    templates = Helper(client).match_tier_configuration({'id': 'TC-000-000-001'})
    assert len(templates) == 1
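All of these tests follow the same arrange step: queue canned responses in a stub client, run the helper, assert on the result. The `sync_client_factory` fixture comes from `connect.devops_testing`; purely as an illustration of the idea, a minimal queue-backed stub could look like the `FakeClient` below, which is invented for this sketch and is not part of that package.

```python
class FakeClient:
    """Returns pre-queued responses in order, mimicking a scripted API client."""

    def __init__(self, responses):
        self._responses = list(responses)

    def get(self, _resource_id):
        # Each call consumes the next canned response, like the factory fixture.
        return self._responses.pop(0)
```

A test would then seed it with `FakeClient([{'id': 'TC-1'}])` and assert on what the code under test does with that payload.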
def test_helper_should_retrieve_a_tier_configuration_request_by_id(sync_client_factory, response_factory):
    tier_on_server = TierConfigurationBuilder()
    tier_on_server.with_tier_configuration_id('TC-9091-4850-9712')
    on_server = RequestBuilder()
    on_server.with_id('TCR-9091-4850-9712-001')
    on_server.with_type('setup')
    on_server.with_status('pending')
    on_server.with_tier_configuration(tier_on_server)
    client = sync_client_factory([
        response_factory(value=on_server.raw(), status=200)
    ])
    request = Helper(client).find_tier_configuration_request('TCR-9091-4850-9712-001')
    assert isinstance(request, RequestBuilder)
    assert request.id() == 'TCR-9091-4850-9712-001'


def test_helper_should_match_all_tier_configuration_requests(sync_client_factory, response_factory):
    content = [
        RequestBuilder({'id': 'TCR-000-000-001-001'}).raw(),
        RequestBuilder({'id': 'TCR-000-000-002-002'}).raw()
    ]
    client = sync_client_factory([
        response_factory(count=len(content), value=content)
    ])
    templates = Helper(client).match_tier_configuration_request({})
    assert len(templates) == 2


def test_helper_should_match_tier_configuration_requests(sync_client_factory, response_factory):
    content = [
        RequestBuilder({'id': 'TCR-000-000-001-001'}).raw(),
    ]
    client = sync_client_factory([
        response_factory(count=len(content), value=content)
    ])
    templates = Helper(client).match_tier_configuration_request({'id': 'TCR-000-000-001-001'})
    assert len(templates) == 1
def test_helper_should_approve_a_tier_configuration_request(sync_client_factory, response_factory):
    tier_on_server = TierConfigurationBuilder()
    tier_on_server.with_tier_configuration_id('TC-8027-7606-7082')
    tier_on_server.with_tier_configuration_status('active')
    on_server = RequestBuilder()
    on_server.with_id('TCR-8027-7606-7082-001')
    on_server.with_type('setup')
    on_server.with_status('approved')
    on_server.with_tier_configuration(tier_on_server)
    client = sync_client_factory([
        response_factory(value=on_server.raw(), status=200)
    ])
    tier = on_server.tier_configuration()
    tier.with_tier_configuration_status('processing')
    request = RequestBuilder()
    request.with_id('PR-8027-7606-7082-001')
    request.with_type('setup')
    request.with_status('pending')
    request.with_tier_configuration(tier)
    request = Helper(client).approve_tier_configuration_request(request, 'TL-662-440-096')
    assert request.id() == 'TCR-8027-7606-7082-001'
    asserts.request_status(request.raw(), 'approved')


def test_helper_should_approve_an_already_approved_tier_configuration_request(sync_client_factory, response_factory):
    exception = ClientError(
        message=BAD_REQUEST_400,
        status_code=400,
        error_code="TC_006",
        errors=["Tier configuration request status transition is not allowed."]
    )
    client = sync_client_factory([
        response_factory(exception=exception, status=exception.status_code)
    ])
    tier = TierConfigurationBuilder()
    tier.with_tier_configuration_id('TC-8027-7606-7082')
    tier.with_tier_configuration_status('active')
    request = RequestBuilder()
    request.with_id('TCR-8027-7606-7082-001')
    request.with_type('setup')
    request.with_status('approved')
    request.with_tier_configuration(tier)
    request = Helper(client).approve_tier_configuration_request(request, 'TL-662-440-096')
    assert request.id() == 'TCR-8027-7606-7082-001'
    asserts.request_status(request.raw(), 'approved')
def test_helper_should_fail_approving_a_tier_configuration_request(sync_client_factory, response_factory):
    exception = ClientError(
        message=BAD_REQUEST_400,
        status_code=400,
        error_code="TC_012",
        errors=[
            "There is no tier configuration request template with such id."
        ]
    )
    client = sync_client_factory([
        response_factory(exception=exception, status=exception.status_code)
    ])
    request = RequestBuilder()
    request.with_id('PR-8027-7606-7082-001')
    request.with_tier_configuration(TierConfigurationBuilder())
    with pytest.raises(ClientError):
        Helper(client).approve_tier_configuration_request(request, 'TL-662-440-096')


def test_helper_should_fail_a_tier_configuration_request(sync_client_factory, response_factory):
    reason = 'I don\'t like you :P'
    tier_on_server = TierConfigurationBuilder()
    tier_on_server.with_tier_configuration_id('TC-8027-7606-7082')
    tier_on_server.with_tier_configuration_status('processing')
    on_server = RequestBuilder()
    on_server.with_id('TCR-8027-7606-7082-001')
    on_server.with_type('setup')
    on_server.with_status('failed')
    on_server.with_tier_configuration(tier_on_server)
    on_server.with_reason(reason)
    client = sync_client_factory([
        response_factory(value=on_server.raw(), status=200)
    ])
    request = RequestBuilder()
    request.with_id('TCR-8027-7606-7082-001')
    request.with_status('pending')
    request.with_tier_configuration(tier_on_server)
    request = Helper(client).fail_tier_configuration_request(request, reason)
    assert request.id() == 'TCR-8027-7606-7082-001'
    asserts.request_status(request.raw(), 'failed')
    asserts.request_reason(request.raw(), reason)
def test_helper_should_fail_an_already_failed_tier_configuration_request(sync_client_factory, response_factory):
    exception = ClientError(
        message=BAD_REQUEST_400,
        status_code=400,
        error_code="TC_006",
        errors=["Tier configuration request status transition is not allowed."]
    )
    client = sync_client_factory([
        response_factory(exception=exception, status=exception.status_code)
    ])
    tier = TierConfigurationBuilder()
    tier.with_tier_configuration_id('TC-8027-7606-7082')
    tier.with_tier_configuration_status('processing')
    request = RequestBuilder()
    request.with_id('TCR-8027-7606-7082-001')
    request.with_type('setup')
    request.with_status('failed')
    request.with_tier_configuration(tier)
    request = Helper(client).fail_tier_configuration_request(request, 'It is my will')
    assert request.id() == 'TCR-8027-7606-7082-001'
    asserts.request_status(request.raw(), 'failed')


def test_helper_should_fail_failing_a_tier_configuration_request(sync_client_factory, response_factory):
    exception = ClientError(
        message=BAD_REQUEST_400,
        status_code=400,
        error_code="VAL_001",
        errors=["reason: This field may not be blank."]
    )
    client = sync_client_factory([
        response_factory(exception=exception, status=exception.status_code)
    ])
    request = RequestBuilder()
    request.with_id('TCR-8027-7606-7082-001')
    request.with_tier_configuration(TierConfigurationBuilder())
    with pytest.raises(ClientError):
        Helper(client).fail_tier_configuration_request(request, "")
def test_helper_should_inquire_a_tier_configuration_request(sync_client_factory, response_factory):
    tier = TierConfigurationBuilder()
    tier.with_tier_configuration_id('AS-8027-7606-7082')
    tier.with_tier_configuration_status('processing')
    on_server = RequestBuilder()
    on_server.with_id('TCR-8027-7606-7082-001')
    on_server.with_type('setup')
    on_server.with_status('inquiring')
    on_server.with_tier_configuration(tier)
    client = sync_client_factory([
        response_factory(value=on_server.raw(), status=200)
    ])
    request = RequestBuilder()
    request.with_id('TCR-8027-7606-7082-001')
    request.with_type('setup')
    request.with_status('pending')
    request.with_tier_configuration(tier)
    request = Helper(client).inquire_tier_configuration_request(request)
    assert request.id() == 'TCR-8027-7606-7082-001'
    asserts.request_status(request.raw(), 'inquiring')


def test_helper_should_inquire_an_already_inquired_tier_configuration_request(sync_client_factory, response_factory):
    exception = ClientError(
        message=BAD_REQUEST_400,
        status_code=400,
        error_code="TC_006",
        errors=["Tier configuration request status transition is not allowed."]
    )
    client = sync_client_factory([
        response_factory(exception=exception, status=exception.status_code)
    ])
    tier = TierConfigurationBuilder()
    tier.with_tier_configuration_id('TC-8027-7606-7082')
    tier.with_tier_configuration_status('processing')
    request = RequestBuilder()
    request.with_id('TCR-8027-7606-7082-001')
    request.with_type('setup')
    request.with_status('inquiring')
    request.with_tier_configuration(tier)
    request = Helper(client).inquire_tier_configuration_request(request)
    assert request.id() == 'TCR-8027-7606-7082-001'
    asserts.request_status(request.raw(), 'inquiring')


def test_helper_should_fail_inquiring_a_tier_configuration_request(sync_client_factory, response_factory):
    exception = ClientError(
        message=BAD_REQUEST_400,
        status_code=400,
        error_code="TC_006",
        errors=["Some weird error..."]
    )
    client = sync_client_factory([
        response_factory(exception=exception, status=exception.status_code)
    ])
    request = RequestBuilder()
    request.with_id('TCR-8027-7606-7082-001')
    request.with_tier_configuration(TierConfigurationBuilder())
    with pytest.raises(ClientError):
        Helper(client).inquire_tier_configuration_request(request)
def test_helper_should_update_a_request_tier_configuration_params(sync_client_factory, response_factory, load_json):
on_server = RequestBuilder(load_json(os.path.dirname(__file__) + TIER_CONFIG_REQUEST_FILE))
after_update = RequestBuilder(load_json(os.path.dirname(__file__) + TIER_CONFIG_REQUEST_FILE))
after_update.with_param('TIER_SIGNATURE', 'the-tier-signature-updated')
tier = after_update.tier_configuration()
tier.with_tier_configuration_param('TIER_SIGNATURE', 'the-tier-signature-updated')
after_update.with_tier_configuration(tier)
client = sync_client_factory([
response_factory(value=on_server.raw(), status=200),
response_factory(value=after_update.raw(), status=200)
])
request = RequestBuilder(load_json(os.path.dirname(__file__) + TIER_CONFIG_REQUEST_FILE))
request.with_param('TIER_SIGNATURE', 'the-tier-signature-updated')
print(request.param('TIER_SIGNATURE'))
request = Helper(client).update_tier_configuration_parameters(request)
assert request.raw()['params'][0]['id'] == 'TIER_SIGNATURE'
assert request.raw()['params'][0]['value'] == 'the-tier-signature-updated'
asserts.tier_configuration_param_value_equal(request.raw(), 'TIER_SIGNATURE', 'the-tier-signature-updated')
def test_helper_should_not_update_request_tier_configuration_params(sync_client_factory, response_factory, load_json):
request = RequestBuilder(load_json(os.path.dirname(__file__) + TIER_CONFIG_REQUEST_FILE))
client = sync_client_factory([
response_factory(value=request.raw(), status=200),
])
request = Helper(client).update_tier_configuration_parameters(request)
assert request.raw()['params'][0]['id'] == 'TIER_SIGNATURE'
assert request.raw()['params'][0]['value'] == ''
asserts.tier_configuration_param_value_equal(request.raw(), 'TIER_SIGNATURE', '')
afc27d86b52a94a20737a42eb2ab75d60ff9041d | image_gen/src/module/filters/__init__.py | Ovlic/ovlic.py | MIT

def placeholder():
    print("Placeholder")
bbc4ab1669b61bc4a2fd42bb3f81e3c36812ed77 | memover/__main__.py | Alecktos/Directory-Tree-File-Mover | MIT

from . import main

main.main()
a52842e2e6d2b31fd06aa8008514a036c1e483ba | tests/instagram/tests_instagram_account.py | sgaynetdinov/instasave_bot | MIT

import unittest
from unittest.mock import MagicMock, patch
from urllib.error import HTTPError

from bot.instagram import (Instagram, Instagram404Error, InstagramLinkError,
                           urlopen)


class InstagramAccountTestCase(unittest.TestCase):

    @patch('bot.instagram.urlopen')
    def test_get_photos_and_video_url__single_photo(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        self.assertEqual(len(insta.get_photos_and_video_url()), 1)
        self.assertEqual(insta.get_photos_and_video_url()[0], 'https://scontent-lax3-2.cdninstagram.com/v/account.jpg')

    @patch('bot.instagram.urlopen')
    def test_get_text(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        self.assertEqual(insta.get_text(), "NASA\n\nExplore the universe and discover our home planet with the official NASA Instagram account\nhttp://www.nasa.gov/")

    @patch('bot.instagram.urlopen')
    def test_get_text_if_not_url(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        del insta._content['external_url']
        self.assertEqual(insta.get_text(), "NASA\n\nExplore the universe and discover our home planet with the official NASA Instagram account")

    @patch('bot.instagram.urlopen')
    def test_get_text_if_url_is_None(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        insta._content['external_url'] = None
        self.assertEqual(insta.get_text(), "NASA\n\nExplore the universe and discover our home planet with the official NASA Instagram account")

    @patch('bot.instagram.urlopen')
    def test_get_text_if_not_full_name(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        del insta._content['full_name']
        self.assertEqual(insta.get_text(), "Explore the universe and discover our home planet with the official NASA Instagram account\nhttp://www.nasa.gov/")

    @patch('bot.instagram.urlopen')
    def test_full_name(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        self.assertEqual(insta._full_name, "NASA")

    @patch('bot.instagram.urlopen')
    def test_full_name_if_key_error(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        del insta._content['full_name']
        self.assertEqual(insta._full_name, "")

    @patch('bot.instagram.urlopen')
    def test_text_empty(self, mock):
        with open('tests/instagram/account.json') as fd:
            m = MagicMock()
            m.read.return_value = fd.read().encode()
            mock.return_value = m

        insta = Instagram.from_url('https://www.instagram.com/nasa/')
        insta._content['biography'] = ''
        self.assertEqual(insta.get_text(), 'NASA')
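Every test in this class builds the same three-line fake HTTP response before exercising `Instagram.from_url`; here is that fixture pattern in isolation (the JSON payload below is a made-up stand-in for `account.json`, not the real fixture):

```python
from unittest.mock import MagicMock

# Build a fake HTTP response object whose read() returns bytes,
# mirroring the fixture repeated in every test above.
payload = '{"full_name": "NASA"}'
m = MagicMock()
m.read.return_value = payload.encode()

# Anything that calls response.read() now gets the canned bytes.
print(m.read().decode())  # {"full_name": "NASA"}
```

Patching `bot.instagram.urlopen` to return `m` (as the `@patch` decorator does above) makes the code under test consume this canned body instead of hitting the network.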
a56e2623f82bbcd348b15b560c6aea3a58f45033 | folder_compiler_static_website/__init__.py | d-krupke/folder_compiler_static_website | MIT

from .bibtex_processor import BibtexProcessor
from .html_processor import HtmlProcessor
from .jinja_processor import JinjaProcessor
from .markdown_processor import MarkdownProcessor
a571e69b4d7df696efa313b435e93278ca472564 | financialmodelingprep/technical_indicators.py | Porter97/financialmodelingprep-python | MIT

from financialmodelingprep.decorator import get_json_data
class technical_indicators():

    BASE_URL = 'https://financialmodelingprep.com'
    API_KEY = ''

    def __init__(self, API_KEY):
        self.API = API_KEY

    @get_json_data
    def stock_price(self, ticker: str, interval: str, period: str, indicator_type: str):
        '''
        Technical indicator time series.
        interval: | 1min | 5min | 15min | 30min | 1hour | 4hour | daily |
        type: | SMA | EMA | WMA | DEMA | TEMA | williams | RSI | ADX | standardDeviation
        '''
        return f'{self.BASE_URL}/api/v3/technical_indicator/{interval}/{ticker}?period={period}&type={indicator_type}&apikey={self.API}'
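A standalone sketch of the endpoint string `stock_price` assembles; `'AAPL'` and `'demo-key'` are illustrative values, not from the original, and the final parameter is joined with `&` so the query string stays well-formed:

```python
BASE_URL = 'https://financialmodelingprep.com'

def indicator_url(ticker: str, interval: str, period: str,
                  indicator_type: str, api_key: str) -> str:
    # Mirrors the f-string returned by technical_indicators.stock_price.
    return (f'{BASE_URL}/api/v3/technical_indicator/{interval}/{ticker}'
            f'?period={period}&type={indicator_type}&apikey={api_key}')

url = indicator_url('AAPL', 'daily', '10', 'SMA', 'demo-key')
print(url)
```

The `@get_json_data` decorator in the class presumably fetches this URL and parses the JSON body; the sketch only covers the string construction.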
36fd9674d88861f8bfe00e6a3d8d2175a079ecb9 | kissim/encoding/__init__.py | volkamerlab/kissim | MIT

"""
Encode kinase pockets as subpocket-based structural fingerprint.
"""
from .fingerprint_base import FingerprintBase
from .fingerprint import Fingerprint
from .fingerprint_normalized import FingerprintNormalized
from .fingerprint_generator_base import FingerprintGeneratorBase
from .fingerprint_generator import FingerprintGenerator
from .fingerprint_generator_normalized import FingerprintGeneratorNormalized
3c230d1d9882dafe642cfe266bafd27f94197090 | backend/initiatives/admin/__init__.py | danesjenovdan/izboljsajmo-maribor | CC0-1.0

from .admin import *
from .initiative_admin import *
3c4ecbeb702dd316495f951526939660fda7f495 | eggdriver/resources/math/__init__.py | PythonForChange/eggdriver | MIT

from eggdriver.resources.math.algorithms import solve, root
from eggdriver.resources.math.theoric import *
from eggdriver.resources.math.float import *
from eggdriver.resources.math.polynomial import Polynomial
from eggdriver.resources.math.constants import inf, e, pi
from eggdriver.resources.math.functions import log, ln, cos, sin, tan
from eggdriver.resources.math.linear import Vector, Matrix
from eggdriver.resources.math.calculus import *
3c5c6e9990c7d4fac835a2c442c43a70119e4610 | octopus_energy_api/__init__.py | euanacampbell/octopus_energy_api | MIT

from octopus_energy_api.octopus_energy_api import oe_api
3c8cd5713c79193a7c627102b773a6f60f20bba3 | log/__init__.py | alexsmith2910/Strat_UN | MIT, Unlicense

from .log import log_write, error_write
b1bc60f136215ff744593f4d700e18afb7b549c6 | venv/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py | GiulianaPola/select_repeats | MIT

/home/runner/.cache/pip/pool/7e/65/ee/200d991e43cd86a0907aab8ec16d798d074658390e3c0e8a43423cd2cf
b1c55e31c4de2cc4776b0ab7dbbbef1b7b6e1eaf | PythonTestOne.py | HackTheAR/Transit | Apache-2.0

import os

print("works")
3ca2d0cbce80b402a14dc75dcab5623b9fd678ea | Read_write_balorc_desp.py | luizschmall/tce_siconfi_inconsistencies | MIT

import pandas
import string
import math
import csv
import os
from unicodedata import normalize
def remover_acentos(txt):
    return normalize('NFKD', txt).encode('ASCII', 'ignore').decode('ASCII')
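The NFKD-then-ASCII recipe above strips the accents from any key the parser searches for; a self-contained copy (renamed `strip_accents` here to avoid clashing with the original):

```python
from unicodedata import normalize

def strip_accents(txt):
    # Same recipe as remover_acentos: NFKD-decompose, then drop the
    # combining marks and other non-ASCII characters.
    return normalize('NFKD', txt).encode('ASCII', 'ignore').decode('ASCII')

print(strip_accents('AMORTIZAÇÃO DA DÍVIDA'))  # AMORTIZACAO DA DIVIDA
```

This is what lets keys like `'AMORTIZACAO DA DIVIDA'` match the accented Portuguese text in the source CSVs.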
def containsNumber(line):
    res = False
    numero = 0
    if any(i.isdigit() for i in str(line)):
        res = True
        line = str(line).split(" ")
        for l in line:
            if any(k.isdigit() for k in l):
                # Normalize Brazilian notation: '1.234,56' -> '1234.56'
                l = l.replace(".", "")
                l = l.replace(",", "")
                l = l[:-2] + "." + l[-2:]
                try:
                    numero = float(l)
                except ValueError:
                    numero = 0
    return res, numero
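The three `replace`/slice lines above convert Brazilian thousands/decimal notation into a parseable float; that conversion isolated into a hypothetical helper (`br_to_float` is not in the original):

```python
def br_to_float(token: str) -> float:
    # Mirrors the normalization inside containsNumber: strip thousands
    # dots and the decimal comma, then reinsert a dot before the last
    # two digits (Brazilian '1.234,56' -> 1234.56).
    digits = token.replace(".", "").replace(",", "")
    return float(digits[:-2] + "." + digits[-2:])

print(br_to_float("1.234,56"))  # 1234.56
```

Note the hard assumption, inherited from the original: the token always carries exactly two decimal digits.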
def buscaKeyParts(diretorio, file, key):
    df = pandas.read_csv(diretorio + file, names=list(range(0, 10)))
    mask = df.applymap(lambda x: key.upper() in remover_acentos(str(x).upper()))
    # print(mask)
    df1 = df[mask.any(axis=1)]
    print(df1)
    i = 0
    j = 0
    resultado = [0, 0, 0, 0, 0, 0]
    if df1.empty == False:
        for (columnName, columnData) in df1.iteritems():
            if key.upper() in remover_acentos(str(columnData.values[0]).upper()):
                j = 1
            print('Column Name : ', columnName)
            print('Column Contents : ', columnData.values)
            if j == 1 and columnData.values[0] and (
                    isinstance(columnData.values[0], float) and math.isnan(columnData.values[0])) == False:
                containnumber1, containnumber2 = containsNumber(columnData.values[0])
                print('contain number : ', containnumber1, containnumber2)
                if containnumber1 == True and i < 6:
                    resultado[i] = containnumber2
                    i += 1
    return resultado
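The `applymap` mask plus `mask.any(axis=1)` is the heart of `buscaKeyParts`: it keeps only the rows where some cell contains the key. A minimal standalone illustration on a two-row stand-in frame (the data is invented, not from the real balance-sheet CSVs):

```python
import pandas as pd

# Tiny stand-in for a parsed balance-sheet table.
df = pd.DataFrame([
    ["DESPESAS CORRENTES", "1.000,00"],
    ["RECEITAS", "2,00"],
])

# Cell-wise match, then keep the rows where any cell hit the key,
# exactly as buscaKeyParts does before reading values out.
mask = df.applymap(lambda x: "DESPESAS" in str(x).upper())
rows = df[mask.any(axis=1)]
print(len(rows))  # 1
```

Note that `DataFrame.applymap` and `DataFrame.iteritems` (used above) are deprecated in recent pandas in favor of `DataFrame.map` and `DataFrame.items`; the original code targets an older pandas.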
def main():
    diretorio = "C:\\Users\\schmall\\Documents\\FGV\\Tese\\Balanços_PI\\BALORC\\ORIG\\RESULT2_despesas\\"
    files = os.listdir(diretorio)
    csv_files = [f for f in files if f.endswith('.csv')]
    files2 = [d for d in csv_files if 'tables' in d]
    new = ""
    despesas_correntes = [" ", " ", " ", " ", " ", " "]
    pessoal_encargos_sociais = [" ", " ", " ", " ", " ", " "]
    juros_encargos_divida = [" ", " ", " ", " ", " ", " "]
    outras_despesas_correntes = [" ", " ", " ", " ", " ", " "]
    despesas_capital = [" ", " ", " ", " ", " ", " "]
    investimentos = [" ", " ", " ", " ", " ", " "]
    inversoes_financeiras = [" ", " ", " ", " ", " ", " "]
    amortizacao_divida = [" ", " ", " ", " ", " ", " "]
    reserva_contingencia = [" ", " ", " ", " ", " ", " "]
    reserva_rpps = [" ", " ", " ", " ", " ", " "]
    subtotal_despesas = [" ", " ", " ", " ", " ", " "]
    amortizacao_divida_refinanciamento = [" ", " ", " ", " ", " ", " "]
    subtotal_refinanciamento_d = [" ", " ", " ", " ", " ", " "]
    for file in files2:
        print(file)
        file_parts = file.split(".")
        if file_parts[0] != new:
            with open(diretorio + new + "_tratado.csv", mode='a+') as balorc_file:
                balorc_writer = csv.writer(balorc_file, delimiter=';', quoting=csv.QUOTE_NONNUMERIC)
                if despesas_correntes[0] == 0 and despesas_correntes[1] == 0:
                    balorc_writer.writerow(['DESPESAS CORRENTES', despesas_correntes[0], despesas_correntes[1], despesas_correntes[2], despesas_correntes[3], despesas_correntes[4], despesas_correntes[5]])
                if pessoal_encargos_sociais[0] == 0 and pessoal_encargos_sociais[1] == 0:
                    balorc_writer.writerow(['PESSOAL E ENCARGOS SOCIAIS', pessoal_encargos_sociais[0], pessoal_encargos_sociais[1], pessoal_encargos_sociais[2], pessoal_encargos_sociais[3], pessoal_encargos_sociais[4], pessoal_encargos_sociais[5]])
                if juros_encargos_divida[0] == 0 and juros_encargos_divida[1] == 0:
                    balorc_writer.writerow(['JUROS E ENCARGOS DA DIVIDA', juros_encargos_divida[0], juros_encargos_divida[1], juros_encargos_divida[2], juros_encargos_divida[3], juros_encargos_divida[4], juros_encargos_divida[5]])
                if outras_despesas_correntes[0] == 0 and outras_despesas_correntes[1] == 0:
                    balorc_writer.writerow(['OUTRAS DESPESAS CORRENTES', outras_despesas_correntes[0], outras_despesas_correntes[1], outras_despesas_correntes[2], outras_despesas_correntes[3], outras_despesas_correntes[4], outras_despesas_correntes[5]])
                if despesas_capital[0] == 0 and despesas_capital[1] == 0:
                    balorc_writer.writerow(['DESPESAS DE CAPITAL', despesas_capital[0], despesas_capital[1], despesas_capital[2], despesas_capital[3], despesas_capital[4], despesas_capital[5]])
                if investimentos[0] == 0 and investimentos[1] == 0:
                    balorc_writer.writerow(['INVESTIMENTOS', investimentos[0], investimentos[1], investimentos[2], investimentos[3], investimentos[4], investimentos[5]])
                if inversoes_financeiras[0] == 0 and inversoes_financeiras[1] == 0:
                    balorc_writer.writerow(
                        ['INVERSOES FINANCEIRAS', inversoes_financeiras[0], inversoes_financeiras[1], inversoes_financeiras[2], inversoes_financeiras[3], inversoes_financeiras[4], inversoes_financeiras[5]])
                if amortizacao_divida[0] == 0 and amortizacao_divida[1] == 0:
                    balorc_writer.writerow(['AMORTIZACAO DA DIVIDA', amortizacao_divida[0], amortizacao_divida[1], amortizacao_divida[2], amortizacao_divida[3], amortizacao_divida[4], amortizacao_divida[5]])
                if reserva_contingencia[0] == 0 and reserva_contingencia[1] == 0:
                    balorc_writer.writerow(['RESERVA DE CONTINGENCIA', reserva_contingencia[0], reserva_contingencia[1], reserva_contingencia[2], reserva_contingencia[3], reserva_contingencia[4], reserva_contingencia[5]])
                if reserva_rpps[0] == 0 and reserva_rpps[1] == 0:
                    balorc_writer.writerow(['RESERVA DO RPPS', reserva_rpps[0], reserva_rpps[1], reserva_rpps[2], reserva_rpps[3], reserva_rpps[4], reserva_rpps[5]])
                if subtotal_despesas[0] == 0 and subtotal_despesas[1] == 0:
                    balorc_writer.writerow(['SUBTOTAL DAS DESPESAS', subtotal_despesas[0], subtotal_despesas[1], subtotal_despesas[2], subtotal_despesas[3], subtotal_despesas[4], subtotal_despesas[5]])
                if amortizacao_divida_refinanciamento[0] == 0 and amortizacao_divida_refinanciamento[1] == 0:
                    balorc_writer.writerow(
                        ['AMORTIZACAO DA DIVIDA - REFINANCIAMENTO', amortizacao_divida_refinanciamento[0], amortizacao_divida_refinanciamento[1], amortizacao_divida_refinanciamento[2], amortizacao_divida_refinanciamento[3], amortizacao_divida_refinanciamento[4], amortizacao_divida_refinanciamento[5]])
                if subtotal_refinanciamento_d[0] == 0 and subtotal_refinanciamento_d[1] == 0:
                    balorc_writer.writerow(['SUBTOTAL COM REFINANCIAMENTO (XV)', subtotal_refinanciamento_d[0], subtotal_refinanciamento_d[1], subtotal_refinanciamento_d[2], subtotal_refinanciamento_d[3], subtotal_refinanciamento_d[4], subtotal_refinanciamento_d[5]])
            new = file_parts[0]
            with open(diretorio + file_parts[0] + "_tratado.csv", mode='w+') as balorc_file:
                balorc_writer = csv.writer(balorc_file, delimiter=';', quoting=csv.QUOTE_NONNUMERIC)
                balorc_writer.writerow(["Key", "1", "2", "3", "4", "5", "6"])
                despesas_correntes = buscaKeyParts(diretorio, file, 'DESPESAS CORRENTES')
                print("despesas_correntes", despesas_correntes)
                if despesas_correntes[0] != 0 or despesas_correntes[1] != 0:
                    balorc_writer.writerow(['DESPESAS CORRENTES', despesas_correntes[0], despesas_correntes[1], despesas_correntes[2], despesas_correntes[3], despesas_correntes[4], despesas_correntes[5]])
                pessoal_encargos_sociais = buscaKeyParts(diretorio, file, 'PESSOAL E ENCARGOS SOCIAIS')
                print("pessoal_encargos_sociais", pessoal_encargos_sociais)
                if pessoal_encargos_sociais[0] != 0 or pessoal_encargos_sociais[1] != 0:
                    balorc_writer.writerow(['PESSOAL E ENCARGOS SOCIAIS', pessoal_encargos_sociais[0], pessoal_encargos_sociais[1], pessoal_encargos_sociais[2], pessoal_encargos_sociais[3], pessoal_encargos_sociais[4], pessoal_encargos_sociais[5]])
                juros_encargos_divida = buscaKeyParts(diretorio, file, 'JUROS E ENCARGOS DA DIVIDA')
                print("juros_encargos_divida", juros_encargos_divida)
                if juros_encargos_divida[0] != 0 or juros_encargos_divida[1] != 0:
                    balorc_writer.writerow(['JUROS E ENCARGOS DA DIVIDA', juros_encargos_divida[0], juros_encargos_divida[1], juros_encargos_divida[2], juros_encargos_divida[3], juros_encargos_divida[4], juros_encargos_divida[5]])
                outras_despesas_correntes = buscaKeyParts(diretorio, file, 'OUTRAS DESPESAS CORRENTES')
                print("outras_despesas_correntes", outras_despesas_correntes)
                if outras_despesas_correntes[0] != 0 or outras_despesas_correntes[1] != 0:
                    balorc_writer.writerow(['OUTRAS DESPESAS CORRENTES', outras_despesas_correntes[0], outras_despesas_correntes[1], outras_despesas_correntes[2], outras_despesas_correntes[3], outras_despesas_correntes[4], outras_despesas_correntes[5]])
                despesas_capital = buscaKeyParts(diretorio, file, 'DESPESAS DE CAPITAL')
                print("despesas_capital", despesas_capital)
                if despesas_capital[0] != 0 or despesas_capital[1] != 0:
                    balorc_writer.writerow(['DESPESAS DE CAPITAL', despesas_capital[0], despesas_capital[1], despesas_capital[2], despesas_capital[3], despesas_capital[4], despesas_capital[5]])
                investimentos = buscaKeyParts(diretorio, file, 'INVESTIMENTOS')
                print("investimentos", investimentos)
                if investimentos[0] != 0 or investimentos[1] != 0:
                    balorc_writer.writerow(['INVESTIMENTOS', investimentos[0], investimentos[1], investimentos[2], investimentos[3], investimentos[4], investimentos[5]])
                inversoes_financeiras = buscaKeyParts(diretorio, file, 'INVERSOES FINANCEIRAS')
                print("inversoes_financeiras", inversoes_financeiras)
                if inversoes_financeiras[0] != 0 or inversoes_financeiras[1] != 0:
                    balorc_writer.writerow(
                        ['INVERSOES FINANCEIRAS', inversoes_financeiras[0], inversoes_financeiras[1], inversoes_financeiras[2], inversoes_financeiras[3], inversoes_financeiras[4], inversoes_financeiras[5]])
                amortizacao_divida = buscaKeyParts(diretorio, file, 'AMORTIZACAO DA DIVIDA')
                print("amortizacao_divida", amortizacao_divida)
                if amortizacao_divida[0] != 0 or amortizacao_divida[1] != 0:
                    balorc_writer.writerow(['AMORTIZACAO DA DIVIDA', amortizacao_divida[0], amortizacao_divida[1], amortizacao_divida[2], amortizacao_divida[3], amortizacao_divida[4], amortizacao_divida[5]])
                reserva_contingencia = buscaKeyParts(diretorio, file, 'RESERVA DE CONTINGENCIA')
                print("reserva_contingencia", reserva_contingencia)
                if reserva_contingencia[0] != 0 or reserva_contingencia[1] != 0:
                    balorc_writer.writerow(['RESERVA DE CONTINGENCIA', reserva_contingencia[0], reserva_contingencia[1], reserva_contingencia[2], reserva_contingencia[3], reserva_contingencia[4], reserva_contingencia[5]])
                reserva_rpps = buscaKeyParts(diretorio, file, 'RESERVA DO RPPS')
                print("reserva_rpps", reserva_rpps)
                if reserva_rpps[0] != 0 or reserva_rpps[1] != 0:
                    balorc_writer.writerow(['RESERVA DO RPPS', reserva_rpps[0], reserva_rpps[1], reserva_rpps[2], reserva_rpps[3], reserva_rpps[4], reserva_rpps[5]])
                subtotal_despesas = buscaKeyParts(diretorio, file, 'SUBTOTAL DAS DESPESAS')
                print("subtotal_despesas", subtotal_despesas)
                if subtotal_despesas[0] != 0 or subtotal_despesas[1] != 0:
                    balorc_writer.writerow(['SUBTOTAL DAS DESPESAS', subtotal_despesas[0], subtotal_despesas[1], subtotal_despesas[2], subtotal_despesas[3], subtotal_despesas[4], subtotal_despesas[5]])
                amortizacao_divida_refinanciamento = buscaKeyParts(diretorio, file, 'AMORTIZACAO DA DIVIDA - REFINANCIAMENTO')
                print("amortizacao_divida_refinanciamento", amortizacao_divida_refinanciamento)
                if amortizacao_divida_refinanciamento[0] != 0 or amortizacao_divida_refinanciamento[1] != 0:
                    balorc_writer.writerow(
                        ['AMORTIZACAO DA DIVIDA - REFINANCIAMENTO', amortizacao_divida_refinanciamento[0], amortizacao_divida_refinanciamento[1], amortizacao_divida_refinanciamento[2], amortizacao_divida_refinanciamento[3], amortizacao_divida_refinanciamento[4], amortizacao_divida_refinanciamento[5]])
                subtotal_refinanciamento_d = buscaKeyParts(diretorio, file, 'SUBTOTAL COM REFINANCIAMENTO (XV)')
                print("subtotal_refinanciamento_d", subtotal_refinanciamento_d)
                if subtotal_refinanciamento_d[0] != 0 or subtotal_refinanciamento_d[1] != 0:
                    balorc_writer.writerow(['SUBTOTAL COM REFINANCIAMENTO (XV)', subtotal_refinanciamento_d[0], subtotal_refinanciamento_d[1], subtotal_refinanciamento_d[2], subtotal_refinanciamento_d[3], subtotal_refinanciamento_d[4], subtotal_refinanciamento_d[5]])
else:
with open(diretorio + file_parts[0] + "_tratado.csv", mode='a+') as balorc_file:
balorc_writer = csv.writer(balorc_file, delimiter=';', quoting=csv.QUOTE_NONNUMERIC)
if despesas_correntes[0] == 0 and despesas_correntes[1] == 0:
despesas_correntes = buscaKeyParts(diretorio, file, 'DESPESAS CORRENTES')
print("despesas_correntes", despesas_correntes)
if despesas_correntes[0] != 0 or despesas_correntes[1] != 0:
balorc_writer.writerow(['DESPESAS CORRENTES', despesas_correntes[0], despesas_correntes[1], despesas_correntes[2], despesas_correntes[3], despesas_correntes[4], despesas_correntes[5]])
if pessoal_encargos_sociais[0] == 0 and pessoal_encargos_sociais[1] == 0:
pessoal_encargos_sociais = buscaKeyParts(diretorio, file, 'PESSOAL E ENCARGOS SOCIAIS')
print("pessoal_encargos_sociais", pessoal_encargos_sociais)
if pessoal_encargos_sociais[0] != 0 or pessoal_encargos_sociais[1] != 0:
balorc_writer.writerow(['PESSOAL E ENCARGOS SOCIAIS', pessoal_encargos_sociais[0], pessoal_encargos_sociais[1], pessoal_encargos_sociais[2], pessoal_encargos_sociais[3], pessoal_encargos_sociais[4], pessoal_encargos_sociais[5]])
if juros_encargos_divida[0] == 0 and juros_encargos_divida[1] == 0:
juros_encargos_divida = buscaKeyParts(diretorio, file, 'JUROS E ENCARGOS DA DIVIDA')
print("juros_encargos_divida", juros_encargos_divida)
if juros_encargos_divida[0] != 0 or juros_encargos_divida[1] != 0:
balorc_writer.writerow(['JUROS E ENCARGOS DA DIVIDA', juros_encargos_divida[0], juros_encargos_divida[1], juros_encargos_divida[2], juros_encargos_divida[3], juros_encargos_divida[4], juros_encargos_divida[5]])
if outras_despesas_correntes[0] == 0 and outras_despesas_correntes[1] == 0:
outras_despesas_correntes = buscaKeyParts(diretorio, file, 'OUTRAS DESPESAS CORRENTES')
print("outras_despesas_correntes", outras_despesas_correntes)
if outras_despesas_correntes[0] != 0 or outras_despesas_correntes[1] != 0:
balorc_writer.writerow(['OUTRAS DESPESAS CORRENTES', outras_despesas_correntes[0], outras_despesas_correntes[1], outras_despesas_correntes[2], outras_despesas_correntes[3], outras_despesas_correntes[4], outras_despesas_correntes[5]])
if despesas_capital[0] == 0 and despesas_capital[1] == 0:
despesas_capital = buscaKeyParts(diretorio, file, 'DESPESAS DE CAPITAL')
print("despesas_capital", despesas_capital)
if despesas_capital[0] != 0 or despesas_capital[1] != 0:
balorc_writer.writerow(['DESPESAS DE CAPITAL', despesas_capital[0], despesas_capital[1], despesas_capital[2], despesas_capital[3], despesas_capital[4], despesas_capital[5]])
if investimentos[0] == 0 and investimentos[1] == 0:
investimentos = buscaKeyParts(diretorio, file, 'INVESTIMENTOS')
print("investimentos", investimentos)
if investimentos[0] != 0 or investimentos[1] != 0:
balorc_writer.writerow(['INVESTIMENTOS', investimentos[0], investimentos[1], investimentos[2], investimentos[3], investimentos[4], investimentos[5]])
if inversoes_financeiras[0] == 0 and inversoes_financeiras[1] == 0:
inversoes_financeiras = buscaKeyParts(diretorio, file, 'INVERSOES FINANCEIRAS')
print("inversoes_financeiras", inversoes_financeiras)
if inversoes_financeiras[0] != 0 or inversoes_financeiras[1] != 0:
balorc_writer.writerow(
['INVERSOES FINANCEIRAS', inversoes_financeiras[0], inversoes_financeiras[1], inversoes_financeiras[2], inversoes_financeiras[3], inversoes_financeiras[4], inversoes_financeiras[5]])
if amortizacao_divida[0] == 0 and amortizacao_divida[1] == 0:
amortizacao_divida = buscaKeyParts(diretorio, file, 'AMORTIZACAO DA DIVIDA')
print("amortizacao_divida", amortizacao_divida)
if amortizacao_divida[0] != 0 or amortizacao_divida[1] != 0:
balorc_writer.writerow(['AMORTIZACAO DA DIVIDA', amortizacao_divida[0], amortizacao_divida[1], amortizacao_divida[2], amortizacao_divida[3], amortizacao_divida[4], amortizacao_divida[5]])
if reserva_contingencia[0] == 0 and reserva_contingencia[1] == 0:
reserva_contingencia = buscaKeyParts(diretorio, file, 'RESERVA DE CONTINGENCIA')
print("reserva_contingencia", reserva_contingencia)
if reserva_contingencia[0] != 0 or reserva_contingencia[1] != 0:
balorc_writer.writerow(['RESERVA DE CONTINGENCIA', reserva_contingencia[0], reserva_contingencia[1], reserva_contingencia[2], reserva_contingencia[3], reserva_contingencia[4], reserva_contingencia[5]])
if reserva_rpps[0] == 0 and reserva_rpps[1] == 0:
reserva_rpps = buscaKeyParts(diretorio, file, 'RESERVA DO RPPS')
print("reserva_rpps", reserva_rpps)
if reserva_rpps[0] != 0 or reserva_rpps[1] != 0:
balorc_writer.writerow(['RESERVA DO RPPS', reserva_rpps[0], reserva_rpps[1], reserva_rpps[2], reserva_rpps[3], reserva_rpps[4], reserva_rpps[5]])
if subtotal_despesas[0] == 0 and subtotal_despesas[1] == 0:
subtotal_despesas = buscaKeyParts(diretorio, file, 'SUBTOTAL DAS DESPESAS')
print("subtotal_despesas", subtotal_despesas)
if subtotal_despesas[0] != 0 or subtotal_despesas[1] != 0:
balorc_writer.writerow(['SUBTOTAL DAS DESPESAS', subtotal_despesas[0], subtotal_despesas[1], subtotal_despesas[2], subtotal_despesas[3], subtotal_despesas[4], subtotal_despesas[5]])
if amortizacao_divida_refinanciamento[0] == 0 and amortizacao_divida_refinanciamento[1] == 0:
amortizacao_divida_refinanciamento = buscaKeyParts(diretorio, file, 'REFINANCIAMENTO DA DIVIDA - REFINANCIAMENTO')
print("amortizacao_divida_refinanciamento", amortizacao_divida_refinanciamento)
if amortizacao_divida_refinanciamento[0] != 0 or amortizacao_divida_refinanciamento[1] != 0:
balorc_writer.writerow(
['AMORTIZACAO DA DIVIDA - REFINANCIAMENTO', amortizacao_divida_refinanciamento[0], amortizacao_divida_refinanciamento[1], amortizacao_divida_refinanciamento[2], amortizacao_divida_refinanciamento[3], amortizacao_divida_refinanciamento[4], amortizacao_divida_refinanciamento[5]])
if subtotal_refinanciamento_d[0] == 0 and subtotal_refinanciamento_d[1] == 0:
subtotal_refinanciamento_d = buscaKeyParts(diretorio, file, 'SUBTOTAL COM REFINANCIAMENTO (XV)')
print("subtotal_refinanciamento_d", subtotal_refinanciamento_d)
if subtotal_refinanciamento_d[0] != 0 or subtotal_refinanciamento_d[1] != 0:
balorc_writer.writerow(['SUBTOTAL COM REFINANCIAMENTO (XV)', subtotal_refinanciamento_d[0], subtotal_refinanciamento_d[1], subtotal_refinanciamento_d[2], subtotal_refinanciamento_d[3], subtotal_refinanciamento_d[4], subtotal_refinanciamento_d[5]])
if __name__ == "__main__":
main()
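The tail of the script above repeats one fetch → print → conditional-write pattern per budget heading, in both the primary and the fallback branch. A hedged refactoring sketch (`write_key_rows` and its parameters are illustrative names; it assumes `buscaKeyParts` returns a six-value list, as the calls above suggest):

```python
def write_key_rows(writer, busca, diretorio, file, keys, cached=None):
    """Write one CSV row per budget heading whose first two values are non-zero.

    `busca` plays the role of buscaKeyParts above; `cached` optionally maps a
    heading to an already-fetched list, which is re-fetched only when its two
    leading values are zero -- mirroring the fallback else-branch above.
    """
    cached = cached or {}
    for key in keys:
        values = cached.get(key)
        if values is None or (values[0] == 0 and values[1] == 0):
            values = busca(diretorio, file, key)
        print(key, values)
        if values[0] != 0 or values[1] != 0:
            writer.writerow([key] + list(values[:6]))
```

Each repeated block then collapses to one entry in `keys`, e.g. `['DESPESAS CORRENTES', 'PESSOAL E ENCARGOS SOCIAIS', ...]`.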
| 75.141869 | 311 | 0.645607 | 2,311 | 21,716 | 5.791 | 0.061878 | 0.09654 | 0.059777 | 0.040798 | 0.891429 | 0.889487 | 0.875962 | 0.875962 | 0.875962 | 0.870881 | 0 | 0.029775 | 0.242172 | 21,716 | 288 | 312 | 75.402778 | 0.783436 | 0.000507 | 0 | 0.659389 | 0 | 0 | 0.109689 | 0.018165 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017467 | false | 0 | 0.026201 | 0.004367 | 0.056769 | 0.135371 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3cffc05e45986d3dc42e910e5174e501b7b56aa8 | 195 | py | Python | src/dummynet/__init__.py | steinwurf/dummynet-python | 4c72ed4c3a217ec6c69dfa00032b275b9bd6a40e | ["BSD-3-Clause"] | null | null | null | src/dummynet/__init__.py | steinwurf/dummynet-python | 4c72ed4c3a217ec6c69dfa00032b275b9bd6a40e | ["BSD-3-Clause"] | null | null | null | src/dummynet/__init__.py | steinwurf/dummynet-python | 4c72ed4c3a217ec6c69dfa00032b275b9bd6a40e | ["BSD-3-Clause"] | null | null | null | from .dummy_net import DummyNet
from .dummy_net_factory import DummyNetFactory
from .namespace_shell import NamespaceShell
from .docker_shell import DockerShell
from .host_shell import HostShell
| 32.5 | 46 | 0.871795 | 26 | 195 | 6.307692 | 0.538462 | 0.20122 | 0.146341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 195 | 5 | 47 | 39 | 0.937143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5956ec6e29720d187b0b60d95bd9bac544029e78 | 153 | py | Python | 905. Sort Array By Parity/905. Sort Array By Parity.py | JawadAsif/leetcode | 13046653d8203ee3d8b524b7402f59c4e2bec7d0 | ["MIT"] | null | null | null | 905. Sort Array By Parity/905. Sort Array By Parity.py | JawadAsif/leetcode | 13046653d8203ee3d8b524b7402f59c4e2bec7d0 | ["MIT"] | null | null | null | 905. Sort Array By Parity/905. Sort Array By Parity.py | JawadAsif/leetcode | 13046653d8203ee3d8b524b7402f59c4e2bec7d0 | ["MIT"] | null | null | null | class Solution(object):
def sortArrayByParity(self, A):
return ([x for x in A if x % 2 == 0] +
[x for x in A if x % 2 == 1])
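The same even-before-odd partition can also be expressed as a stable sort on parity; since Python's sort is stable, the relative order within each group is preserved exactly as in the two comprehensions above (function name here is illustrative):

```python
def sort_array_by_parity(nums):
    # Evens map to key 0 and come first, odds to key 1 and come after;
    # stability keeps the original order inside each group, so the output
    # matches concatenating the two comprehensions.
    return sorted(nums, key=lambda x: x % 2)
```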
| 30.6 | 46 | 0.496732 | 26 | 153 | 2.923077 | 0.576923 | 0.105263 | 0.131579 | 0.184211 | 0.315789 | 0.315789 | 0.315789 | 0.315789 | 0 | 0 | 0 | 0.041667 | 0.372549 | 153 | 4 | 47 | 38.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
abd2968bb5d2e8f41794423f3f052d1d4e7bd3af | 301 | py | Python | vivid/out_of_fold/boosting/__init__.py | upura/vivid | 6139697d60656d4774aceae880f5a07d929124a8 | ["BSD-2-Clause-FreeBSD"] | null | null | null | vivid/out_of_fold/boosting/__init__.py | upura/vivid | 6139697d60656d4774aceae880f5a07d929124a8 | ["BSD-2-Clause-FreeBSD"] | null | null | null | vivid/out_of_fold/boosting/__init__.py | upura/vivid | 6139697d60656d4774aceae880f5a07d929124a8 | ["BSD-2-Clause-FreeBSD"] | null | null | null | """
Boosting Module
A module that defines Out-Of-Fold estimators based on Gradient Boosted Decision Tree family algorithms
"""
from .lgbm import LGBMClassifierOutOfFold, LGBMRegressorOutOfFold
from .xgboost import XGBoostRegressorOutOfFold, XGBoostClassifierOutOfFold, OptunaXGBClassifierOutOfFold, \
OptunaXGBRegressionOutOfFold
| 30.1 | 107 | 0.850498 | 23 | 301 | 11.130435 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106312 | 301 | 9 | 108 | 33.444444 | 0.951673 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
abd47f3b6fbeb154d77e6a54c66e5aa9d493914a | 81 | py | Python | ubvi/optimization/__init__.py | trevorcampbell/ubvi | a71c14c9a1d588702f157f83a50619647856fd8b | ["MIT"] | 5 | 2019-07-22T14:40:19.000Z | 2020-10-15T13:23:08.000Z | ubvi/optimization/__init__.py | trevorcampbell/ubvi | a71c14c9a1d588702f157f83a50619647856fd8b | ["MIT"] | 3 | 2019-10-02T20:22:56.000Z | 2019-10-04T20:34:44.000Z | ubvi/optimization/__init__.py | trevorcampbell/ubvi | a71c14c9a1d588702f157f83a50619647856fd8b | ["MIT"] | 2 | 2019-07-23T02:11:49.000Z | 2019-10-24T06:57:23.000Z | from .adam import adam
from .sgd import sgd
from .simplex_sgd import simplex_sgd
| 20.25 | 36 | 0.814815 | 14 | 81 | 4.571429 | 0.357143 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 81 | 3 | 37 | 27 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e631631cab6fd6ff7953124cce378bb10ea30475 | 34 | wsgi | Python | annotator-store.wsgi | FUB-HCC/annotator-store | fb1f03d18770078f84c6a73a41cfba292235a59a | [
"MIT"
] | null | null | null | annotator-store.wsgi | FUB-HCC/annotator-store | fb1f03d18770078f84c6a73a41cfba292235a59a | [
"MIT"
] | null | null | null | annotator-store.wsgi | FUB-HCC/annotator-store | fb1f03d18770078f84c6a73a41cfba292235a59a | [
"MIT"
] | null | null | null | from run import app as application | 34 | 34 | 0.852941 | 6 | 34 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 34 | 1 | 34 | 34 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
058a098556885dd6222851b0ac0695f739179ebc | 231 | py | Python | testsuite/utils/cached_property.py | itrofimow/yandex-taxi-testsuite | bea758af35ae19db929f4b2b99d2a2917ff4c147 | ["MIT"] | null | null | null | testsuite/utils/cached_property.py | itrofimow/yandex-taxi-testsuite | bea758af35ae19db929f4b2b99d2a2917ff4c147 | ["MIT"] | null | null | null | testsuite/utils/cached_property.py | itrofimow/yandex-taxi-testsuite | bea758af35ae19db929f4b2b99d2a2917ff4c147 | ["MIT"] | null | null | null | # pylint: disable=import-only-modules
# flake8: noqa
import sys
if sys.version_info >= (3, 8):  # functools.cached_property exists from 3.8 on
# pylint: disable=no-name-in-module
from functools import cached_property
else:
from cached_property import cached_property
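Whichever branch supplies `cached_property`, usage is identical; a minimal sketch (the `Circle` class is illustrative, not part of the testsuite):

```python
from functools import cached_property  # stdlib since Python 3.8


class Circle:
    def __init__(self, radius):
        self.radius = radius
        self.calls = 0

    @cached_property
    def area(self):
        # Runs once per instance; the result is stored in the instance's
        # __dict__ and served directly on every later access.
        self.calls += 1
        return 3.14159 * self.radius ** 2


c = Circle(2.0)
assert c.area == c.area  # second access is served from the cache
assert c.calls == 1      # the property body ran only once
```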
| 23.1 | 47 | 0.748918 | 33 | 231 | 5.121212 | 0.666667 | 0.248521 | 0.236686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015544 | 0.164502 | 231 | 9 | 48 | 25.666667 | 0.860104 | 0.354978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
058cbecdb39437f68bfbce67210d01c0619bb488 | 49 | py | Python | icarus_simulator/strategies/zone_bneck/__init__.py | RubenFr/ICARUS-framework | e57a1f50c3bb9522b2a279fee6b625628afd056f | ["MIT"] | 5 | 2021-08-31T08:07:41.000Z | 2022-01-04T02:09:25.000Z | icarus_simulator/strategies/zone_bneck/__init__.py | RubenFr/ICARUS-framework | e57a1f50c3bb9522b2a279fee6b625628afd056f | ["MIT"] | 3 | 2021-09-23T09:06:35.000Z | 2021-12-08T04:53:01.000Z | icarus_simulator/strategies/zone_bneck/__init__.py | RubenFr/ICARUS-framework | e57a1f50c3bb9522b2a279fee6b625628afd056f | ["MIT"] | 2 | 2022-01-19T17:50:56.000Z | 2022-03-06T18:59:41.000Z | from .detect_bneck_strat import DetectBneckStrat
| 24.5 | 48 | 0.897959 | 6 | 49 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
55ab1e9def726a59e11e7f5e75a4148b40eb4682 | 1,391 | py | Python | tests/test_swap_exceptions.py | tomgrin10/swap-exceptions | 45747b7e61b974646d2a4eca2ceff9587f44d629 | ["MIT"] | 2 | 2020-08-28T14:48:12.000Z | 2020-10-08T09:05:01.000Z | tests/test_swap_exceptions.py | tomgrin10/swap-exceptions | 45747b7e61b974646d2a4eca2ceff9587f44d629 | ["MIT"] | null | null | null | tests/test_swap_exceptions.py | tomgrin10/swap-exceptions | 45747b7e61b974646d2a4eca2ceff9587f44d629 | ["MIT"] | null | null | null | from swap_exceptions import swap_exceptions
def test__swap_exceptions__context_manager():
# Arrange
mapping = {ValueError: KeyError("AAAA")}
exc_to_raise = ValueError()
expected_raised_exc = mapping[type(exc_to_raise)]
# Act
raised_exc = None
try:
with swap_exceptions(mapping):
raise exc_to_raise
except Exception as e:
raised_exc = e
# Assert
assert raised_exc is expected_raised_exc
def test__swap_exceptions__decorator():
# Arrange
mapping = {ValueError: KeyError("AAAA")}
exc_to_raise = ValueError()
expected_raised_exc = mapping[type(exc_to_raise)]
@swap_exceptions(mapping)
def foo():
raise exc_to_raise
# Act
raised_exc = None
try:
foo()
except Exception as e:
raised_exc = e
# Assert
assert raised_exc is expected_raised_exc
def test__swap_exceptions__lambda_exception_target():
# Arrange
mapping = {ValueError: lambda e: KeyError(e)}
exc_to_raise = ValueError()
expected_raised_exc = mapping[type(exc_to_raise)](exc_to_raise)
# Act
raised_exc = None
try:
with swap_exceptions(mapping):
raise exc_to_raise
except Exception as e:
raised_exc = e
# Assert
assert type(raised_exc) is type(expected_raised_exc)
assert raised_exc.args == expected_raised_exc.args
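For context, the behaviour these tests pin down (context manager, decorator, instance or callable mapping targets) can be satisfied by a small `ContextDecorator`; this is a sketch under the assumption that the published `swap-exceptions` package differs in its details:

```python
from contextlib import ContextDecorator


class swap_exceptions(ContextDecorator):
    """Swap exceptions raised in the guarded block according to `mapping`.

    Mapping values may be exception instances (raised as-is) or callables
    that receive the caught exception and return the one to raise.
    Lookup is by exact exception type.
    """

    def __init__(self, mapping):
        self.mapping = mapping

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        target = self.mapping.get(exc_type)
        if target is not None:
            # Raising here replaces the original exception (kept as __context__).
            raise target(exc) if callable(target) else target
        return False  # unmapped exceptions propagate unchanged
```

`ContextDecorator` is what lets the same object serve both as a `with`-block guard and as a function decorator, matching the first two tests.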
| 23.183333 | 67 | 0.677211 | 176 | 1,391 | 4.971591 | 0.193182 | 0.174857 | 0.114286 | 0.068571 | 0.715429 | 0.715429 | 0.715429 | 0.715429 | 0.715429 | 0.676571 | 0 | 0 | 0.25018 | 1,391 | 59 | 68 | 23.576271 | 0.838926 | 0.040259 | 0 | 0.702703 | 0 | 0 | 0.006038 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 1 | 0.108108 | false | 0 | 0.027027 | 0 | 0.135135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55c01f89c4e6f9945d3408f49c4504a8632fcbf0 | 23 | py | Python | contrib/diggext/drivers/devices/consoleservers/__init__.py | thekad/clusto | c141ea3ef4931c6a21fdf42845c6e9de5ee08caa | ["BSD-3-Clause"] | 216 | 2015-01-10T17:03:25.000Z | 2022-03-24T07:23:41.000Z | contrib/diggext/drivers/devices/consoleservers/__init__.py | thekad/clusto | c141ea3ef4931c6a21fdf42845c6e9de5ee08caa | ["BSD-3-Clause"] | 23 | 2015-01-08T16:51:22.000Z | 2021-03-13T12:56:04.000Z | contrib/diggext/drivers/devices/consoleservers/__init__.py | thekad/clusto | c141ea3ef4931c6a21fdf42845c6e9de5ee08caa | ["BSD-3-Clause"] | 49 | 2015-01-08T00:13:17.000Z | 2021-09-22T02:01:20.000Z | from opengear import *
| 11.5 | 22 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e96cc262caf292659ec3b967cff4f10c1e819e79 | 103 | py | Python | argument_tuple_unpacking.py | shankar-shiv/CS1010E_Kattis_practice | 9a8597b7ab61d5afa108a8b943ca2bb3603180c6 | ["MIT"] | null | null | null | argument_tuple_unpacking.py | shankar-shiv/CS1010E_Kattis_practice | 9a8597b7ab61d5afa108a8b943ca2bb3603180c6 | ["MIT"] | null | null | null | argument_tuple_unpacking.py | shankar-shiv/CS1010E_Kattis_practice | 9a8597b7ab61d5afa108a8b943ca2bb3603180c6 | ["MIT"] | null | null | null | def f(a, b, c):
print(f"a = {a}")
print(f"b = {b}")
print(f"c = {c}")
t = (1, 2, 3)
f(*t)  # * unpacks the tuple into positional arguments: a=1, b=2, c=3
| 12.875 | 21 | 0.359223 | 23 | 103 | 1.608696 | 0.434783 | 0.486486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042254 | 0.31068 | 103 | 7 | 22 | 14.714286 | 0.478873 | 0 | 0 | 0 | 0 | 0 | 0.203884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.166667 | 0.5 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
e9a8cf6ace2b653fa05de7a56912eb72211ee231 | 65 | py | Python | mappgene/subscripts/__init__.py | aavilaherrera/mappgene | 899e54217221e18b3b7c32afb2dec0cae43f203c | ["BSD-3-Clause"] | 7 | 2021-04-15T05:06:55.000Z | 2022-02-23T22:18:49.000Z | mappgene/subscripts/__init__.py | aavilaherrera/mappgene | 899e54217221e18b3b7c32afb2dec0cae43f203c | ["BSD-3-Clause"] | 1 | 2021-07-16T23:50:15.000Z | 2021-07-16T23:50:15.000Z | mappgene/subscripts/__init__.py | aavilaherrera/mappgene | 899e54217221e18b3b7c32afb2dec0cae43f203c | ["BSD-3-Clause"] | 5 | 2021-04-16T05:03:56.000Z | 2021-12-21T18:53:14.000Z | from .utilities import *
from .vpipe import *
from .ivar import * | 21.666667 | 24 | 0.738462 | 9 | 65 | 5.333333 | 0.555556 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169231 | 65 | 3 | 25 | 21.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e9bddfe5b3d78a3c792aa4324772742f24c31003 | 32 | py | Python | src/dbspy/core/spectrum/dbs/raw/__init__.py | ZhengKeli/PositronSpector | be0281fe50fe634183b6f239f03b7140c1dc0b7f | ["MIT"] | 1 | 2019-06-18T09:23:42.000Z | 2019-06-18T09:23:42.000Z | src/dbspy/core/spectrum/dbs/raw/__init__.py | ZhengKeli/DBSpy | be0281fe50fe634183b6f239f03b7140c1dc0b7f | ["MIT"] | null | null | null | src/dbspy/core/spectrum/dbs/raw/__init__.py | ZhengKeli/DBSpy | be0281fe50fe634183b6f239f03b7140c1dc0b7f | ["MIT"] | null | null | null | from ._raw import Conf, Process
| 16 | 31 | 0.78125 | 5 | 32 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 1 | 32 | 32 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e9c52ac09ab88cfc0a665bd36daa94992c03e4d4 | 30,722 | py | Python | tests/conftest.py | BenSchZA/aquarius | 2041605bc44ca03d95617fd30bc9ebf312f90beb | ["Apache-2.0"] | null | null | null | tests/conftest.py | BenSchZA/aquarius | 2041605bc44ca03d95617fd30bc9ebf312f90beb | ["Apache-2.0"] | null | null | null | tests/conftest.py | BenSchZA/aquarius | 2041605bc44ca03d95617fd30bc9ebf312f90beb | ["Apache-2.0"] | null | null | null | # Copyright 2018 Ocean Protocol Foundation
# SPDX-License-Identifier: Apache-2.0
import copy
import json
import pytest
from aquarius.constants import BaseURLs
from aquarius.run import app
app = app
@pytest.fixture
def base_ddo_url():
return BaseURLs.BASE_AQUARIUS_URL + '/assets/ddo'
@pytest.fixture
def client_with_no_data():
client = app.test_client()
client.delete(BaseURLs.BASE_AQUARIUS_URL + '/assets/ddo')
yield client
@pytest.fixture
def client():
client = app.test_client()
client.delete(BaseURLs.BASE_AQUARIUS_URL + '/assets/ddo')
post = client.post(BaseURLs.BASE_AQUARIUS_URL + '/assets/ddo',
data=json.dumps(json_update),
content_type='application/json')
if post.status_code not in (200, 201):
raise AssertionError(f'register asset failed: {post}')
    post2 = client.post(BaseURLs.BASE_AQUARIUS_URL + '/assets/ddo',
                        data=json.dumps(json_dict),
                        content_type='application/json')
    if post2.status_code not in (200, 201):
        raise AssertionError(f'register asset failed: {post2}')
yield client
client.delete(
BaseURLs.BASE_AQUARIUS_URL + '/assets/ddo/%s' % json.loads(post.data.decode('utf-8'))['id'])
client.delete(
BaseURLs.BASE_AQUARIUS_URL + '/assets/ddo/%s' % json.loads(post2.data.decode('utf-8'))[
'id'])
json_dict = {
"@context": "https://w3id.org/did/v1",
"id": "did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430",
"created": "2019-05-22T08:44:27Z",
"publicKey": [
{
"id": "did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430",
"type": "EthereumECDSAKey",
"owner": "0x00Bd138aBD70e2F00903268F3Db08f2D25677C9e"
}
],
"authentication": [
{
"type": "RsaSignatureAuthentication2018",
"publicKey": "did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430"
}
],
"service": [
{
"type": "authorization",
"serviceEndpoint": "http://localhost:12001",
"service": "SecretStore",
"index": 0
},
{
"type": "access",
"serviceEndpoint": "http://localhost:8030/api/v1/brizo/services/consume",
"purchaseEndpoint": "http://localhost:8030/api/v1/brizo/services/access/initialize",
"index": 1,
"templateId": "0x208aca4B0316C9996F085cbD57E01c11Bc0E7cb1",
"name": "dataAssetAccessServiceAgreement",
"creator": "",
"serviceAgreementTemplate": {
"contractName": "EscrowAccessSecretStoreTemplate",
"events": [
{
"name": "AgreementCreated",
"actorType": "consumer",
"handler": {
"moduleName": "escrowAccessSecretStoreTemplate",
"functionName": "fulfillLockRewardCondition",
"version": "0.1"
}
}
],
"fulfillmentOrder": [
"lockReward.fulfill",
"accessSecretStore.fulfill",
"escrowReward.fulfill"
],
"conditionDependency": {
"lockReward": [],
"accessSecretStore": [],
"escrowReward": [
"lockReward",
"accessSecretStore"
]
},
"conditions": [
{
"name": "lockReward",
"timelock": 0,
"timeout": 0,
"contractName": "LockRewardCondition",
"functionName": "fulfill",
"events": [
{
"name": "Fulfilled",
"actorType": "publisher",
"handler": {
"moduleName": "lockRewardCondition",
"functionName": "fulfillAccessSecretStoreCondition",
"version": "0.1"
}
}
],
"parameters": [
{
"name": "_rewardAddress",
"type": "address",
"value": "0x2AaC920AA4D10b80db9ed0E4EC04A3ff612F2bc6"
},
{
"name": "_amount",
"type": "uint256",
"value": "888000000000000000000000000000000"
}
]
},
{
"name": "accessSecretStore",
"timelock": 0,
"timeout": 0,
"contractName": "AccessSecretStoreCondition",
"functionName": "fulfill",
"events": [
{
"name": "Fulfilled",
"actorType": "publisher",
"handler": {
"moduleName": "accessSecretStore",
"functionName": "fulfillEscrowRewardCondition",
"version": "0.1"
}
},
{
"name": "TimedOut",
"actorType": "consumer",
"handler": {
"moduleName": "accessSecretStore",
"functionName": "fulfillEscrowRewardCondition",
"version": "0.1"
}
}
],
"parameters": [
{
"name": "_documentId",
"type": "bytes32",
"value": "0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430"
},
{
"name": "_grantee",
"type": "address",
"value": ""
}
]
},
{
"name": "escrowReward",
"timelock": 0,
"timeout": 0,
"contractName": "EscrowReward",
"functionName": "fulfill",
"events": [
{
"name": "Fulfilled",
"actorType": "publisher",
"handler": {
"moduleName": "escrowRewardCondition",
"functionName": "verifyRewardTokens",
"version": "0.1"
}
}
],
"parameters": [
{
"name": "_amount",
"type": "uint256",
"value": "888000000000000000000000000000000"
},
{
"name": "_receiver",
"type": "address",
"value": ""
},
{
"name": "_sender",
"type": "address",
"value": ""
},
{
"name": "_lockCondition",
"type": "bytes32",
"value": ""
},
{
"name": "_releaseCondition",
"type": "bytes32",
"value": ""
}
]
}
]
}
},
{
"type": "metadata",
"serviceEndpoint": "http://localhost:5000/api/v1/aquarius/assets/ddo/did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430",
"attributes": {
"main": {
"name": "Ocean protocol white paper",
"type": "dataset",
"dateCreated": "2012-10-10T17:00:00Z",
"datePublished": "2012-10-10T17:00:00Z",
"author": "Ocean Protocol Foundation Ltd.",
"license": "CC-BY",
"price": "888000000000000000000000000000000",
"files": [
{
"checksum": "efb2c764274b745f5fc37f97c6b0e761",
"contentType": "text/csv",
"checksumType": "MD5",
"contentLength": "4535431",
"resourceId": "access-log2018-02-13-15-17-29-18386C502CAEA932",
"index": 0
},
{
"checksum": "efb2c764274b745f5fc37f97c6b0e761",
"contentType": "text/csv",
"contentLength": "4535431",
"resourceId": "access-log2018-02-13-15-17-29-18386C502CAEA932",
"index": 1
},
{
"index": 2,
"contentType": "text/csv",
}
]
},
"encryptedFiles": "<tests.resources.mocks.secret_store_mock.SecretStoreMock object at 0x7f8146a94710>.0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430!![{\"url\": \"https://testocnfiles.blob.core.windows.net/testfiles/testzkp.pdf\", \"checksum\": \"efb2c764274b745f5fc37f97c6b0e761\", \"checksumType\": \"MD5\", \"contentLength\": \"4535431\", \"resourceId\": \"access-log2018-02-13-15-17-29-18386C502CAEA932\"}, {\"url\": \"s3://ocean-test-osmosis-data-plugin-dataseeding-1537375953/data.txt\", \"checksum\": \"efb2c764274b745f5fc37f97c6b0e761\", \"contentLength\": \"4535431\", \"resourceId\": \"access-log2018-02-13-15-17-29-18386C502CAEA932\"}, {\"url\": \"http://ipv4.download.thinkbroadband.com/5MB.zip\"}]!!0",
"curation": {
"rating": 0.93,
"numVotes": 123,
"schema": "Binary Voting"
},
"additionalInformation": {
"description": "Introduce the main concepts and vision behind ocean protocol",
"copyrightHolder": "Ocean Protocol Foundation Ltd.",
"workExample": "Text PDF",
"inLanguage": "en",
"categories": [
"white-papers"
],
"tags": ["data exchange", "sharing", "curation", "bonding curve"],
"links": [
{
"url": "http://data.ceda.ac.uk/badc/ukcp09/data/gridded-land-obs/gridded-land-obs"
"-daily/"
},
{
"url": "http://data.ceda.ac.uk/badc/ukcp09/data/gridded-land-obs/gridded-land-obs"
"-averages-25km/"
},
{
"url": "http://data.ceda.ac.uk/badc/ukcp09/"
}
],
"updateFrequency": "yearly",
"structuredMarkup": [
{
"uri": "http://skos.um.es/unescothes/C01194/jsonld",
"mediaType": "application/ld+json"
},
{
"uri": "http://skos.um.es/unescothes/C01194/turtle",
"mediaType": "text/turtle"
}
]
}
},
"index": 2
}
],
"proof": {
"type": "DDOIntegritySignature",
"created": "2019-05-22T08:44:27Z",
"creator": "0x00Bd138aBD70e2F00903268F3Db08f2D25677C9e",
"signatureValue": "0xbd7b46b3ac664167bc70ac211b1a1da0baed9ead91613a5f02dfc25c1bb6e3ff40861b455017e8a587fd4e37b703436072598c3a81ec88be28bfe33b61554a471b"
}
}
json_dict2 = {
"@context": "https://w3id.org/did/v1",
"id": "did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430",
"created": "2019-05-22T08:44:27Z",
"publicKey": [
{
"id": "did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430",
"type": "EthereumECDSAKey",
"owner": "0x00Bd138aBD70e2F00903268F3Db08f2D25677C9e"
}
],
"authentication": [
{
"type": "RsaSignatureAuthentication2018",
"publicKey": "did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430"
}
],
"service": [
{
"type": "authorization",
"serviceEndpoint": "http://localhost:12001",
"service": "SecretStore",
"index": 0
},
{
"type": "access",
"serviceEndpoint": "http://localhost:8030/api/v1/brizo/services/consume",
"purchaseEndpoint": "http://localhost:8030/api/v1/brizo/services/access/initialize",
"index": 1,
"templateId": "0x208aca4B0316C9996F085cbD57E01c11Bc0E7cb1",
"name": "dataAssetAccessServiceAgreement",
"creator": "",
"serviceAgreementTemplate": {
"contractName": "EscrowAccessSecretStoreTemplate",
"events": [
{
"name": "AgreementCreated",
"actorType": "consumer",
"handler": {
"moduleName": "escrowAccessSecretStoreTemplate",
"functionName": "fulfillLockRewardCondition",
"version": "0.1"
}
}
],
"fulfillmentOrder": [
"lockReward.fulfill",
"accessSecretStore.fulfill",
"escrowReward.fulfill"
],
"conditionDependency": {
"lockReward": [],
"accessSecretStore": [],
"escrowReward": [
"lockReward",
"accessSecretStore"
]
},
"conditions": [
{
"name": "lockReward",
"timelock": 0,
"timeout": 0,
"contractName": "LockRewardCondition",
"functionName": "fulfill",
"events": [
{
"name": "Fulfilled",
"actorType": "publisher",
"handler": {
"moduleName": "lockRewardCondition",
"functionName": "fulfillAccessSecretStoreCondition",
"version": "0.1"
}
}
],
"parameters": [
{
"name": "_rewardAddress",
"type": "address",
"value": "0x2AaC920AA4D10b80db9ed0E4EC04A3ff612F2bc6"
},
{
"name": "_amount",
"type": "uint256",
"value": "888000000000000000000000000000000"
}
]
},
{
"name": "accessSecretStore",
"timelock": 0,
"timeout": 0,
"contractName": "AccessSecretStoreCondition",
"functionName": "fulfill",
"events": [
{
"name": "Fulfilled",
"actorType": "publisher",
"handler": {
"moduleName": "accessSecretStore",
"functionName": "fulfillEscrowRewardCondition",
"version": "0.1"
}
},
{
"name": "TimedOut",
"actorType": "consumer",
"handler": {
"moduleName": "accessSecretStore",
"functionName": "fulfillEscrowRewardCondition",
"version": "0.1"
}
}
],
"parameters": [
{
"name": "_documentId",
"type": "bytes32",
"value": "0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430"
},
{
"name": "_grantee",
"type": "address",
"value": ""
}
]
},
{
"name": "escrowReward",
"timelock": 0,
"timeout": 0,
"contractName": "EscrowReward",
"functionName": "fulfill",
"events": [
{
"name": "Fulfilled",
"actorType": "publisher",
"handler": {
"moduleName": "escrowRewardCondition",
"functionName": "verifyRewardTokens",
"version": "0.1"
}
}
],
"parameters": [
{
"name": "_amount",
"type": "uint256",
"value": "888000000000000000000000000000000"
},
{
"name": "_receiver",
"type": "address",
"value": ""
},
{
"name": "_sender",
"type": "address",
"value": ""
},
{
"name": "_lockCondition",
"type": "bytes32",
"value": ""
},
{
"name": "_releaseCondition",
"type": "bytes32",
"value": ""
}
]
}
]
}
},
{
"type": "metadata",
"serviceEndpoint": "http://localhost:5000/api/v1/aquarius/assets/ddo/did:op:0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430",
"attributes": {
"main": {
"name": "Ocean protocol white paper",
"type": "dataset",
"dateCreated": "2012-10-10T17:00:00Z",
"datePublished": "2012-10-10T17:00:00Z",
"author": "Ocean Protocol Foundation Ltd.",
"license": "CC-BY",
"price": "888000000000000000000000000000000",
"files": [
{
"checksum": "efb2c764274b745f5fc37f97c6b0e761",
"contentType": "text/csv",
"checksumType": "MD5",
"contentLength": "4535431",
"resourceId": "access-log2018-02-13-15-17-29-18386C502CAEA932",
"index": 0
},
{
"checksum": "efb2c764274b745f5fc37f97c6b0e761",
"contentType": "text/csv",
"contentLength": "4535431",
"resourceId": "access-log2018-02-13-15-17-29-18386C502CAEA932",
"index": 1
},
{
"index": 2,
"contentType": "text/csv",
}
],
},
"encryptedFiles": "<tests.resources.mocks.secret_store_mock.SecretStoreMock object at 0x7f8146a94710>.0c184915b07b44c888d468be85a9b28253e80070e5294b1aaed81c2f0264e430!![{\"url\": \"https://testocnfiles.blob.core.windows.net/testfiles/testzkp.pdf\", \"checksum\": \"efb2c764274b745f5fc37f97c6b0e761\", \"checksumType\": \"MD5\", \"contentLength\": \"4535431\", \"resourceId\": \"access-log2018-02-13-15-17-29-18386C502CAEA932\"}, {\"url\": \"s3://ocean-test-osmosis-data-plugin-dataseeding-1537375953/data.txt\", \"checksum\": \"efb2c764274b745f5fc37f97c6b0e761\", \"contentLength\": \"4535431\", \"resourceId\": \"access-log2018-02-13-15-17-29-18386C502CAEA932\"}, {\"url\": \"http://ipv4.download.thinkbroadband.com/5MB.zip\"}]!!0",
"curation": {
"rating": 0.93,
"numVotes": 123,
"schema": "Binary Voting",
"isListed": False
},
"additionalInformation": {
"description": "Introduce the main concepts and vision behind ocean protocol",
"copyrightHolder": "Ocean Protocol Foundation Ltd.",
"workExample": "Text PDF",
"inLanguage": "en",
"categories": [
"white-papers"
],
"tags": ["data exchange", "sharing", "curation", "bonding curve"],
"links": [
{
"url": "http://data.ceda.ac.uk/badc/ukcp09/data/gridded-land-obs/gridded-land-obs"
"-daily/"
},
{
"url": "http://data.ceda.ac.uk/badc/ukcp09/data/gridded-land-obs/gridded-land-obs"
"-averages-25km/"
},
{
"url": "http://data.ceda.ac.uk/badc/ukcp09/"
}
],
"updateFrequency": "yearly",
"structuredMarkup": [
{
"uri": "http://skos.um.es/unescothes/C01194/jsonld",
"mediaType": "application/ld+json"
},
{
"uri": "http://skos.um.es/unescothes/C01194/turtle",
"mediaType": "text/turtle"
}
]
}
},
"index": 2
}
],
"proof": {
"type": "DDOIntegritySignature",
"created": "2019-05-22T08:44:27Z",
"creator": "0x00Bd138aBD70e2F00903268F3Db08f2D25677C9e",
"signatureValue": "0xbd7b46b3ac664167bc70ac211b1a1da0baed9ead91613a5f02dfc25c1bb6e3ff40861b455017e8a587fd4e37b703436072598c3a81ec88be28bfe33b61554a471b"
}
}
json_dict_no_metadata = {"publisherId": "0x2"}
json_dict_no_valid_metadata = {"publisherId": "0x4",
"main": {},
"assetId": "002"
}
json_before = {
"@context": "https://w3id.org/future-method/v1",
"created": "2016-02-08T16:02:20Z",
"id": "did:op:112233445566778899",
"publicKey": [
{
"id": "did:op:123456789abcdefghi#keys-1",
"type": "RsaVerificationKey2018",
"owner": "did:op:123456789abcdefghi",
"publicKeyPem": "-----BEGIN PUBLIC KEY...END PUBLIC KEY-----\r\n"
},
{
"id": "did:op:123456789abcdefghi#keys-2",
"type": "Ed25519VerificationKey2018",
"owner": "did:op:123456789abcdefghi",
"publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV"
}
],
"authentication": [
{
"type": "RsaSignatureAuthentication2018",
"publicKey": "did:op:123456789abcdefghi#keys-1"
},
{
"type": "ieee2410Authentication2018",
"publicKey": "did:op:123456789abcdefghi#keys-2"
}
],
"proof": {
"type": "UUIDSignature",
"created": "2016-02-08T16:02:20Z",
"creator": "did:example:8uQhQMGzWxR8vw5P3UWH1ja",
"signatureValue": "QNB13Y7Q9...1tzjn4w=="
},
"service": [
{
"type": "Consume",
"index": 0,
"serviceEndpoint": "http://mybrizo.org/api/v1/brizo/services/consume?pubKey=${"
"pubKey}&serviceId={serviceId}&url={url}"
},
{
"type": "Compute",
"index": 1,
"serviceEndpoint": "http://mybrizo.org/api/v1/brizo/services/compute?pubKey=${"
"pubKey}&serviceId={serviceId}&algo={algo}&container={container}"
},
{
"type": "metadata",
"index": 2,
"serviceEndpoint": "http://myaquarius.org/api/v1/provider/assets/metadata/{did}",
"attributes": {
"main": {
"name": "UK Weather information 2011",
"type": "dataset",
"dateCreated": "2012-10-10T17:00:00Z",
"datePublished": "2012-10-10T17:00:00Z",
"author": "Met Office",
"license": "CC-BY",
"files": [{
"index": 0,
"contentLength": "4535431",
"contentType": "text/csv",
"encoding": "UTF-8",
"compression": "zip",
"resourceId": "access-log2018-02-13-15-17-29-18386C502CAEA932"
}
],
"price": "88888880000000000000",
},
"encryptedFiles": "0xkasdhfkljhasdfkjasdhf",
"curation": {
"rating": 0.0,
"numVotes": 0,
"schema": "Binary Votting",
"isListed": True
},
"additionalInformation": {
"description": "Weather information of UK including temperature and humidity",
"copyrightHolder": "Met Office",
"workExample": "stationId,latitude,longitude,datetime,temperature,"
"humidity /n 423432fsd,51.509865,-0.118092,"
"2011-01-01T10:55:11+00:00,7.2,68",
"inLanguage": "en",
"tags": ["weather", "uk", "2011", "temperature", "humidity"],
"updateFrequency": "yearly",
"structuredMarkup": [
{"uri": "http://skos.um.es/unescothes/C01194/jsonld",
"mediaType": "application/ld+json"},
{"uri": "http://skos.um.es/unescothes/C01194/turtle",
"mediaType": "text/turtle"}
],
"links": [
{
"name": "Sample of Asset Data",
"type": "sample",
"url": "https://foo.com/sample.csv"
},
{
"name": "Data Format Definition",
"type": "format",
"url": "https://foo.com/sample2.csv"
}
]
}
}
}
]
}
json_update = {
"@context": "https://w3id.org/future-method/v1",
"created": "2016-02-08T16:02:20Z",
"id": "did:op:112233445566778899",
"publicKey": [
{
"id": "did:op:123456789abcdefghi#keys-1",
"type": "RsaVerificationKey2018",
"owner": "did:op:123456789abcdefghi",
"publicKeyPem": "-----BEGIN PUBLIC KEY...END PUBLIC KEY-----\r\n"
},
{
"id": "did:op:123456789abcdefghi#keys-2",
"type": "Ed25519VerificationKey2018",
"owner": "did:op:123456789abcdefghi",
"publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV"
}
],
"authentication": [
{
"type": "RsaSignatureAuthentication2018",
"publicKey": "did:op:123456789abcdefghi#keys-1"
},
{
"type": "ieee2410Authentication2018",
"publicKey": "did:op:123456789abcdefghi#keys-2"
}
],
"proof": {
"type": "UUIDSignature",
"created": "2016-02-08T16:02:20Z",
"creator": "did:example:8uQhQMGzWxR8vw5P3UWH1ja",
"signatureValue": "QNB13Y7Q9...1tzjn4w=="
},
"service": [
{
"type": "Consume",
"index": 0,
"serviceEndpoint": "http://mybrizo.org/api/v1/brizo/services/consume?pubKey=${"
"pubKey}&serviceId={serviceId}&url={url}"
},
{
"type": "Compute",
"index": 1,
"serviceEndpoint": "http://mybrizo.org/api/v1/brizo/services/compute?pubKey=${"
"pubKey}&serviceId={serviceId}&algo={algo}&container={container}"
},
{
"type": "metadata",
"index": 2,
"serviceEndpoint": "http://myaquarius.org/api/v1/provider/assets/metadata/{did}",
"attributes": {
"main": {
"name": "UK Weather information 2012",
"type": "dataset",
"dateCreated": "2012-02-01T10:55:11Z",
"datePublished": "2012-02-01T10:55:11Z",
"author": "Met Office",
"license": "CC-BY",
"files": [{
"index": 0,
"contentLength": "4535431",
"contentType": "text/csv",
"encoding": "UTF-8",
"compression": "zip",
"resourceId": "access-log2018-02-13-15-17-29-18386C502CAEA932"
}],
"price": "15",
},
"encryptedFiles": "0xkasdhfkljhasdfkjasdhf",
"curation": {
"rating": 8.0,
"numVotes": 1,
"schema": "Binary Votting",
"isListed": True
},
"additionalInformation": {
"description": "Weather information of UK including temperature and humidity and white",
"copyrightHolder": "Met Office",
"workExample": "stationId,latitude,longitude,datetime,temperature,"
"humidity /n 423432fsd,51.509865,-0.118092,"
"2011-01-01T10:55:11+00:00,7.2,68",
"inLanguage": "en",
"tags": ["weather", "uk", "2011", "temperature", "humidity"],
"updateFrecuency": "yearly",
"structuredMarkup": [
{"uri": "http://skos.um.es/unescothes/C01194/jsonld",
"mediaType": "application/ld+json"},
{"uri": "http://skos.um.es/unescothes/C01194/turtle",
"mediaType": "text/turtle"}
],
"links": [
{
"name": "Sample of Asset Data",
"type": "sample",
"url": "https://foo.com/sample.csv"
},
{
"name": "Data Format Definition",
"type": "format",
"url": "https://foo.com/sample2.csv"
}
]
}
}
}
]
}
json_valid = {
"main": {
"name": "10 Monkey Species Small",
"dateCreated": "2012-02-01T10:55:11Z",
"author": "Mario",
"license": "CC0: Public Domain",
"price": "10",
"files": [
{
"index": 0,
"contentType": "application/zip",
"encoding": "UTF-8",
"compression": "zip",
"checksum": "2bf9d229d110d1976cdf85e9f3256c7f",
"checksumType": "MD5",
"contentLength": "12057507",
"url": "https://s3.amazonaws.com/assets/training.zip"
},
{
"index": 1,
"contentType": "text/txt",
"encoding": "UTF-8",
"compression": "none",
"checksum": "354d19c0733c47ef3a6cce5b633116b0",
"checksumType": "MD5",
"contentLength": "928",
"url": "https://s3.amazonaws.com/datacommons/monkey_labels.txt"
},
{
"index": 2,
"contentType": "application/zip",
"url": "https://s3.amazonaws.com/datacommons/validation.zip"
}
],
"type": "dataset",
},
"additionalInformation":{
"description": "EXAMPLE ONLY ",
"categories": [
"image"
],
"tags": [
"image data",
"classification",
"animals"
],
"workExample": "image path, id, label",
"links": [
{
"name": "example model",
"url": "https://drive.google.com/open?id=1uuz50RGiAW8YxRcWeQVgQglZpyAebgSM"
},
{
"name": "example code",
"type": "example code",
"url": "https://github.com/slothkong/CNN_classification_10_monkey_species"
},
{
"url": "https://s3.amazonaws.com/datacommons/links/discovery/n5151.jpg",
"name": "n5151.jpg",
"type": "discovery"
},
{
"url": "https://s3.amazonaws.com/datacommons/links/sample/sample.zip",
"name": "sample.zip",
"type": "sample"
}
],
"copyrightHolder": "Unknown",
"inLanguage": "en"
}
}
test_assets = []
for i in range(10):
a = copy.deepcopy(json_dict)
    a['id'] = a['id'][:-2] + str(i) + str(i)
test_assets.append(a)
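The loop above relies on `copy.deepcopy` so that mutating one asset's `id` cannot leak into the shared fixture; a self-contained sketch of the same pattern (the `base` dict here is illustrative, standing in for the full `json_dict` DDO):

```python
import copy

# Hypothetical minimal asset; only the id matters for this sketch.
base = {"id": "did:op:112233445566778899"}

assets = []
for i in range(3):
    a = copy.deepcopy(base)              # deep copy: fixtures share no nested state
    a["id"] = a["id"][:-2] + str(i) * 2  # replace the last two digits per fixture
    assets.append(a)

print([x["id"][-2:] for x in assets])  # → ['00', '11', '22']
print(base["id"][-2:])                 # → 99 (the original is untouched)
```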
json_request_consume = {
'requestId': "",
'consumerId': "",
'fixed_msg': "",
'sigEncJWT': ""
}
| 34.753394 | 741 | 0.472235 | 2,026 | 30,722 | 7.125864 | 0.19694 | 0.007619 | 0.019118 | 0.017317 | 0.885225 | 0.879684 | 0.867632 | 0.862368 | 0.862368 | 0.861952 | 0 | 0.134213 | 0.378654 | 30,722 | 883 | 742 | 34.792752 | 0.622086 | 0.002506 | 0 | 0.630233 | 0 | 0.006977 | 0.450116 | 0.137234 | 0 | 0 | 0.020691 | 0 | 0.001163 | 1 | 0.003488 | false | 0 | 0.005814 | 0.001163 | 0.010465 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e9d06118975d9f70373189adf00f048e55f62857 | 96 | py | Python | venv/lib/python3.8/site-packages/attr/__init__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/attr/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/attr/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/ff/38/49/e0ef10e4a47981a22b8d7ef7be44799c0e8d8dfa4c19740284474d9632 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
757ccef0e40582206e58f3db893122d0a899c977 | 6,887 | py | Python | test/rai/test_counterfactual_component.py | Azure/RAI-vNext-Preview | be1eb5581a89de26e319184ed3cb95ab2e6d32d1 | [
"MIT"
] | 8 | 2022-01-11T19:41:04.000Z | 2022-03-15T21:00:22.000Z | test/rai/test_counterfactual_component.py | Azure/RAI-vNext-Preview | be1eb5581a89de26e319184ed3cb95ab2e6d32d1 | [
"MIT"
] | 6 | 2022-01-10T21:42:42.000Z | 2022-03-16T14:54:24.000Z | test/rai/test_counterfactual_component.py | Azure/RAI-vNext-Preview | be1eb5581a89de26e319184ed3cb95ab2e6d32d1 | [
"MIT"
] | 1 | 2022-02-11T17:45:33.000Z | 2022-02-11T17:45:33.000Z | # ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import logging
from azure.ai.ml import MLClient, dsl, Input
from test.constants_for_test import Timeouts
from test.utilities_for_test import submit_and_wait
_logger = logging.getLogger(__file__)
logging.basicConfig(level=logging.INFO)
class TestCounterfactualComponent:
def test_classification_all_args(
self,
ml_client: MLClient,
component_config,
registered_adult_model_id: str,
rai_components,
):
version_string = component_config["version"]
@dsl.pipeline(
compute="cpucluster",
description="Test Counterfactual component with all arguments",
experiment_name=f"TestCounterfactualComponent_test_classification_all_args_{version_string}",
)
def test_counterfactual_classification(
target_column_name,
train_data,
test_data,
):
fetch_model_job = rai_components.fetch_model(
model_id=registered_adult_model_id
)
fetch_model_job.set_limits(timeout=Timeouts.DEFAULT_TIMEOUT)
construct_job = rai_components.rai_constructor(
title="Run built from DSL",
task_type="classification",
model_info_path=fetch_model_job.outputs.model_info_output_path,
train_dataset=train_data,
test_dataset=test_data,
target_column_name=target_column_name,
categorical_column_names='["Race", "Sex", "Workclass", "Marital Status", "Country", "Occupation"]',
maximum_rows_for_test_dataset=5000, # Should be default
classes="[]", # Should be default
)
construct_job.set_limits(timeout=Timeouts.DEFAULT_TIMEOUT)
counterfactual_job = rai_components.rai_counterfactual(
rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
total_cfs=10, # Case sensitivity bug!
method="random",
desired_class="opposite",
desired_range="[]",
permitted_range='{"Capital Gain": [0, 20000], "Hours per week": [0, 20]}',
features_to_vary='["Capital Gain", "Hours per week", "Age", "Country", "Sex"]',
feature_importance=True,
)
counterfactual_job.set_limits(timeout=Timeouts.COUNTERFACTUAL_TIMEOUT)
gather_job = rai_components.rai_gather(
constructor=construct_job.outputs.rai_insights_dashboard,
insight_1=counterfactual_job.outputs.counterfactual,
)
gather_job.set_limits(timeout=Timeouts.DEFAULT_TIMEOUT)
gather_job.outputs.dashboard.mode = "upload"
gather_job.outputs.ux_json.mode = "upload"
return {
"dashboard": gather_job.outputs.dashboard,
"ux_json": gather_job.outputs.ux_json,
}
adult_train_pq = Input(
type="uri_file", path=f"adult_train_pq:{version_string}", mode="download"
)
adult_test_pq = Input(
type="uri_file", path=f"adult_test_pq:{version_string}", mode="download"
)
rai_pipeline = test_counterfactual_classification(
target_column_name="income",
train_data=adult_train_pq,
test_data=adult_test_pq,
)
rai_pipeline_job = submit_and_wait(ml_client, rai_pipeline)
assert rai_pipeline_job is not None
def test_regression_all_args(
self,
ml_client: MLClient,
component_config,
registered_boston_model_id: str,
rai_components,
):
version_string = component_config["version"]
@dsl.pipeline(
compute="cpucluster",
description="Test Counterfactual component with all arguments",
experiment_name=f"TestCounterfactualComponent_test_regression_all_args_{version_string}",
)
def test_counterfactual_regression(
target_column_name,
train_data,
test_data,
):
fetch_model_job = rai_components.fetch_model(
model_id=registered_boston_model_id
)
fetch_model_job.set_limits(timeout=Timeouts.DEFAULT_TIMEOUT)
construct_job = rai_components.rai_constructor(
title="Run built from DSL",
task_type="regression",
model_info_path=fetch_model_job.outputs.model_info_output_path,
train_dataset=train_data,
test_dataset=test_data,
target_column_name=target_column_name,
categorical_column_names="[]",
maximum_rows_for_test_dataset=5000, # Should be default
classes="[]", # Should be default
)
construct_job.set_limits(timeout=Timeouts.DEFAULT_TIMEOUT)
counterfactual_job = rai_components.rai_counterfactual(
rai_insights_dashboard=construct_job.outputs.rai_insights_dashboard,
total_cfs=10, # Case sensitivity bug
method="kdtree",
desired_class="opposite", # Required argument bug...
desired_range="[20, 100]",
permitted_range='{"ZN": [0, 10], "AGE": [0, 50], "CRIM": [25, 50], "INDUS": [0, 10]}',
features_to_vary='["ZN", "AGE", "CRIM", "INDUS"]',
feature_importance=True,
)
counterfactual_job.set_limits(timeout=Timeouts.COUNTERFACTUAL_TIMEOUT)
gather_job = rai_components.rai_gather(
constructor=construct_job.outputs.rai_insights_dashboard,
insight_1=None,
insight_4=counterfactual_job.outputs.counterfactual,
)
            gather_job.set_limits(timeout=Timeouts.DEFAULT_TIMEOUT)
gather_job.outputs.dashboard.mode = "upload"
gather_job.outputs.ux_json.mode = "upload"
return {
"dashboard": gather_job.outputs.dashboard,
"ux_json": gather_job.outputs.ux_json,
}
adult_train_pq = Input(
type="uri_file", path=f"boston_train_pq:{version_string}", mode="download"
)
adult_test_pq = Input(
type="uri_file", path=f"boston_test_pq:{version_string}", mode="download"
)
rai_pipeline = test_counterfactual_regression(
target_column_name="y",
train_data=adult_train_pq,
test_data=adult_test_pq,
)
rai_pipeline_job = submit_and_wait(ml_client, rai_pipeline)
assert rai_pipeline_job is not None
| 39.809249 | 115 | 0.606069 | 700 | 6,887 | 5.572857 | 0.218571 | 0.041015 | 0.032812 | 0.038964 | 0.818252 | 0.818252 | 0.797744 | 0.771084 | 0.765445 | 0.706486 | 0 | 0.009267 | 0.294903 | 6,887 | 172 | 116 | 40.040698 | 0.794069 | 0.045448 | 0 | 0.589041 | 0 | 0.006849 | 0.138656 | 0.04053 | 0 | 0 | 0 | 0 | 0.013699 | 1 | 0.027397 | false | 0 | 0.041096 | 0 | 0.089041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
75cb5a7ff4f6cf072aa7f92495e79b4c8206a5de | 939 | py | Python | examples/requests-missing-timeout.py | bittner/bandit | 87ecc4079ea50d77be13ed72bbf5ad2eb0673c64 | [
"Apache-2.0"
] | null | null | null | examples/requests-missing-timeout.py | bittner/bandit | 87ecc4079ea50d77be13ed72bbf5ad2eb0673c64 | [
"Apache-2.0"
] | null | null | null | examples/requests-missing-timeout.py | bittner/bandit | 87ecc4079ea50d77be13ed72bbf5ad2eb0673c64 | [
"Apache-2.0"
] | null | null | null | import requests
requests.get('https://gmail.com')
requests.get('https://gmail.com', timeout=None)
requests.get('https://gmail.com', timeout=5)
requests.post('https://gmail.com')
requests.post('https://gmail.com', timeout=None)
requests.post('https://gmail.com', timeout=5)
requests.put('https://gmail.com')
requests.put('https://gmail.com', timeout=None)
requests.put('https://gmail.com', timeout=5)
requests.delete('https://gmail.com')
requests.delete('https://gmail.com', timeout=None)
requests.delete('https://gmail.com', timeout=5)
requests.patch('https://gmail.com')
requests.patch('https://gmail.com', timeout=None)
requests.patch('https://gmail.com', timeout=5)
requests.options('https://gmail.com')
requests.options('https://gmail.com', timeout=None)
requests.options('https://gmail.com', timeout=5)
requests.head('https://gmail.com')
requests.head('https://gmail.com', timeout=None)
requests.head('https://gmail.com', timeout=5)
| 39.125 | 51 | 0.728435 | 135 | 939 | 5.066667 | 0.111111 | 0.307018 | 0.399123 | 0.409357 | 0.979532 | 0.78655 | 0 | 0 | 0 | 0 | 0 | 0.007769 | 0.040469 | 939 | 23 | 52 | 40.826087 | 0.751387 | 0 | 0 | 0 | 0 | 0 | 0.380192 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
75e90ccae8c4e15459f01ae8fdff9dbf976187ce | 27 | py | Python | build/lib/__init__.py | nirvanasupermind/qlang | c3264a343f19af0de1161b006c6ec2ee86e73882 | [
"MIT"
] | null | null | null | build/lib/__init__.py | nirvanasupermind/qlang | c3264a343f19af0de1161b006c6ec2ee86e73882 | [
"MIT"
] | null | null | null | build/lib/__init__.py | nirvanasupermind/qlang | c3264a343f19af0de1161b006c6ec2ee86e73882 | [
"MIT"
] | null | null | null | from q import run, run_text | 27 | 27 | 0.814815 | 6 | 27 | 3.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f959d598d1f0ef52ccb401ef98f0d4b2e8038fed | 351 | py | Python | 10/00/2.py | pylangstudy/201710 | 139cad34d40f23beac85800633ec2ed63d530bfd | [
"CC0-1.0"
] | null | null | null | 10/00/2.py | pylangstudy/201710 | 139cad34d40f23beac85800633ec2ed63d530bfd | [
"CC0-1.0"
] | 25 | 2017-10-03T00:12:53.000Z | 2017-10-29T23:58:17.000Z | 10/00/2.py | pylangstudy/201710 | 139cad34d40f23beac85800633ec2ed63d530bfd | [
"CC0-1.0"
] | null | null | null | from pathlib import *
print(PurePosixPath('foo') == PurePosixPath('FOO'))
print(PureWindowsPath('foo') == PureWindowsPath('FOO'))
print(PureWindowsPath('FOO') in { PureWindowsPath('foo') })
print(PureWindowsPath('C:') < PureWindowsPath('d:'))
print(PureWindowsPath('foo') == PurePosixPath('foo'))
try:
    print(PureWindowsPath('foo') < PurePosixPath('foo'))
except TypeError as err:
    # Ordering paths of different flavours is unsupported and raises TypeError.
    print(err)
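The comparisons above pair naturally with flavour-specific parsing; a short illustrative sketch (the paths are chosen arbitrarily):

```python
from pathlib import PurePosixPath, PureWindowsPath

# Each flavour parses drives and separators by its own rules.
p = PureWindowsPath('c:/Program Files/PSF')
print(p.drive)  # → c:
print(p.parts)  # → ('c:\\', 'Program Files', 'PSF')
print(PurePosixPath('/usr/bin/python3').parts)  # → ('/', 'usr', 'bin', 'python3')
```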
| 39 | 59 | 0.720798 | 34 | 351 | 7.441176 | 0.294118 | 0.426877 | 0.363636 | 0.3083 | 0.474308 | 0.332016 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071225 | 351 | 8 | 60 | 43.875 | 0.776074 | 0 | 0 | 0 | 0 | 0 | 0.097143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0.857143 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f9b48630c9ce3daa3c050f43bba5d93389259213 | 111 | py | Python | templates/django_app_name/models.py | luiscberrocal/django_ansible_config | 0eca2c7a7d7a515efbd143a7b33334f9a0c2f2c5 | [
"MIT"
] | null | null | null | templates/django_app_name/models.py | luiscberrocal/django_ansible_config | 0eca2c7a7d7a515efbd143a7b33334f9a0c2f2c5 | [
"MIT"
] | 8 | 2021-01-04T18:15:53.000Z | 2021-03-14T13:53:31.000Z | templates/django_app_name/models.py | luiscberrocal/django_ansible_config | 0eca2c7a7d7a515efbd143a7b33334f9a0c2f2c5 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils.translation import gettext_lazy as _
# Create your models here.
| 27.75 | 54 | 0.81982 | 17 | 111 | 5.235294 | 0.764706 | 0.224719 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 111 | 3 | 55 | 37 | 0.927083 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f9b926bb9fa22bcf78ab5a3f149f7d85d0e150bc | 25 | py | Python | liver_ct_segmentation_package/losses/__init__.py | qbic-pipelines/liver-ct-segmentation-package | 9983741ae68b9906045a44f06837a69cf593a416 | [
"MIT"
] | 2 | 2020-04-03T23:02:00.000Z | 2021-12-31T05:18:27.000Z | loss/__init__.py | ryanwongsa/open-images-2019-challenge | b49e0933451c4bf9b31a9a8faf1bd8ba3dee1cc5 | [
"Apache-2.0"
] | 67 | 2021-08-10T18:15:09.000Z | 2022-03-31T18:15:15.000Z | loss/__init__.py | ryanwongsa/open-images-2019-challenge | b49e0933451c4bf9b31a9a8faf1bd8ba3dee1cc5 | [
"Apache-2.0"
] | 1 | 2021-08-10T12:47:02.000Z | 2021-08-10T12:47:02.000Z | from .focal_loss import * | 25 | 25 | 0.8 | 4 | 25 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f9d1cfa06a274de156722594bf44e225c0b7cc74 | 578 | py | Python | FirstStepsInPython/Basics/Exercise2 Conditional Statements/04.MetricConverter.py | Pittor052/SoftUni-Studies | 1ee6341082f6ccfa45b3e82824c37722bcf2fb31 | [
"MIT"
] | null | null | null | FirstStepsInPython/Basics/Exercise2 Conditional Statements/04.MetricConverter.py | Pittor052/SoftUni-Studies | 1ee6341082f6ccfa45b3e82824c37722bcf2fb31 | [
"MIT"
] | null | null | null | FirstStepsInPython/Basics/Exercise2 Conditional Statements/04.MetricConverter.py | Pittor052/SoftUni-Studies | 1ee6341082f6ccfa45b3e82824c37722bcf2fb31 | [
"MIT"
] | 1 | 2021-10-07T18:30:42.000Z | 2021-10-07T18:30:42.000Z | number = float(input())
convert_from = input()
convert_to = input()
mm = str("mm")
cm = str("cm")
m = str("m")
if convert_from == mm and convert_to == m:
print(f"{number / 1000:.3f}")
elif convert_from == m and convert_to == cm:
print(f"{number * 100:.3f}")
elif convert_from == cm and convert_to == m:
print(f"{number / 100:.3f}")
elif convert_from == cm and convert_to == mm:
print(f"{number * 10:.3f}")
elif convert_from == mm and convert_to == cm:
print(f"{number / 10:.3f}")
elif convert_from == m and convert_to == mm:
print(f"{number * 1000:.3f}") | 32.111111 | 45 | 0.624567 | 96 | 578 | 3.614583 | 0.197917 | 0.221902 | 0.207493 | 0.244957 | 0.806916 | 0.778098 | 0.73487 | 0.567723 | 0.26513 | 0.26513 | 0 | 0.050955 | 0.185121 | 578 | 18 | 46 | 32.111111 | 0.685775 | 0 | 0 | 0 | 0 | 0 | 0.195164 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f9e9a6288b8071574cea9686463ea82a2c5bc0a7 | 36 | py | Python | tests/test_travis.py | lbaumo/CLMM | 678422fd173c27a2bad7017b0c095a7c833bbd32 | [
"BSD-3-Clause"
] | null | null | null | tests/test_travis.py | lbaumo/CLMM | 678422fd173c27a2bad7017b0c095a7c833bbd32 | [
"BSD-3-Clause"
] | null | null | null | tests/test_travis.py | lbaumo/CLMM | 678422fd173c27a2bad7017b0c095a7c833bbd32 | [
"BSD-3-Clause"
] | null | null | null | def test_travis():
assert(True)
| 12 | 18 | 0.666667 | 5 | 36 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 36 | 2 | 19 | 18 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ddff24d1d65e3ff0f9a95f35c80263360fbaef06 | 110 | py | Python | day01/q1.py | anjaligoswami/dn-python | f723c159f20ef5e452fb76d6a0a0b9f55619a1a2 | [
"MIT"
] | null | null | null | day01/q1.py | anjaligoswami/dn-python | f723c159f20ef5e452fb76d6a0a0b9f55619a1a2 | [
"MIT"
] | null | null | null | day01/q1.py | anjaligoswami/dn-python | f723c159f20ef5e452fb76d6a0a0b9f55619a1a2 | [
"MIT"
] | null | null | null | #https://www.hackerrank.com/challenges/py-hello-world/problem
#qsn: print Hello, World!
print("Hello, World!") | 36.666667 | 61 | 0.754545 | 16 | 110 | 5.1875 | 0.6875 | 0.361446 | 0.361446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 110 | 3 | 62 | 36.666667 | 0.798077 | 0.763636 | 0 | 0 | 0 | 0 | 0.52 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
fb0e66af76bb94a5b8faa57668624fe31d37a30d | 119 | py | Python | lsframe/__init__.py | thcasey3/lsframe | f1d667a305ecd860417b7d1cbbfa1bbfcc40107e | [
"MIT"
] | null | null | null | lsframe/__init__.py | thcasey3/lsframe | f1d667a305ecd860417b7d1cbbfa1bbfcc40107e | [
"MIT"
] | null | null | null | lsframe/__init__.py | thcasey3/lsframe | f1d667a305ecd860417b7d1cbbfa1bbfcc40107e | [
"MIT"
] | null | null | null | from .start import *
from .intake import *
from .engine import *
from .tools import *
from .version import __version__
| 19.833333 | 32 | 0.756303 | 16 | 119 | 5.375 | 0.4375 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168067 | 119 | 5 | 33 | 23.8 | 0.868687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fb2a2299801948629bfc8d649be2135067d963bd | 105 | py | Python | pyhive/__init__.py | bgms/PyHive | 16a4896c17b5a592eec0ed929b1b1b93ff78331e | [
"Apache-2.0"
] | null | null | null | pyhive/__init__.py | bgms/PyHive | 16a4896c17b5a592eec0ed929b1b1b93ff78331e | [
"Apache-2.0"
] | null | null | null | pyhive/__init__.py | bgms/PyHive | 16a4896c17b5a592eec0ed929b1b1b93ff78331e | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '0.6.5-rc1'
| 26.25 | 39 | 0.838095 | 15 | 105 | 4.933333 | 0.733333 | 0.27027 | 0.432432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042553 | 0.104762 | 105 | 3 | 40 | 35 | 0.744681 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fb305921bf534e7ae6475ac06e9ba721001ee125 | 4,527 | py | Python | bluzelle/db.py | bluzelle/bluzelle-py | c0cd814cb0e021a51f76e75d2243935da81f99d3 | [
"Apache-2.0"
] | 1 | 2020-01-08T01:09:42.000Z | 2020-01-08T01:09:42.000Z | bluzelle/db.py | bluzelle/bluzelle-py | c0cd814cb0e021a51f76e75d2243935da81f99d3 | [
"Apache-2.0"
] | null | null | null | bluzelle/db.py | bluzelle/bluzelle-py | c0cd814cb0e021a51f76e75d2243935da81f99d3 | [
"Apache-2.0"
] | null | null | null | from pprint import pprint
import asyncio
import sys
import json
from bluzelle import bzapi
from bluzelle.lib.udp.udp_support import *
from bluzelle.lib.udp.test_udp import *
class DB:
    """Thin wrapper around the C++ database handle exposed by bzapi.

    Every underlying method returns a JSON string; the helpers below
    parse it and either return the payload or raise on an 'error' key.
    """

    def __init__(self, cpp_db):
        self.localhost_ip = "127.0.0.1"
        self.cpp_db = cpp_db

    def _call(self, meth, *args):
        """Invoke the named method on the underlying C++ database object."""
        method_handle = getattr(self.cpp_db, meth)
        return method_handle(*args)

    def _result_bool(self, meth, *args):
        """Call a method and interpret its JSON response as a success flag."""
        results = json.loads(self._call(meth, *args))
        if 'result' in results:
            return results['result'] == 1
        if 'error' in results:
            raise Exception(results['error'])
        raise Exception("Unknown error")

    def _result_field(self, meth, field, *args):
        """Call a method and return the named field of its JSON response."""
        results = json.loads(self._call(meth, *args))
        if field in results:
            return results[field]
        if 'error' in results:
            raise Exception(results['error'])
        raise Exception("Unknown error")

    def create(self, *args):
        return self._result_bool('create', *args)

    def update(self, *args):
        return self._result_bool('update', *args)

    def remove(self, *args):
        return self._result_bool('remove', *args)

    def has(self, *args):
        return self._result_bool('has', *args)

    def read(self, *args):
        return self._result_field('read', 'value', *args)

    def quick_read(self, *args):
        return self._result_field('quick_read', 'value', *args)

    def expire(self, *args):
        return self._result_bool('expire', *args)

    def persist(self, *args):
        return self._result_bool('persist', *args)

    def ttl(self, *args):
        return self._result_field('ttl', 'ttl', *args)

    def keys(self):
        return self._result_field('keys', 'keys')

    def size(self):
        results = json.loads(self._call('size'))
        if 'error' in results:
            raise Exception(results['error'])
        return results

    def swarm_status(self):
        return self.cpp_db.swarm_status()
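The wrapper relies on every cpp_db method returning a JSON string shaped like `{'result': 1}`, `{'value': ...}`, or `{'error': ...}`. A minimal sketch of that parsing convention against a hypothetical in-memory stand-in (FakeCppDb and its response shapes are assumptions, not the real pybind handle):

```python
import json

class FakeCppDb:
    """Hypothetical stand-in for the C++ handle: methods return JSON strings."""
    def __init__(self):
        self.store = {}

    def create(self, key, value):
        self.store[key] = value
        return json.dumps({'result': 1})

    def read(self, key):
        if key in self.store:
            return json.dumps({'value': self.store[key]})
        return json.dumps({'error': 'RECORD_NOT_FOUND'})

def parse_bool(response):
    # Same convention as the DB class: 'result' == 1 means success
    results = json.loads(response)
    if 'result' in results:
        return results['result'] == 1
    if 'error' in results:
        raise Exception(results['error'])
    raise Exception("Unknown error")

db = FakeCppDb()
assert parse_bool(db.create('k', 'v')) is True
assert json.loads(db.read('k'))['value'] == 'v'
```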
# tests/syntax/scripts/lists.py (toddrme2178/pyccel, MIT)
[1,4,5]
['a','b','c']
[x,y,z,t]
# torchblocks/optims/utils.py (lonePatient/TorchBlocks, MIT)
def get_optimizer_params(model, lr, lr_weight_decay_coef, num_layers):
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
if lr_weight_decay_coef < 1.0:
optimizer_grouped_parameters = [
{'params': [
p for n, p in param_optimizer
if 'bert.embeddings' not in n
and 'bert.encoder' not in n
and not any(nd in n for nd in no_decay)],
'weight_decay': 0.01},
{'params': [
p for n, p in param_optimizer
if 'bert.embeddings' not in n
and 'bert.encoder' not in n
and any(nd in n for nd in no_decay)],
'weight_decay': 0.0},
{'params': [
p for n, p in param_optimizer
if 'bert.embeddings' in n
and not any(nd in n for nd in no_decay)],
'lr': lr * lr_weight_decay_coef ** (num_layers + 1), 'weight_decay': 0.01},
{'params': [
p for n, p in param_optimizer
if 'bert.embeddings' in n
and any(nd in n for nd in no_decay)],
'lr': lr * lr_weight_decay_coef ** (num_layers + 1), 'weight_decay': 0.0}
]
        for i in range(num_layers):
            # Decaying parameters of encoder layer i; lower layers get a smaller lr.
            # (The original appended the no-decay group twice, leaving the decaying
            # encoder parameters in no group at all.)
            optimizer_grouped_parameters.append(
                {'params': [
                    p for n, p in param_optimizer
                    if 'bert.encoder.layer.{}.'.format(i) in n
                    and not any(nd in n for nd in no_decay)],
                    'lr': lr * lr_weight_decay_coef ** (num_layers - i), 'weight_decay': 0.01})
            # Non-decaying parameters (biases, LayerNorm) of encoder layer i
            optimizer_grouped_parameters.append(
                {'params': [
                    p for n, p in param_optimizer
                    if 'bert.encoder.layer.{}.'.format(i) in n
                    and any(nd in n for nd in no_decay)],
                    'lr': lr * lr_weight_decay_coef ** (num_layers - i), 'weight_decay': 0.0})
else:
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay': 0.0}
]
return optimizer_grouped_parameters
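The grouping logic only relies on `model.named_parameters()` yielding (name, parameter) pairs, so it can be illustrated without torch. MockModel and its parameter names below are hypothetical; this sketch exercises the simple branch (lr_weight_decay_coef >= 1.0), which splits parameters into a decaying and a non-decaying group:

```python
class MockModel:
    """Hypothetical model: named_parameters() is all the function needs."""
    def named_parameters(self):
        return [
            ('bert.embeddings.word_embeddings.weight', 'p0'),
            ('bert.encoder.layer.0.attention.self.query.weight', 'p1'),
            ('bert.encoder.layer.0.attention.self.query.bias', 'p2'),
            ('classifier.weight', 'p3'),
            ('classifier.bias', 'p4'),
        ]

no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
params = list(MockModel().named_parameters())

# Weights get weight decay; biases and LayerNorm parameters do not
decay_group = [p for n, p in params if not any(nd in n for nd in no_decay)]
no_decay_group = [p for n, p in params if any(nd in n for nd in no_decay)]

assert decay_group == ['p0', 'p1', 'p3']
assert no_decay_group == ['p2', 'p4']
```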
# RecMet/__init__.py (rijulizer/RecMet, Apache-2.0)
from RecMet.PythonMetrics import Metrics
from RecMet.PysparkMetrics import recmet
# src/qng/qng.py (misken/qng, MIT)
__author__ = 'misken'
import numpy as np
import scipy.stats as stats
import scipy.optimize
import math
def poissoninv(prob, mean):
"""
Return the cumulative inverse of the Poisson distribution.
Useful for capacity planning approximations. Uses normal
approximation to the Poisson distribution for mean > 50.
Parameters
----------
mean : float
mean of the Poisson distribution
prob :
percentile desired
Returns
-------
int
minimum value, c, such that P(X>c) <= prob
"""
return stats.poisson.ppf(prob, mean)
def erlangb_direct(load, c):
"""
    Return the probability of loss in an M/G/c/c system.
Parameters
----------
load : float
average arrival rate * average service time (units are erlangs)
c : int
number of servers
Returns
-------
float
probability arrival finds system full
"""
p = stats.poisson.pmf(c, load) / stats.poisson.cdf(c, load)
return p
def erlangb(load, c):
"""
    Return the probability of loss in an M/G/c/c system using a recursive approach.
Much faster than direct computation via
scipy.stats.poisson.pmf(c, load) / scipy.stats.poisson.cdf(c, load)
Parameters
----------
load : float
average arrival rate * average service time (units are erlangs)
c : int
number of servers
Returns
-------
float
probability arrival finds system full
"""
invb = 1.0
for j in range(1, c + 1):
invb = 1.0 + invb * j / load
b = 1.0 / invb
return b
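The loop above computes 1/B(c) via the standard Erlang-B recursion 1/B(j) = 1 + (j/load) * 1/B(j-1), with B(0) = 1. A self-contained cross-check against the direct truncated-Poisson form, using only the standard library (the function names are illustrative copies, not new API):

```python
import math

def erlangb(load, c):
    # Erlang-B via the recursion 1/B(j) = 1 + (j / load) * 1/B(j-1), B(0) = 1
    invb = 1.0
    for j in range(1, c + 1):
        invb = 1.0 + invb * j / load
    return 1.0 / invb

def erlangb_direct(load, c):
    # Direct form: Poisson pmf(c) / cdf(c), written with math.factorial
    pmf = math.exp(-load) * load ** c / math.factorial(c)
    cdf = sum(math.exp(-load) * load ** k / math.factorial(k) for k in range(c + 1))
    return pmf / cdf

# The two forms agree to floating-point precision
assert abs(erlangb(8.0, 10) - erlangb_direct(8.0, 10)) < 1e-9
```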
def erlangc(load, c):
"""
    Return the probability of delay in an M/M/c/inf system using the recursive Erlang B approach.
Parameters
----------
load : float
average arrival rate * average service time (units are erlangs)
c : int
number of servers
Returns
-------
float
probability all servers busy
"""
rho = load / float(c)
# if rho >= 1.0:
# raise ValueError("rho must be less than 1.0")
eb = erlangb(load, c)
ec = 1.0 / (rho + (1 - rho) * (1.0 / eb))
return ec
def erlangcinv(prob, load):
"""
Return the number of servers such that probability of delay in M/M/c/inf system is
less than specified probability
Parameters
----------
prob : float
threshold delay probability
load : float
average arrival rate * average service time (units are erlangs)
Returns
-------
c : int
number of servers
"""
c = np.ceil(load)
ec = erlangc(load, c)
if ec <= prob:
return c
else:
while ec > prob:
c += 1
ec = erlangc(load, c)
return c
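erlangcinv's linear search can be exercised end-to-end without scipy by pairing the recursive Erlang-B with Erlang-C. A sketch under arbitrary load/target values:

```python
def erlangb(load, c):
    invb = 1.0
    for j in range(1, c + 1):
        invb = 1.0 + invb * j / load
    return 1.0 / invb

def erlangc(load, c):
    rho = load / float(c)
    return 1.0 / (rho + (1 - rho) / erlangb(load, c))

# Smallest server count keeping P(delay) at or below 20% for a 10-erlang load
load, target = 10.0, 0.2
c = 11  # start just above the offered load so that rho < 1
while erlangc(load, c) > target:
    c += 1

assert erlangc(load, c) <= target
assert erlangc(load, c - 1) > target  # c is the minimal such count
```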
def mmc_prob_n(n, arr_rate, svc_rate, c):
"""
    Return the probability of n customers in system in an M/M/c/inf queue.
    Uses the recursive approach from Tijms, H.C. (1994), "Stochastic Models: An Algorithmic Approach",
John Wiley and Sons, Chichester (Section 4.5.1, p287)
Parameters
----------
n : int
number of customers for which probability is desired
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
probability n customers in system (in service plus in queue)
"""
rho = arr_rate / (svc_rate * float(c))
# Step 0: Initialization - p[0] is initialized to one via creation method
pbar = np.ones(max(n + 1, c))
# Step 1: compute pbar
for j in range(1, c):
pbar[j] = arr_rate * pbar[j - 1] / (j * svc_rate)
# Step 2: compute normalizing constant and normalize pbar
    gamma = np.sum(pbar[:c]) + rho * pbar[c - 1] / (1 - rho)  # sum only the first c terms; entries beyond c - 1 are filled in Step 3
p = pbar / gamma
# Step 3: compute probs beyond c - 1
for j in range(c, n + 1):
p[j] = p[c - 1] * (rho ** (j - c + 1))
return p[n]
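The same recursion can be checked with plain lists in place of numpy; mmc_prob_n_listed is an illustrative reimplementation, validated against the M/M/1 and M/M/2 closed forms:

```python
def mmc_prob_n_listed(n, arr_rate, svc_rate, c):
    # Tijms' recursion with plain lists; the normalizing constant sums
    # only the first c unnormalized terms.
    rho = arr_rate / (svc_rate * float(c))
    pbar = [1.0] * max(n + 1, c)
    for j in range(1, c):
        pbar[j] = arr_rate * pbar[j - 1] / (j * svc_rate)
    gamma = sum(pbar[:c]) + rho * pbar[c - 1] / (1 - rho)
    p = [x / gamma for x in pbar]
    for j in range(c, n + 1):
        p[j] = p[c - 1] * rho ** (j - c + 1)
    return p[n]

# M/M/1 closed form: p_n = (1 - rho) * rho**n
assert abs(mmc_prob_n_listed(3, 0.5, 1.0, 1) - 0.5 * 0.5 ** 3) < 1e-12
# M/M/2 closed form: p_0 = (1 - rho) / (1 + rho)
assert abs(mmc_prob_n_listed(0, 1.5, 1.0, 2) - 0.25 / 1.75) < 1e-12
```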
def mmc_mean_qsize(arr_rate, svc_rate, c):
"""
    Return the mean queue size in an M/M/c/inf queue.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
mean number of customers in queue
"""
rho = arr_rate / (svc_rate * float(c))
mean_qsize = (rho ** 2 / (1 - rho) ** 2) * mmc_prob_n(c - 1, arr_rate, svc_rate, c)
return mean_qsize
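With c = 1 the term P(n = c - 1) equals 1 - rho, so the formula collapses to the textbook M/M/1 result Lq = rho^2/(1 - rho). A quick sketch of that reduction:

```python
def mmc_mean_qsize_c1(arr_rate, svc_rate):
    # c = 1 special case of the formula above: P(0) = 1 - rho
    rho = arr_rate / svc_rate
    prob_c_minus_1 = 1.0 - rho
    return (rho ** 2 / (1 - rho) ** 2) * prob_c_minus_1

rho = 0.8
assert abs(mmc_mean_qsize_c1(0.8, 1.0) - rho ** 2 / (1 - rho)) < 1e-12
```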
def mmc_mean_syssize(arr_rate, svc_rate, c):
"""
    Return the mean system size in an M/M/c/inf queue.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
mean number of customers in queue + service
"""
load = arr_rate / svc_rate
rho = load / float(c)
mean_qsize = (rho ** 2 / (1 - rho) ** 2) * mmc_prob_n(c - 1, arr_rate, svc_rate, c)
mean_syssize = mean_qsize + load
return mean_syssize
def mmc_mean_qwait(arr_rate, svc_rate, c):
"""
    Return the mean wait time in queue in an M/M/c/inf queue.
Uses mmc_mean_qsize along with Little's Law.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
mean wait time in queue
"""
return mmc_mean_qsize(arr_rate, svc_rate, c) / arr_rate
def mmc_mean_systime(arr_rate, svc_rate, c):
"""
Return the mean time in system (wait in queue + service time) in M/M/c/inf queue.
    Uses mmc_mean_qsize along with Little's Law (via mmc_mean_qwait) and the relationship between W and Wq.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
mean wait time in queue
"""
return mmc_mean_qwait(arr_rate, svc_rate, c) + 1 / svc_rate
def mmc_prob_wait_normal(arr_rate, svc_rate, c):
"""
Return the approximate probability of waiting (i.e. erlang C) in M/M/c/inf queue using a normal approximation.
Uses normal approximation approach by Kolesar and Green, "Insights
on Service System Design from a Normal Approximation to Erlang's
Delay Formula", POM, V7, No3, Fall 1998, pp282-293
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
approximate probability of delay in queue
"""
load = arr_rate / svc_rate
    prob_wait = 1.0 - stats.norm.cdf((c - load - 0.5) / np.sqrt(load))
return prob_wait
def mgc_prob_wait_erlangc(arr_rate, svc_rate, c):
"""
Return the approximate probability of waiting in M/G/c/inf queue using Erlang-C as approximation.
    It's well known that the Erlang-C formula, P(W>0) in M/M/c, is a good approximation for
    P(W>0) in M/G/c. See, for example, Tijms (1994) on p296 or Whitt (1993) "Approximations
    for the GI/G/m queue", Production and Operations Management, 2, 2.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
approximate probability of delay in queue
"""
load = arr_rate / svc_rate
prob_wait = erlangc(load, c)
return prob_wait
def mm1_qwait_cdf(t, arr_rate, svc_rate):
"""
Return P(Wq < t) in M/M/1/inf queue.
Parameters
----------
t : float
wait time of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
Returns
-------
float
probability wait time in queue is < t
"""
rho = arr_rate / svc_rate
term1 = rho
term2 = -svc_rate * (1 - rho) * t
prob_wq_lt_t = 1.0 - term1 * np.exp(term2)
return prob_wq_lt_t
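At t = 0 the M/M/1 queue-wait CDF reduces to 1 - rho, the probability an arrival does not wait at all. A minimal standalone check using math.exp:

```python
import math

def mm1_qwait_cdf(t, arr_rate, svc_rate):
    # P(Wq < t) = 1 - rho * exp(-mu * (1 - rho) * t)
    rho = arr_rate / svc_rate
    return 1.0 - rho * math.exp(-svc_rate * (1 - rho) * t)

assert abs(mm1_qwait_cdf(0.0, 0.8, 1.0) - 0.2) < 1e-12  # P(no wait) = 1 - rho
assert mm1_qwait_cdf(10.0, 0.8, 1.0) > 0.85             # long waits are rare
```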
def mmc_qwait_cdf(t, arr_rate, svc_rate, c):
"""
Return P(Wq < t) in M/M/c/inf queue.
Parameters
----------
t : float
wait time of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
probability wait time in queue is < t
"""
rho = arr_rate / (svc_rate * float(c))
term1 = rho / (1 - rho)
term2 = mmc_prob_n(c - 1, arr_rate, svc_rate, c)
term3 = -c * svc_rate * (1 - rho) * t
prob_wq_lt_t = 1.0 - term1 * term2 * np.exp(term3)
return prob_wq_lt_t
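Setting c = 1 in the expression above recovers the M/M/1 CDF, since term1 * term2 = (rho/(1 - rho)) * (1 - rho) = rho. A self-contained consistency sketch:

```python
import math

def mm1_qwait_cdf(t, arr_rate, svc_rate):
    rho = arr_rate / svc_rate
    return 1.0 - rho * math.exp(-svc_rate * (1 - rho) * t)

def mmc_qwait_cdf_c1(t, arr_rate, svc_rate):
    # c = 1: P(n = c - 1) = P(0) = 1 - rho, so term1 * term2 = rho
    rho = arr_rate / svc_rate
    term1 = rho / (1 - rho)
    term2 = 1 - rho
    return 1.0 - term1 * term2 * math.exp(-svc_rate * (1 - rho) * t)

for t in (0.0, 0.5, 2.0):
    assert abs(mmc_qwait_cdf_c1(t, 0.7, 1.0) - mm1_qwait_cdf(t, 0.7, 1.0)) < 1e-9
```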
def mmc_qwait_cdf_inv(t, prob, arr_rate, svc_rate):
"""
Return the number of servers such that probability of delay < t in M/M/c/inf system is
greater than specified prob
Parameters
----------
t : float
wait time threshold
prob : float
threshold delay probability
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
Returns
-------
c : int
number of servers
"""
c = math.ceil(arr_rate / svc_rate)
pwait_lt_t = mmc_qwait_cdf(t, arr_rate, svc_rate, c)
if pwait_lt_t >= prob:
return c
else:
while pwait_lt_t < prob:
c += 1
pwait_lt_t = mmc_qwait_cdf(t, arr_rate, svc_rate, c)
return c
def mm1_qwait_pctile(p, arr_rate, svc_rate):
"""
Return p'th percentile of P(Wq < t) in M/M/1/inf queue.
Parameters
----------
p : float
percentile of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
Returns
-------
float
t such that P(wait time in queue is < t) = p
"""
    # Use the mean service time as the initial guess for Newton's method
    init_guess = 1 / svc_rate
    waitq_pctile = scipy.optimize.newton(_mm1_waitq_pctile_wrap, init_guess, args=(p, arr_rate, svc_rate))
return waitq_pctile
def _mm1_waitq_pctile_wrap(t, p, arr_rate, svc_rate):
return mm1_qwait_cdf(t, arr_rate, svc_rate) - p
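For M/M/1 the percentile also has a closed form, t_p = -ln((1 - p)/rho) / (mu * (1 - rho)), valid when p > 1 - rho, which gives a scipy-free cross-check of the Newton approach. The helper name below is illustrative:

```python
import math

def mm1_qwait_cdf(t, arr_rate, svc_rate):
    rho = arr_rate / svc_rate
    return 1.0 - rho * math.exp(-svc_rate * (1 - rho) * t)

def mm1_qwait_pctile_closed_form(p, arr_rate, svc_rate):
    # Solve 1 - rho * exp(-mu * (1 - rho) * t) = p for t (requires p > 1 - rho)
    rho = arr_rate / svc_rate
    return -math.log((1.0 - p) / rho) / (svc_rate * (1.0 - rho))

t90 = mm1_qwait_pctile_closed_form(0.9, 0.8, 1.0)
assert abs(mm1_qwait_cdf(t90, 0.8, 1.0) - 0.9) < 1e-9
```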
def mmc_qwait_pctile(p, arr_rate, svc_rate, c):
"""
Return p'th percentile of P(Wq < t) in M/M/c/inf queue.
Parameters
----------
p : float
percentile of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
t such that P(wait time in queue is < t) = p
"""
# For initial guess, we'll use percentile from similar M/M/1 system
init_guess = mm1_qwait_pctile(p, arr_rate, c * svc_rate)
    waitq_pctile = scipy.optimize.newton(_mmc_waitq_pctile_wrap, init_guess, args=(p, arr_rate, svc_rate, c))
return waitq_pctile
def _mmc_waitq_pctile_wrap(t, p, arr_rate, svc_rate, c):
return mmc_qwait_cdf(t, arr_rate, svc_rate, c) - p
def mdc_mean_qwait_cosmetatos(arr_rate, svc_rate, c):
"""
Return the approximate mean queue wait in M/D/c/inf queue using Cosmetatos approximation.
See Cosmetatos, George P. "Approximate explicit formulae for the average queueing time in the processes (M/D/r)
and (D/M/r)." Infor 13.3 (1975): 328-331.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
mean number of customers in queue
"""
rho = arr_rate / (svc_rate * float(c))
term1 = 0.5
term2 = (c - 1) * (np.sqrt(4 + 5 * c) - 2) / (16 * c)
term3 = (1 - rho) / rho
term4 = mmc_mean_qwait(arr_rate, svc_rate, c)
mean_qwait = term1 * (1 + term2 * term3) * term4
return mean_qwait
def mdc_mean_qsize_cosmetatos(arr_rate, svc_rate, c):
"""
Return the approximate mean queue size in M/D/c/inf queue using Cosmetatos approximation.
See Cosmetatos, George P. "Approximate explicit formulae for the average queueing time in the processes (M/D/r)
and (D/M/r)." Infor 13.3 (1975): 328-331.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
mean number of customers in queue
"""
mean_qwait = mdc_mean_qwait_cosmetatos(arr_rate, svc_rate, c)
mean_qsize = mean_qwait * arr_rate
return mean_qsize
def mgc_mean_qwait_kimura(arr_rate, svc_rate, c, cv2_svc_time):
"""
Return the approximate mean queue wait in M/G/c/inf queue using Kimura approximation.
See Kimura, Toshikazu. "Approximations for multi-server queues: system interpolations."
Queueing Systems 17.3-4 (1994): 347-382.
It's based on interpolation between an M/D/c and a M/M/c queueing system.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
cv2_svc_time : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean wait time in queue
"""
term1 = 1.0 + cv2_svc_time
term2 = 2.0 * cv2_svc_time / mmc_mean_qwait(arr_rate, svc_rate, c)
term3 = (1.0 - cv2_svc_time) / mdc_mean_qwait_cosmetatos(arr_rate, svc_rate, c)
mean_qwait = term1 / (term2 + term3)
return mean_qwait
def mgc_mean_qsize_kimura(arr_rate, svc_rate, c, cv2_svc_time):
"""
Return the approximate mean queue size in M/G/c/inf queue using Kimura approximation.
See Kimura, Toshikazu. "Approximations for multi-server queues: system interpolations."
Queueing Systems 17.3-4 (1994): 347-382.
It's based on interpolation between an M/D/c and a M/M/c queueing system.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
cv2_svc_time : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean number of customers in queue
"""
mean_qwait = mgc_mean_qwait_kimura(arr_rate, svc_rate, c, cv2_svc_time)
mean_qsize = mean_qwait * arr_rate
return mean_qsize
def mgc_qwait_cdf_whitt(t, arr_rate, svc_rate, c, cs2):
"""
Return the approximate P(Wq <= t) in M/G/c/inf queue using Whitt's G/C/c approximation.
Comparison of Whitt's approximation with the van Hoorn and Tijms M/G/c specific approximation suggests that using
Whitt's is sufficiently accurate and much easier in that we don't have to numerically integrate
excess service time distributions.
Whitt, Ward. "Approximations for the GI/G/m queue" Production and Operations Management 2, 2
(Spring 1993): 114-161.
van Hoorn, Michiel Harpert, and Hendrik Cornelis Tijms. "Approximations for the waiting time
distribution of the M/G/c queue." Performance Evaluation 2.1 (1982): 22-28.
Parameters
----------
t : float
wait time of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
~ P(Wq <= t)
"""
pwait_lt_t = ggm_qwait_cdf_whitt(t, arr_rate, svc_rate, c, 1.0, cs2)
return pwait_lt_t
def mgc_mean_qwait_bjorklund(arr_rate, svc_rate, c, cv2_svc_time):
"""
Return the approximate mean queue wait in M/G/c/inf queue using Bjorklund and Elldin approximation.
See Kimura, Toshikazu. "Approximations for multi-server queues: system interpolations."
Queueing Systems 17.3-4 (1994): 347-382.
It's based on interpolation between an M/D/c and a M/M/c queueing system.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
cv2_svc_time : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean number of customers in queue
"""
term1 = cv2_svc_time * mmc_mean_qwait(arr_rate, svc_rate, c)
term2 = (1.0 - cv2_svc_time) * mdc_mean_qwait_cosmetatos(arr_rate, svc_rate, c)
mean_qwait = term1 + term2
return mean_qwait
def mgc_mean_qsize_bjorklund(arr_rate, svc_rate, c, cv2_svc_time):
"""
Return the approximate mean queue size in M/G/c/inf queue using Bjorklund and Elldin approximation.
See Kimura, Toshikazu. "Approximations for multi-server queues: system interpolations."
Queueing Systems 17.3-4 (1994): 347-382.
It's based on interpolation between an M/D/c and a M/M/c queueing system.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
cv2_svc_time : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean number of customers in queue
"""
mean_qwait = mgc_mean_qwait_bjorklund(arr_rate, svc_rate, c, cv2_svc_time)
mean_qsize = mean_qwait * arr_rate
return mean_qsize
def mgc_qcondwait_pctile_firstorder_2moment(prob, arr_rate, svc_rate, c, cv2_svc_time):
"""
Return an approximate conditional queue wait percentile in M/G/c/inf system.
The approximation is based on a first order approximation using the M/M/c delay percentile.
See Tijms, H.C. (1994), "Stochastic Models: An Algorithmic Approach", John Wiley and Sons, Chichester
Chapter 4, p299-300
The percentile is conditional on Wq>0 (i.e. on event customer waits)
This 1st order approximation is OK for 0<=CVSquared<=2 and prob>1-Prob(Delay)
Note that for Prob(Delay) we use MMC as approximation for same quantity in MGC.
Justification in Tijms (p296)
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
cv2_svc_time : float
squared coefficient of variation for service time distribution
Returns
-------
float
t such that P(wait time in queue is < t | wait time in queue is > 0) = prob
"""
load = arr_rate / svc_rate
    # Compute corresponding prob for unconditional wait (see p274 of Tijms)
equivalent_uncond_prob = 1.0 - (1.0 - prob) * erlangc(load, c)
# Compute conditional wait time percentile for M/M/c system to use in approximation
condwaitq_pctile_mmc = mmc_qwait_pctile(equivalent_uncond_prob, arr_rate, svc_rate, c)
# First order approximation for conditional wait time in queue
condwaitq_pctile = 0.5 * (1.0 + cv2_svc_time) * condwaitq_pctile_mmc
return condwaitq_pctile
def mgc_qcondwait_pctile_secondorder_2moment(prob, arr_rate, svc_rate, c, cv2_svc_time):
"""
Return an approximate conditional queue wait percentile in M/G/c/inf system.
The approximation is based on a second order approximation using the M/M/c delay percentile.
See Tijms, H.C. (1994), "Stochastic Models: An Algorithmic Approach", John Wiley and Sons, Chichester
Chapter 4, p299-300
The percentile is conditional on Wq>0 (i.e. on event customer waits)
This approximation is based on interpolation between corresponding M/M/c and M/D/c systems.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
cv2_svc_time : float
squared coefficient of variation for service time distribution
Returns
-------
float
t such that P(wait time in queue is < t | wait time in queue is > 0) = prob
"""
load = arr_rate / svc_rate
    # Compute corresponding prob for unconditional wait (see p274 of Tijms)
equivalent_uncond_prob = 1.0 - (1.0 - prob) * erlangc(load, c)
# Compute conditional wait time percentile for M/M/c system to use in approximation
condwaitq_pctile_mmc = mmc_qwait_pctile(equivalent_uncond_prob, arr_rate, svc_rate, c)
# Compute conditional wait time percentile for M/D/c system to use in approximation
# TODO: implement mdc_qwait_pctile
condqwait_pctile_mdc = mdc_waitq_pctile(equivalent_uncond_prob, arr_rate, svc_rate, c)
# Second order approximation for conditional wait time in queue
condwaitq_pctile = (1.0 - cv2_svc_time) * condqwait_pctile_mdc + cv2_svc_time * condwaitq_pctile_mmc
return condwaitq_pctile
def mg1_mean_qsize(arr_rate, svc_rate, cv2_svc_time):
"""
Return the mean queue size in M/G/1/inf queue using P-K formula.
See any decent queueing book.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
cv2_svc_time : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean number of customers in queue
"""
rho = arr_rate / svc_rate
    # P-K formula: Lq = rho^2 * (1 + cs^2) / (2 * (1 - rho))
    mean_qsize = (rho ** 2) * (1 + cv2_svc_time) / (2 * (1.0 - rho))
return mean_qsize
def mg1_mean_qwait(arr_rate, svc_rate, cs2):
"""
Return the mean queue wait in M/G/1/inf queue using P-K formula along with Little's Law.
See any decent queueing book.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean wait time in queue
"""
mean_qsize = mg1_mean_qsize(arr_rate, svc_rate, cs2)
mean_qwait = mean_qsize / arr_rate
return mean_qwait
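The P-K result can be sanity-checked against its special cases: exponential service (cs2 = 1) recovers the M/M/1 queue length, and deterministic service (cs2 = 0) halves it. A standalone sketch of Lq = rho^2 * (1 + cs2) / (2 * (1 - rho)):

```python
def pk_mean_qsize(arr_rate, svc_rate, cs2):
    # Pollaczek-Khinchine mean queue length for M/G/1
    rho = arr_rate / svc_rate
    return rho ** 2 * (1.0 + cs2) / (2.0 * (1.0 - rho))

rho = 0.9
mm1_lq = rho ** 2 / (1 - rho)
assert abs(pk_mean_qsize(0.9, 1.0, 1.0) - mm1_lq) < 1e-12        # M/M/1
assert abs(pk_mean_qsize(0.9, 1.0, 0.0) - 0.5 * mm1_lq) < 1e-12  # M/D/1
```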
def gamma_0(m, rho):
"""
See p124 immediately after Eq 2.16.
:param m: int
number of servers
:param rho: float
lambda / (mu * m)
:return: float
"""
term1 = 0.24
term2 = (1 - rho) * (m - 1) * (math.sqrt(4 + 5 * m) - 2 ) / (16 * m * rho)
return min(term1, term2)
def _ggm_mean_qwait_whitt_phi_1(m, rho):
"""
See p124 immediately after Eq 2.16.
:param m: int
number of servers
:param rho: float
lambda / (mu * m)
:return: float
"""
return 1.0 + gamma_0(m, rho)
def _ggm_mean_qwait_whitt_phi_2(m, rho):
"""
See p124 immediately after Eq 2.18.
:param m: int
number of servers
:param rho: float
lambda / (mu * m)
:return: float
"""
return 1.0 - 4.0 * gamma_0(m, rho)
def _ggm_mean_qwait_whitt_phi_3(m, rho):
"""
See p124 immediately after Eq 2.20.
:param m: int
number of servers
:param rho: float
lambda / (mu * m)
:return: float
"""
term1 = _ggm_mean_qwait_whitt_phi_2(m, rho)
term2 = math.exp(-2.0 * (1 - rho) / (3.0 * rho))
return term1 * term2
def _ggm_mean_qwait_whitt_phi_4(m, rho):
"""
See p125 , Eq 2.21.
:param m: int
number of servers
:param rho: float
lambda / (mu * m)
:return: float
"""
term1 = 1.0
term2 = 0.5 * (_ggm_mean_qwait_whitt_phi_1(m, rho) + _ggm_mean_qwait_whitt_phi_3(m, rho))
return min(term1, term2)
def _ggm_mean_qwait_whitt_psi_0(c2, m, rho):
"""
See p125 , Eq 2.22.
:param c2: float
common squared CV for both arrival and service process
:param m: int
number of servers
:param rho: float
lambda / (mu * m)
:return: float
"""
if c2 >= 1:
return 1.0
else:
return _ggm_mean_qwait_whitt_phi_4(m, rho) ** (2 * (1 - c2))
def _ggm_mean_qwait_whitt_phi_0(rho, ca2, cs2, m):
"""
See p125 , Eq 2.25.
:param rho: float
lambda / (mu * m)
:param ca2: float
squared CV for arrival process
:param cs2: float
squared CV for service process
:param m: int
number of servers
:return: float
"""
if ca2 >= cs2:
term1 = _ggm_mean_qwait_whitt_phi_1(m, rho) * (4 * (ca2 - cs2) / (4 * ca2 - 3 * cs2))
term2 = (cs2 / (4 * ca2 - 3 * cs2)) * _ggm_mean_qwait_whitt_psi_0((ca2 + cs2) / 2.0, m, rho)
return term1 + term2
else:
        term1 = _ggm_mean_qwait_whitt_phi_3(m, rho) * ((cs2 - ca2) / (2 * ca2 + 2 * cs2))
        term2 = (cs2 + 3 * ca2) / (2 * ca2 + 2 * cs2)
        term3 = _ggm_mean_qwait_whitt_psi_0((ca2 + cs2) / 2.0, m, rho)
        return term1 + term2 * term3
def ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate mean queue wait in GI/G/c/inf queue using Whitt's 1993 approximation.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
It's based on interpolations with corrections between an M/D/c, D/M/c and a M/M/c queueing systems.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
    m : int
        number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean wait time in queue
"""
rho = arr_rate / (svc_rate * float(m))
if rho >= 1.0:
raise ValueError("rho must be less than 1.0")
# Now implement Eq 2.24 on p 125
# Hack - for some reason I can't get this approximation to match Table 2 in the above
# reference for the case of D/M/m. However, if I use Eq 2.20 (specific for the D/M/m case),
# I do match the expected results. So, for now, I'll trap for this case.
if ca2 == 0 and cs2 == 1:
qwait = dmm_mean_qwait_whitt(arr_rate, svc_rate, m)
else:
term1 = _ggm_mean_qwait_whitt_phi_0(rho, ca2, cs2, m)
term2 = 0.5 * (ca2 + cs2)
term3 = mmc_mean_qwait(arr_rate, svc_rate, m)
qwait = term1 * term2 * term3
return qwait
def ggm_prob_wait_whitt(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate P(Wq > 0) in GI/G/c/inf queue using Whitt's 1993 approximation.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
It's based on interpolations with corrections between an M/D/c, D/M/c and a M/M/c queueing systems.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
approximate probability of delay, P(Wq > 0)
"""
rho = arr_rate / (svc_rate * float(m))
# For ca2 = 1 (e.g. Poisson arrivals), Whitt uses the fact that Erlang-C works well for M/G/c
if ca2 == 1:
pwait = mgc_prob_wait_erlangc(arr_rate, svc_rate, m)
else:
pi = _ggm_prob_wait_whitt_pi(m, rho, ca2, cs2)
pwait = min(pi, 1)
return pwait
def _ggm_prob_wait_whitt_z(ca2, cs2):
"""
Equation 3.8 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
approximation for intermediate term z (see Eq 3.6)
"""
z = (ca2 + cs2) / (1.0 + cs2)
return z
def _ggm_prob_wait_whitt_gamma(m, rho, z):
"""
Equation 3.5 on p136 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
z : float
intermediate term approximated in Eq 3.8
Returns
-------
float
intermediate term gamma (see Eq 3.5)
"""
term1 = m - m * rho - 0.5
term2 = np.sqrt(m * rho * z)
gamma = term1 / term2
return gamma
def _ggm_prob_wait_whitt_pi_6(m, rho, z):
"""
Part of Equation 3.11 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
z : float
intermediate term approximated in Eq 3.8
Returns
-------
float
intermediate term pi_6 (see Eq 3.11)
"""
pi_6 = 1.0 - stats.norm.cdf((m - m * rho - 0.5) / np.sqrt(m * rho * z))
return pi_6
def _ggm_prob_wait_whitt_pi_5(m, rho, ca2, cs2):
"""
Part of Equation 3.11 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
intermediate term pi_5 (see Eq 3.11)
"""
term1 = 2.0 * (1.0 - rho) * np.sqrt(m) / (1.0 + ca2)
term2 = (1.0 - rho) * np.sqrt(m)
term3 = erlangc(rho * m, m) * (1.0 - stats.norm.cdf(term1)) / (1.0 - stats.norm.cdf(term2))
pi_5 = min(1.0, term3)
return pi_5
def _ggm_prob_wait_whitt_pi_4(m, rho, ca2, cs2):
"""
Part of Equation 3.11 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
intermediate term pi_4 (see Eq 3.11)
"""
term1 = (1.0 + cs2) * (1.0 - rho) * np.sqrt(m) / (ca2 + cs2)
term2 = (1.0 - rho) * np.sqrt(m)
term3 = erlangc(rho * m, m) * (1.0 - stats.norm.cdf(term1)) / (1.0 - stats.norm.cdf(term2))
pi_4 = min(1.0, term3)
return pi_4
def _ggm_prob_wait_whitt_pi_1(m, rho, ca2, cs2):
"""
Part of Equation 3.11 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
intermediate term pi_1 (see Eq 3.11)
"""
pi_4 = _ggm_prob_wait_whitt_pi_4(m, rho, ca2, cs2)
pi_5 = _ggm_prob_wait_whitt_pi_5(m, rho, ca2, cs2)
pi_1 = (rho ** 2) * pi_4 + (1.0 - rho ** 2) * pi_5
return pi_1
def _ggm_prob_wait_whitt_pi_2(m, rho, ca2, cs2):
"""
Part of Equation 3.11 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
intermediate term pi_2 (see Eq 3.11)
"""
pi_1 = _ggm_prob_wait_whitt_pi_1(m, rho, ca2, cs2)
z = _ggm_prob_wait_whitt_z(ca2, cs2)
pi_6 = _ggm_prob_wait_whitt_pi_6(m, rho, z)
pi_2 = ca2 * pi_1 + (1.0 - ca2) * pi_6
return pi_2
def _ggm_prob_wait_whitt_pi_3(m, rho, ca2, cs2):
"""
Part of Equation 3.11 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
intermediate term pi_3 (see Eq 3.11)
"""
z = _ggm_prob_wait_whitt_z(ca2, cs2)
gamma = _ggm_prob_wait_whitt_gamma(m, rho, z)
pi_2 = _ggm_prob_wait_whitt_pi_2(m, rho, ca2, cs2)
pi_1 = _ggm_prob_wait_whitt_pi_1(m, rho, ca2, cs2)
term1 = 2.0 * (1.0 - ca2) * (gamma - 0.5)
term2 = 1.0 - term1
pi_3 = term1 * pi_2 + term2 * pi_1
return pi_3
def _ggm_prob_wait_whitt_pi(m, rho, ca2, cs2):
"""
Equation 3.10 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
approximate P(Wq > 0), pi (see Eq 3.10)
"""
z = _ggm_prob_wait_whitt_z(ca2, cs2)
gamma = _ggm_prob_wait_whitt_gamma(m, rho, z)
if m <= 6 or gamma <= 0.5 or ca2 >= 1:
pi = _ggm_prob_wait_whitt_pi_1(m, rho, ca2, cs2)
elif m >= 7 and gamma >= 1.0 and ca2 < 1:
pi = _ggm_prob_wait_whitt_pi_2(m, rho, ca2, cs2)
else:
pi = _ggm_prob_wait_whitt_pi_3(m, rho, ca2, cs2)
return pi
def _ggm_prob_wait_whitt_whichpi(m, rho, ca2, cs2):
"""
Equation 3.10 on p139 of Whitt (1993). Used in approximation for P(Wq > 0) in GI/G/c/inf queue.
Primarily used for debugging and validation of the approximation implementation.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
m : int
number of servers
rho : float
traffic intensity; arr_rate / (svc_rate * m)
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
int
the pi case used in the approximation (1, 2, or 3)
"""
z = _ggm_prob_wait_whitt_z(ca2, cs2)
gamma = _ggm_prob_wait_whitt_gamma(m, rho, z)
if m <= 6 or gamma <= 0.5 or ca2 >= 1:
whichpi = 1
elif m >= 7 and gamma >= 1.0 and ca2 < 1:
whichpi = 2
else:
whichpi = 3
return whichpi
def _ggm_qcondwait_whitt_ds3(cs2):
"""
Return the approximate E(V^3)/(EV)^2 where V is a service time; based on either a hyperexponential
or Erlang distribution. Used in approximation of conditional wait time CDF (conditional on W>0).
Whitt refers to conditional wait as D in his paper:
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
This is Equation 4.3 on p146. Note that there is a typo in the original paper in which the first term
for Case 1 is shown as cubed, whereas it should be squared. This can be confirmed by seeing Eq 51 in
Whitt's paper on the QNA (Bell Systems Technical Journal, Nov 1983).
Parameters
----------
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
approximate E(V^3)/(EV)^2 for the service time distribution
"""
if cs2 >= 1:
ds3 = 3.0 * cs2 * (1.0 + cs2)
else:
ds3 = (2 * cs2 + 1.0) * (cs2 + 1.0)
return ds3
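# The two branches of Eq 4.3 meet at cs2 = 1: a unit-mean exponential has
# E(V^3) = 3! = 6, and both formulas reproduce it. A tiny standalone check:

```python
def ds3_sketch(cs2):
    # Eq 4.3: hyperexponential-based for cs2 >= 1, Erlang-based for cs2 < 1
    if cs2 >= 1:
        return 3.0 * cs2 * (1.0 + cs2)
    return (2.0 * cs2 + 1.0) * (cs2 + 1.0)

# Both branches give 6.0 at cs2 = 1 (exponential service with unit mean)
```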
def ggm_qcondwait_whitt_cd2(rho, cs2):
"""
Return the approximate squared coefficient of conditional wait time (aka delay) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
This is Equation 4.2 on p145.
Parameters
----------
rho : float
traffic intensity; arr_rate / (svc_rate * m)
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
squared coefficient of variation of conditional wait time (delay)
"""
term1 = 2 * rho - 1.0
term2 = 4 * (1.0 - rho) * _ggm_qcondwait_whitt_ds3(cs2)
term3 = 3.0 * (cs2 + 1.0) ** 2
cd2 = term1 + term2 / term3
return cd2
def ggm_qwait_whitt_cw2(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate squared coefficient of wait time in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
scv of wait time in queue
"""
rho = arr_rate / (svc_rate * float(m))
pwait = ggm_prob_wait_whitt(arr_rate, svc_rate, m, ca2, cs2)
cd2 = ggm_qcondwait_whitt_cd2(rho, cs2)
cw2 = (cd2 + 1 - pwait) / pwait
return cw2
def ggm_qcondwait_whitt_ed(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate mean conditional wait time (aka delay) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean conditional wait time in queue
"""
pwait = ggm_prob_wait_whitt(arr_rate, svc_rate, m, ca2, cs2)
meanwait = ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2) / pwait
return meanwait
def ggm_qcondwait_whitt_vard(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate variance of conditional wait time (aka delay) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
variance of conditional wait time in queue
"""
rho = arr_rate / (svc_rate * float(m))
pwait = ggm_prob_wait_whitt(arr_rate, svc_rate, m, ca2, cs2)
cd2 = ggm_qcondwait_whitt_cd2(rho, cs2)
meanwait = ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2)
vard = (meanwait ** 2) * cd2 / (pwait ** 2)
return vard
def ggm_qcondwait_whitt_ed2(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate 2nd moment of conditional wait time (aka delay) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
2nd moment of conditional wait time in queue
"""
pwait = ggm_prob_wait_whitt(arr_rate, svc_rate, m, ca2, cs2)
vard = ggm_qcondwait_whitt_vard(arr_rate, svc_rate, m, ca2, cs2)
meanwait = ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2)
# Compute conditional wait
meandelay = meanwait / pwait
ed2 = vard + meandelay ** 2
return ed2
def ggm_qwait_whitt_varw(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate variance of wait time (aka delay) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
variance of conditional wait time in queue
"""
cw2 = ggm_qwait_whitt_cw2(arr_rate, svc_rate, m, ca2, cs2)
meanwait = ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2)
varw = (meanwait ** 2) * cw2
return varw
def ggm_qwait_whitt_ew2(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate 2nd moment of wait time in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
2nd moment of wait time in queue
"""
varw = ggm_qwait_whitt_varw(arr_rate, svc_rate, m, ca2, cs2)
meanwait = ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2)
ew2 = varw + meanwait ** 2
return ew2
def ggm_mean_sojourn_whitt(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate mean sojourn time (wait + service) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean sojourn time (wait + service)
"""
meanwait = ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2)
sojourn = meanwait + 1.0 / svc_rate
return sojourn
def ggm_sojourn_whitt_var(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate variance of sojourn time (wait + service) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
variance of sojourn time
"""
varwait = ggm_qwait_whitt_varw(arr_rate, svc_rate, m, ca2, cs2)
sojourn = varwait + cs2 * (1.0 / svc_rate) ** 2
return sojourn
def ggm_sojourn_whitt_et2(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate 2nd moment of sojourn time (wait + service) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
2nd moment of sojourn time
"""
varsojourn = ggm_sojourn_whitt_var(arr_rate, svc_rate, m, ca2, cs2)
meansojourn = ggm_mean_sojourn_whitt(arr_rate, svc_rate, m, ca2, cs2)
et2 = varsojourn + meansojourn ** 2
return et2
def ggm_sojourn_whitt_cv2(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate scv of sojourn time (wait + service) in G/G/m queue
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
scv of sojourn time
"""
varsojourn = ggm_sojourn_whitt_var(arr_rate, svc_rate, m, ca2, cs2)
meansojourn = ggm_mean_sojourn_whitt(arr_rate, svc_rate, m, ca2, cs2)
cv2 = varsojourn / meansojourn ** 2
return cv2
def ggm_mean_qsize_whitt(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate mean queue size in GI/G/c/inf queue using Whitt's 1993 approximation and Little's Law.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
It is based on interpolations, with corrections, among the M/D/c, D/M/c, and M/M/c queueing systems.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean number in queue
"""
# Use Eq 2.24 on p 125 to compute mean wait time in queue
qwait = ggm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2, cs2)
# Now use Little's Law
return qwait * arr_rate
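# Little's Law (Lq = lambda * Wq), as applied above, can be sanity-checked against the
# closed-form M/M/1 results, where both sides are known exactly (standalone sketch;
# the numbers are illustrative):

```python
arr_rate, svc_rate = 0.8, 1.0
rho = arr_rate / svc_rate
wq = rho / (svc_rate - arr_rate)     # M/M/1 mean queue wait = 4.0
lq = rho ** 2 / (1.0 - rho)          # M/M/1 mean queue length = 3.2
# Little's Law: lq equals arr_rate * wq
```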
def ggm_mean_syssize_whitt(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate mean system size in GI/G/c/inf queue using Whitt's 1993 approximation and Little's Law.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
It is based on interpolations, with corrections, among the M/D/c, D/M/c, and M/M/c queueing systems.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
mean number in system
"""
# Compute the mean sojourn time (wait + service)
mean_sojourn = ggm_mean_sojourn_whitt(arr_rate, svc_rate, m, ca2, cs2)
# Now use Little's Law
return mean_sojourn * arr_rate
def dmm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2=0.0, cs2=1.0):
"""
Return the approximate mean queue wait in D/M/m/inf queue using Whitt's 1993 approximation.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161. Specifically, this approximation
is Eq 2.20 on p124.
This, along with mdm_mean_qwait_whitt are refinements of the Cosmetatos approximations.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution (0 for D)
cs2 : float
squared coefficient of variation for service time distribution (1 for M)
Returns
-------
float
mean wait time in queue
"""
rho = arr_rate / (svc_rate * float(m))
# Now implement Eq 2.20 on p 124
term1 = _ggm_mean_qwait_whitt_phi_3(m, rho)
term2 = 0.5 * (ca2 + cs2)
term3 = mmc_mean_qwait(arr_rate, svc_rate, m)
return term1 * term2 * term3
def mdm_mean_qwait_whitt(arr_rate, svc_rate, m, ca2=1.0, cs2=0.0):
"""
Return the approximate mean queue wait in M/D/m/inf queue using Whitt's 1993 approximation.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161. Specifically, this approximation
is Eq 2.16 on p124.
This, along with dmm_mean_qwait_whitt are refinements of the Cosmetatos approximations.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution (1 for M)
cs2 : float
squared coefficient of variation for service time distribution (0 for D)
Returns
-------
float
mean wait time in queue
"""
rho = arr_rate / (svc_rate * float(m))
# Now implement Eq 2.16 on p 124
term1 = _ggm_mean_qwait_whitt_phi_1(m, rho)
term2 = 0.5 * (ca2 + cs2)
term3 = mmc_mean_qwait(arr_rate, svc_rate, m)
return term1 * term2 * term3
def fit_balanced_hyperexpon2(mean, cs2):
"""
Return the branching probability and rates for a balanced H2 distribution based
on a specified mean and scv. Intended for scv's > 1.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Parameters
----------
mean : float
mean of the desired distribution
cs2 : float
squared coefficient of variation for desired distribution
Returns
-------
tuple (float p, float rate1, float rate2)
branching probability and exponential rates
"""
p1 = 0.5 * (1 + np.sqrt((cs2-1) / (cs2+1)))
p2 = 1 - p1
mu1 = 2 * p1 / mean
mu2 = 2 * p2 / mean
return (p1, mu1, mu2)
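# The fit can be validated by recomputing the first two moments of the returned
# mixture; for the balanced H2 the target mean and scv are recovered exactly
# (standalone sketch mirroring the formulas above):

```python
import math

def fit_balanced_h2(mean, scv):
    # Balanced-means two-branch hyperexponential (requires scv > 1)
    p1 = 0.5 * (1.0 + math.sqrt((scv - 1.0) / (scv + 1.0)))
    return p1, 2.0 * p1 / mean, 2.0 * (1.0 - p1) / mean

p1, mu1, mu2 = fit_balanced_h2(2.0, 4.0)
p2 = 1.0 - p1
m1 = p1 / mu1 + p2 / mu2                    # first moment -> 2.0
m2 = 2.0 * (p1 / mu1 ** 2 + p2 / mu2 ** 2)  # second raw moment
scv_back = m2 / m1 ** 2 - 1.0               # -> 4.0
```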
def hyperexpon_cdf(x, probs, rates):
"""
Return P(X < x) where X is hyperexponential with branching probabilities and exponential rates
in lists probs and rates.
Parameters
----------
probs : list of floats
branching probabilities for hyperexponential
rates : list of floats
exponential rates
Returns
-------
float
P(X < x) where X ~ hyperexponential(probs, rates)
"""
sumproduct = sum([p * np.exp(-r * x) for (p, r) in zip(probs, rates)])
prob_lt_x = 1.0 - sumproduct
return prob_lt_x
def ggm_qcondwait_cdf_whitt(t, arr_rate, svc_rate, c, ca2, cs2):
"""
Return the approximate P(D <= t) where D = (W|W>0) in G/G/m queue using Whitt's two moment
approximation.
See Section 4 of Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
It's based on an approach he originally used for G/G/1 queues in QNA. There are different
cases based on the value of an approximation for the scv of D.
Parameters
----------
t : float
wait time of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
~ P(D <= t), where D = (W | W > 0)
"""
rho = arr_rate / (svc_rate * float(c))
ed = ggm_mean_qwait_whitt(arr_rate, svc_rate, c, ca2, cs2) / ggm_prob_wait_whitt(arr_rate, svc_rate, c, ca2, cs2)
cd2 = ggm_qcondwait_whitt_cd2(rho, cs2)
if cd2 > 1.01:
# Hyperexponential approx
p1, gamma1, gamma2 = fit_balanced_hyperexpon2(ed, cd2)
p2 = 1.0 - p1
prob_wait_ltx = hyperexpon_cdf(t, [p1,p2], [gamma1, gamma2])
elif 0.99 <= cd2 <= 1.01:
# Exponential approx
prob_wait_ltx = stats.expon.cdf(t, scale=ed)
elif 0.501 <= cd2 < 0.99:
# Convolution of two exponentials approx
vard = ggm_qcondwait_whitt_vard(arr_rate, svc_rate, c, ca2, cs2)
gamma2 = 2.0 / (ed + np.sqrt(2 * vard - ed ** 2))
gamma1 = 1.0 / (ed - 1.0 / gamma2)
prob_wait_gtx = (gamma1 * np.exp(-gamma2 * t) - gamma2 * np.exp(-gamma1 * t)) / (gamma1 - gamma2)
prob_wait_ltx = 1.0 - prob_wait_gtx
else:
# Erlang approx
gamma1 = 2.0 / ed
prob_wait_gtx = np.exp(-gamma1 * t) * (1.0 + gamma1 * t)
prob_wait_ltx = 1.0 - prob_wait_gtx
return prob_wait_ltx
def ggm_qwait_cdf_whitt(t, arr_rate, svc_rate, c, ca2, cs2):
"""
Return the approximate P(W <= t) in G/G/m queue using Whitt's two moment
approximation for conditional wait and the P(W>0).
See Section 4 of Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
See ggm_qcondwait_cdf_whitt for more details.
Parameters
----------
t : float
wait time of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
~ P(W <= t)
"""
qcondwait = ggm_qcondwait_cdf_whitt(t, arr_rate, svc_rate, c, ca2, cs2)
pdelay = ggm_prob_wait_whitt(arr_rate, svc_rate, c, ca2, cs2)
qwait = qcondwait * pdelay + (1.0 - pdelay)
return qwait
def ggm_qwait_pctile_whitt(p, arr_rate, svc_rate, c, ca2, cs2):
"""
Return approx p'th percentile of P(Wq < t) in G/G/c/inf queue using Whitt's two moment
approximation for the wait time CDF.
Parameters
----------
p : float
percentile of interest
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
Returns
-------
float
t such that P(wait time in queue is < t) = p
"""
# For initial guess, we'll use percentile from similar M/M/1 system
init_guess = mm1_qwait_pctile(p, arr_rate, c * svc_rate)
waitq_pctile = scipy.optimize.newton(_ggm_waitq_pctile_whitt_wrap, init_guess, args=(p, arr_rate, svc_rate, c, ca2, cs2))
return waitq_pctile
def _ggm_waitq_pctile_whitt_wrap(t, p, arr_rate, svc_rate, c, ca2, cs2):
return ggm_qwait_cdf_whitt(t, arr_rate, svc_rate, c, ca2, cs2) - p
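# scipy.optimize.newton above needs a good initial guess; a derivative-free
# alternative is to bracket and bisect the CDF directly. A standalone sketch
# (illustrative helper, not this module's API), shown inverting an exponential CDF:

```python
import math

def invert_cdf(cdf, p, lo=0.0, hi=1.0):
    # Grow the bracket until it contains the percentile, then bisect
    while cdf(hi) < p:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t90 = invert_cdf(lambda t: 1.0 - math.exp(-t), 0.9)  # ~ ln(10)
```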
def _ggm_qsize_prob_gt_0_whitt_5_2(arr_rate, svc_rate, c, ca2, cs2):
"""
Return the approximate P(Q>0) in G/G/m queue using Whitt's simple
approximation involving rho and P(W>0).
This approximation is exact for M/M/m and has strong theoretical
support for GI/M/m. It's described by Whitt as "crude" but is
"a useful quick approximation".
See Section 5 of Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161. In
particular, this is Equation 5.2.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
~ P(Q > 0)
"""
rho = arr_rate / (svc_rate * float(c))
pdelay = ggm_prob_wait_whitt(arr_rate, svc_rate, c, ca2, cs2)
prob_gt_0 = rho * pdelay
return prob_gt_0
def _ggm_qsize_prob_gt_0_whitt_5_1(arr_rate, svc_rate, c, ca2, cs2):
"""
Return the approximate P(Q>0) in G/G/m queue using Whitt's approximation
which is based on an exact expression for P(Q>0) given the CDFs
of an interarrival time and a waiting time.
This approximation is exact for M/M/m and has strong theoretical
support for GI/M/m - see Equation 5.1. It is preferred to the cruder
approximation given in Equation 5.2 (see ggm_qsize_prob_gt_0_whitt_5_2).
See Section 5 of Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161. In
particular, this is Equation 5.1.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
c : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
~ P(Q > 0)
"""
rho = arr_rate / (svc_rate * float(c))
pdelay = ggm_prob_wait_whitt(arr_rate, svc_rate, c, ca2, cs2)
# TODO - implement Equation 5.1 of Whitt (1993); rho and pdelay above are the needed inputs
return 0
def ggm_qsize_whitt_cq2(arr_rate, svc_rate, m, ca2, cs2):
"""
Return the approximate squared coefficient of queue size in G/G/m queue.
See Whitt, Ward. "Approximations for the GI/G/m queue"
Production and Operations Management 2, 2 (Spring 1993): 114-161.
Equation 5.6.
Parameters
----------
arr_rate : float
average arrival rate to queueing system
svc_rate : float
average service rate (each server). 1/svc_rate is mean service time.
m : int
number of servers
ca2 : float
squared coefficient of variation for inter-arrival time distribution
cs2 : float
squared coefficient of variation for service time distribution
Returns
-------
float
scv of number in queue
"""
eq = ggm_mean_qsize_whitt(arr_rate, svc_rate, m, ca2, cs2)
cw2 = ggm_qwait_whitt_cw2(arr_rate, svc_rate, m, ca2, cs2)
cq2 = (1/eq) + cw2
return cq2
def hyper_erlang_moment(rates, stages, probs, moment):
"""
Return the specified raw moment of a hyper-Erlang distribution.
Parameters
----------
rates : list of floats
rate parameter of each Erlang branch
stages : list of ints
number of stages (phases) in each Erlang branch
probs : list of floats
branching probabilities
moment : int
which raw moment to compute (1 = mean, 2 = 2nd moment, ...)
Returns
-------
float
the requested raw moment
"""
terms = [probs[i - 1] * math.factorial(stages[i - 1] + moment - 1) * (1 / math.factorial(stages[i - 1] - 1)) * (
stages[i - 1] * rates[i - 1]) ** (-moment)
for i in range(1, len(rates) + 1)]
return sum(terms)
| 26.487479 | 123 | 0.638204 | 9,394 | 63,464 | 4.17575 | 0.055035 | 0.042471 | 0.035435 | 0.049609 | 0.828231 | 0.808601 | 0.796569 | 0.784077 | 0.760726 | 0.739465 | 0 | 0.03994 | 0.270531 | 63,464 | 2,395 | 124 | 26.498539 | 0.807387 | 0.616838 | 0 | 0.304136 | 0 | 0 | 0.00164 | 0 | 0 | 0 | 0 | 0.000835 | 0 | 1 | 0.182482 | false | 0 | 0.009732 | 0.007299 | 0.384428 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
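# With a single branch of one stage, the hyper-Erlang collapses to an exponential,
# whose m-th raw moment is m!/rate**m; that gives a quick sanity check. Restated
# 0-based for clarity (equivalent to the comprehension above):

```python
import math

def hyper_erlang_moment_sketch(rates, stages, probs, moment):
    # m-th raw moment of a mixture of Erlang(k_i, k_i * rate_i) branches
    return sum(p * math.factorial(k + moment - 1) / math.factorial(k - 1)
               * (k * r) ** (-moment)
               for p, k, r in zip(probs, stages, rates))

m1 = hyper_erlang_moment_sketch([2.0], [1], [1.0], 1)  # 1/2 = 0.5
m2 = hyper_erlang_moment_sketch([2.0], [1], [1.0], 2)  # 2/2**2 = 0.5
```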
9b4655608ca44fa36d02abb4baccf91772c878ef | 35 | py | Python | db/__init__.py | JonKoala/diariobot-scraper | 8f1a44dc19d666139f5650ee64a8b14ab7c77b9a | [
"MIT"
] | 1 | 2018-04-23T16:39:22.000Z | 2018-04-23T16:39:22.000Z | db/__init__.py | JonKoala/diariobot-datamining | c97095a7906fa984f4373cfbcdbf4576137d8e2f | [
"MIT"
] | null | null | null | db/__init__.py | JonKoala/diariobot-datamining | c97095a7906fa984f4373cfbcdbf4576137d8e2f | [
"MIT"
] | null | null | null | from .interface import Dbinterface
# lib/data_allocation.py (stuckerc/ResDepth, MIT)
import numpy as np
import sys
from lib import fdutil, rasterutils
STRATEGIES = ['5-crossval_vertical', '5-crossval_horizontal']
def _verify_inputs(fn_raster_in, allocation_strategy, test_stripe, crossval_training):
"""
Verifies the inputs of the function allocate_data().
:param fn_raster_in: str, path to the GeoTiff raster file
:param allocation_strategy: str, allocation strategy (see parameter STRATEGIES)
:param test_stripe: int, index of the test stripe (or validation stripe if cross-validation is enabled)
:param crossval_training: bool, True if the raster is used for cross-validation
(split into training and validation regions only), False otherwise
"""
# Check that the input GeoTiff raster exists
if not fdutil.file_exists(fn_raster_in):
print('Input raster does not exist: {}'.format(fn_raster_in))
sys.exit(1)
    if not isinstance(test_stripe, int) or not 0 <= test_stripe <= 4:
        print("'test_stripe' must be an integer in the range [0,4].")
        sys.exit(1)
if allocation_strategy not in STRATEGIES:
print("{} as 'allocation_strategy' is not a valid choice. Choose among: {}.".format(allocation_strategy,
STRATEGIES))
sys.exit(1)
if not isinstance(crossval_training, bool):
print("'crossval_training' must be boolean.")
sys.exit(1)
def allocate_data(fn_raster_in, allocation_strategy, test_stripe=0, crossval_training=False):
"""
Splits a given raster into geographically separate stripes for training, validation, and testing.
Assumption: the validation stripe is located to the right/bottom (east/south) of the test stripe (cyclic order).
:param fn_raster_in: str, path to the GeoTiff raster file
:param allocation_strategy: str, allocation strategy (see parameter STRATEGIES)
:param test_stripe: int, index of the test stripe (or validation stripe if cross-validation is enabled)
:param crossval_training: bool, True if the raster is used for cross-validation
(split into training and validation regions only), False otherwise
:return: returns three dictionaries train, val, and test, where each dictionary defines
geographically rectangular regions. Each dictionary is composed of the following
key-value pairs:
x_extent: list of n tuples, where n denotes the number of rectangular regions
(stripes). Each tuple defines the upper-left and lower-right
x-coordinate of a rectangular region (stripe).
y_extent: list of n tuples, where n denotes the number of rectangular regions
(stripes). Each tuple defines the upper-left and lower-right
y-coordinate of a rectangular region (stripe).
             Assumption: The i-th tuple of x_extent and the i-th tuple of y_extent define a
             geographically rectangular region (stripe).
"""
# Check inputs
_verify_inputs(fn_raster_in, allocation_strategy, test_stripe, crossval_training)
if allocation_strategy == '5-crossval_vertical':
train, val, test = _allocate_5crossval_vertical(fn_raster_in, test_stripe, crossval_training)
elif allocation_strategy == '5-crossval_horizontal':
train, val, test = _allocate_5crossval_horizontal(fn_raster_in, test_stripe, crossval_training)
return train, val, test
def _allocate_5crossval_vertical(fn_raster_in, test_stripe, crossval_training):
"""
Splits the geographic area of fn_raster_in into five equally large and mutually exclusive vertical stripes
(north-south oriented) for training, validation, and testing (or training and validation only if cross-validation
is enabled). Assumption: the validation stripe is located to the right (east) of the test stripe (cyclic order).
:param fn_raster_in: str, path to the GeoTiff raster file
:param test_stripe: int, index of the test stripe (or validation stripe if cross-validation is enabled)
:param crossval_training: bool, True if the raster is used for cross-validation
(split into training and validation regions only), False otherwise
:return: returns three dictionaries train, val, and test, where each dictionary defines
geographically rectangular regions (vertically oriented stripes). Each dictionary is
composed of the following key-value pairs:
x_extent: list of n tuples, where n denotes the number of rectangular regions
(stripes). Each tuple defines the upper-left and lower-right
x-coordinate of a rectangular region (stripe).
y_extent: list of n tuples, where n denotes the number of rectangular regions
(stripes). Each tuple defines the upper-left and lower-right
y-coordinate of a rectangular region (stripe).
             Assumption: The i-th tuple of x_extent and the i-th tuple of y_extent define a
             geographically rectangular region (stripe).
"""
# Get the extent of the input raster
extent = rasterutils.get_raster_extent(fn_raster_in)
cols = extent['cols']
rows = extent['rows']
# Compute the width of the stripes
width = int(round(float(cols) * 0.2))
# Compute the extent in X-direction of the stripes
x_start = 0
x_extent = []
for i in range(5):
if i < 4:
x_end = x_start + width - 1
else:
x_end = cols - 1
x_extent.append((x_start, x_end))
x_start = x_end + 1
# Validation and test stripe: compute the extent in Y-direction
y_val = [(0, rows - 1)]
y_test = [(0, rows - 1)]
if crossval_training is False:
if test_stripe == 0:
# Stripe order: | test | val | train | train | train |
x_train = [(x_extent[2][0], x_extent[4][1])]
x_val = [x_extent[1]]
x_test = [x_extent[0]]
y_train = [(0, rows - 1)]
elif test_stripe == 1:
# Stripe order: | train | test | val | train | train |
x_train = [x_extent[0], (x_extent[3][0], x_extent[4][1])]
x_val = [x_extent[2]]
x_test = [x_extent[1]]
y_train = [(0, rows - 1), (0, rows - 1)]
elif test_stripe == 2:
# Stripe order: | train | train | test | val | train |
x_train = [(x_extent[0][0], x_extent[1][1]), x_extent[4]]
x_val = [x_extent[3]]
x_test = [x_extent[2]]
y_train = [(0, rows - 1), (0, rows - 1)]
elif test_stripe == 3:
# Stripe order: | train | train | train | test | val |
x_train = [(x_extent[0][0], x_extent[2][1])]
x_val = [x_extent[4]]
x_test = [x_extent[3]]
y_train = [(0, rows - 1)]
elif test_stripe == 4:
# Stripe order: | val | train | train | train | test |
x_train = [(x_extent[1][0], x_extent[3][1])]
x_val = [x_extent[0]]
x_test = [x_extent[4]]
y_train = [(0, rows - 1)]
test = {'x_extent': x_test, 'y_extent': y_test}
else:
if test_stripe == 0:
# Stripe order: | val | train | train | train | train |
x_train = [(x_extent[1][0], x_extent[4][1])]
x_val = [x_extent[0]]
y_train = [(0, rows - 1)]
elif test_stripe == 1:
# Stripe order: | train | val | train | train | train |
x_train = [x_extent[0], (x_extent[2][0], x_extent[4][1])]
x_val = [x_extent[1]]
y_train = [(0, rows - 1), (0, rows - 1)]
elif test_stripe == 2:
# Stripe order: | train | train | val | train | train |
x_train = [(x_extent[0][0], x_extent[1][1]), (x_extent[3][0], x_extent[4][1])]
x_val = [x_extent[2]]
y_train = [(0, rows - 1), (0, rows - 1)]
elif test_stripe == 3:
# Stripe order: | train | train | train | val | train |
x_train = [(x_extent[0][0], x_extent[2][1]), x_extent[4]]
x_val = [x_extent[3]]
y_train = [(0, rows - 1), (0, rows - 1)]
elif test_stripe == 4:
# Stripe order: | train | train | train | train | val |
x_train = [(x_extent[0][0], x_extent[3][1])]
x_val = [x_extent[4]]
y_train = [(0, rows - 1)]
test = {}
train = {'x_extent': x_train, 'y_extent': y_train}
val = {'x_extent': x_val, 'y_extent': y_val}
return train, val, test
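# The stripe bookkeeping above is compact enough to trace by hand. A minimal
# sketch, using a made-up raster width of 1003 columns instead of reading a
# GeoTiff, shows how the five x-extents are laid out and how the last stripe
# absorbs the rounding remainder:

```python
# Reproduce the x-extent computation from _allocate_5crossval_vertical
# for a hypothetical raster that is 1003 columns wide (no GeoTiff needed).
cols = 1003
width = int(round(float(cols) * 0.2))  # nominal stripe width: 201 columns

x_start = 0
x_extent = []
for i in range(5):
    # The fifth stripe runs to the raster edge, absorbing the rounding remainder.
    x_end = x_start + width - 1 if i < 4 else cols - 1
    x_extent.append((x_start, x_end))
    x_start = x_end + 1

print(x_extent)
# [(0, 200), (201, 401), (402, 602), (603, 803), (804, 1002)]
```

# With test_stripe=1 and crossval_training=False, stripe (201, 401) would become
# the test region and (402, 602) the validation region, matching the
# "| train | test | val | train | train |" comment above.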
def _allocate_5crossval_horizontal(fn_raster_in, test_stripe, crossval_training):
"""
Splits the geographic area of fn_raster_in into five equally large and mutually exclusive horizontal stripes
(west-east oriented) for training, validation, and testing (or training and validation only if cross-validation
is enabled). Assumption: the validation stripe is located to the bottom (south) of the test stripe (cyclic order).
:param fn_raster_in: str, path to the GeoTiff raster file
:param test_stripe: int, index of the test stripe (or validation stripe if cross-validation is enabled)
:param crossval_training: bool, True if the raster is used for cross-validation
(split into training and validation regions only), False otherwise
:return: returns three dictionaries train, val, and test, where each dictionary defines
geographically rectangular regions (horizontally oriented stripes). Each dictionary is
composed of the following key-value pairs:
x_extent: list of n tuples, where n denotes the number of rectangular regions
(stripes). Each tuple defines the upper-left and lower-right
x-coordinate of a rectangular region (stripe).
y_extent: list of n tuples, where n denotes the number of rectangular regions
(stripes). Each tuple defines the upper-left and lower-right
y-coordinate of a rectangular region (stripe).
             Assumption: The i-th tuple of x_extent and the i-th tuple of y_extent define a
             geographically rectangular region (stripe).
"""
# Get the extent of the input raster
extent = rasterutils.get_raster_extent(fn_raster_in)
cols = extent['cols']
rows = extent['rows']
# Compute the height of the stripes
height = int(round(float(rows) * 0.2))
# Compute the extent in Y-direction of the stripes
y_start = 0
y_extent = []
for i in range(5):
if i < 4:
y_end = y_start + height - 1
else:
y_end = rows - 1
y_extent.append((y_start, y_end))
y_start = y_end + 1
# Validation and test stripe: compute the extent in X-direction
x_val = [(0, cols - 1)]
x_test = [(0, cols - 1)]
if crossval_training is False:
if test_stripe == 0:
# Stripe order: | test | val | train | train | train |
y_train = [(y_extent[2][0], y_extent[4][1])]
y_val = [y_extent[1]]
y_test = [y_extent[0]]
x_train = [(0, cols - 1)]
elif test_stripe == 1:
# Stripe order: | train | test | val | train | train |
y_train = [y_extent[0], (y_extent[3][0], y_extent[4][1])]
y_val = [y_extent[2]]
y_test = [y_extent[1]]
x_train = [(0, cols - 1), (0, cols - 1)]
elif test_stripe == 2:
# Stripe order: | train | train | test | val | train |
y_train = [(y_extent[0][0], y_extent[1][1]), y_extent[4]]
y_val = [y_extent[3]]
y_test = [y_extent[2]]
x_train = [(0, cols - 1), (0, cols - 1)]
elif test_stripe == 3:
# Stripe order: | train | train | train | test | val |
y_train = [(y_extent[0][0], y_extent[2][1])]
y_val = [y_extent[4]]
y_test = [y_extent[3]]
x_train = [(0, cols - 1)]
elif test_stripe == 4:
# Stripe order: | val | train | train | train | test |
y_train = [(y_extent[1][0], y_extent[3][1])]
y_val = [y_extent[0]]
y_test = [y_extent[4]]
x_train = [(0, cols - 1)]
test = {'x_extent': x_test, 'y_extent': y_test}
else:
if test_stripe == 0:
# Stripe order: | val | train | train | train | train |
y_train = [(y_extent[1][0], y_extent[4][1])]
y_val = [y_extent[0]]
x_train = [(0, cols - 1)]
elif test_stripe == 1:
# Stripe order: | train | val | train | train | train |
y_train = [y_extent[0], (y_extent[2][0], y_extent[4][1])]
y_val = [y_extent[1]]
x_train = [(0, cols - 1), (0, cols - 1)]
elif test_stripe == 2:
# Stripe order: | train | train | val | train | train |
y_train = [(y_extent[0][0], y_extent[1][1]), (y_extent[3][0], y_extent[4][1])]
y_val = [y_extent[2]]
x_train = [(0, cols - 1), (0, cols - 1)]
elif test_stripe == 3:
# Stripe order: | train | train | train | val | train |
y_train = [(y_extent[0][0], y_extent[2][1]), y_extent[4]]
y_val = [y_extent[3]]
x_train = [(0, cols - 1), (0, cols - 1)]
elif test_stripe == 4:
# Stripe order: | train | train | train | train | val |
y_train = [(y_extent[0][0], y_extent[3][1])]
y_val = [y_extent[4]]
x_train = [(0, cols - 1)]
test = {}
train = {'x_extent': x_train, 'y_extent': y_train}
val = {'x_extent': x_val, 'y_extent': y_val}
return train, val, test
def indices_from_area_defn(area_defn, tile_size):
"""
Returns the location (upper-left image coordinates) of valid patch positions.
:param area_defn: dictionary, defines one or multiple rectangularly-shaped geographic regions from which
DSM patches will be sampled. The dictionary is composed of the following key-value pairs:
x_extent: list of n tuples, where n denotes the number of rectangular regions (stripes).
Each tuple defines the upper-left and lower-right x-coordinate of a rectangular
region (stripe).
y_extent: list of n tuples, where n denotes the number of rectangular regions (stripes).
Each tuple defines the upper-left and lower-right y-coordinate of a rectangular
region (stripe).
                      Assumption: The i-th tuple of x_extent and the i-th tuple of y_extent define a
                      geographically rectangular region (stripe).
    :param tile_size: int, tile size in pixels.
:return: list of (y,x) tuples, upper-left image coordinates of valid patch positions. Note that the
returned patch positions do not exceed the area specified in area_defn.
"""
# Initialize output list
valid_positions = []
# Number of regions specified in area_defn
num_regions = len(area_defn['x_extent'])
for i in range(num_regions):
        # Extent of the i-th region
x = area_defn['x_extent'][i]
y = area_defn['y_extent'][i]
        # Compute valid x-coordinates of the i-th region (empty if the region
        # is narrower than the tile)
        x_start = x[0]
        x_end = x[1] - tile_size + 1
        x_indices = np.arange(x_start, x_end + 1, dtype=int)
        # Compute valid y-coordinates of the i-th region (empty if the region
        # is shorter than the tile)
        y_start = y[0]
        y_end = y[1] - tile_size + 1
        y_indices = np.arange(y_start, y_end + 1, dtype=int)
        for yy in y_indices:
            for xx in x_indices:
                valid_positions.append((yy, xx))
return valid_positions
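# To see what indices_from_area_defn produces, here is the same enumeration
# re-implemented inline on a tiny hand-made area definition (a single region
# spanning x in [0, 4] and y in [10, 13], with 3x3-pixel tiles), so the sketch
# runs without the rest of the module:

```python
import numpy as np

# One rectangular region: x in [0, 4], y in [10, 13]; 3x3 tiles.
area_defn = {'x_extent': [(0, 4)], 'y_extent': [(10, 13)]}
tile_size = 3

valid_positions = []
for i in range(len(area_defn['x_extent'])):
    x0, x1 = area_defn['x_extent'][i]
    y0, y1 = area_defn['y_extent'][i]
    # Upper-left corners whose tile still fits fully inside the region.
    x_indices = np.arange(x0, x1 - tile_size + 2, dtype=int)
    y_indices = np.arange(y0, y1 - tile_size + 2, dtype=int)
    for yy in y_indices:
        for xx in x_indices:
            valid_positions.append((int(yy), int(xx)))

print(valid_positions)
# [(10, 0), (10, 1), (10, 2), (11, 0), (11, 1), (11, 2)]
```

# Each (y, x) pair is the upper-left corner of a 3x3 patch that stays inside
# the region, so a 5x4 region yields 3 x 2 = 6 valid positions.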
# jacdac/water_level/__init__.py (microsoft/jacdac-python, MIT)
# Autogenerated file.
from .client import WaterLevelClient # type: ignore
# Phone/Pots.py (vidhurraj147/pythonexample, Apache-2.0)
def PotsImpl():
    print("I'm PotsImpl Phone")
# python_modules/libraries/dagster-postgres/dagster_postgres_tests/test_run_storage.py
# (mcoleman-sain/dagster, Apache-2.0)
import uuid
import yaml
from dagster.core.definitions.pipeline import ExecutionSelector, PipelineRunsFilter
from dagster.core.events import DagsterEvent, DagsterEventType
from dagster.core.instance import DagsterInstance
from dagster.core.storage.pipeline_run import PipelineRun, PipelineRunStatus
def build_run(
run_id, pipeline_name, mode='default', tags=None, status=PipelineRunStatus.NOT_STARTED
):
return PipelineRun(
pipeline_name=pipeline_name,
run_id=run_id,
environment_dict=None,
mode=mode,
selector=ExecutionSelector(pipeline_name),
step_keys_to_execute=None,
tags=tags,
status=status,
)
def test_add_get_postgres_run_storage(clean_storage):
run_storage = clean_storage
run_id = str(uuid.uuid4())
run_to_add = build_run(pipeline_name='pipeline_name', run_id=run_id)
added = run_storage.add_run(run_to_add)
assert added
fetched_run = run_storage.get_run_by_id(run_id)
assert run_to_add == fetched_run
assert run_storage.has_run(run_id)
assert not run_storage.has_run(str(uuid.uuid4()))
assert run_storage.get_runs() == [run_to_add]
assert run_storage.get_runs(PipelineRunsFilter(pipeline_name='pipeline_name')) == [run_to_add]
assert run_storage.get_runs(PipelineRunsFilter(pipeline_name='nope')) == []
run_storage.wipe()
assert run_storage.get_runs() == []
def test_handle_run_event_pipeline_success_test(clean_storage):
run_storage = clean_storage
run_id = str(uuid.uuid4())
run_to_add = build_run(pipeline_name='pipeline_name', run_id=run_id)
run_storage.add_run(run_to_add)
dagster_pipeline_start_event = DagsterEvent(
message='a message',
event_type_value=DagsterEventType.PIPELINE_START.value,
pipeline_name='pipeline_name',
step_key=None,
solid_handle=None,
step_kind_value=None,
logging_tags=None,
)
run_storage.handle_run_event(run_id, dagster_pipeline_start_event)
assert run_storage.get_run_by_id(run_id).status == PipelineRunStatus.STARTED
run_storage.handle_run_event(
str(uuid.uuid4()), # diff run
DagsterEvent(
message='a message',
event_type_value=DagsterEventType.PIPELINE_SUCCESS.value,
pipeline_name='pipeline_name',
step_key=None,
solid_handle=None,
step_kind_value=None,
logging_tags=None,
),
)
assert run_storage.get_run_by_id(run_id).status == PipelineRunStatus.STARTED
run_storage.handle_run_event(
run_id, # correct run
DagsterEvent(
message='a message',
event_type_value=DagsterEventType.PIPELINE_SUCCESS.value,
pipeline_name='pipeline_name',
step_key=None,
solid_handle=None,
step_kind_value=None,
logging_tags=None,
),
)
assert run_storage.get_run_by_id(run_id).status == PipelineRunStatus.SUCCESS
def test_clear(clean_storage):
storage = clean_storage
run_id = str(uuid.uuid4())
storage.add_run(build_run(run_id=run_id, pipeline_name='some_pipeline'))
assert len(storage.get_runs()) == 1
storage.wipe()
assert list(storage.get_runs()) == []
def test_delete(clean_storage):
storage = clean_storage
run_id = str(uuid.uuid4())
storage.add_run(build_run(run_id=run_id, pipeline_name='some_pipeline'))
assert len(storage.get_runs()) == 1
storage.delete_run(run_id)
assert list(storage.get_runs()) == []
def test_fetch_by_filter(clean_storage):
storage = clean_storage
one = str(uuid.uuid4())
two = str(uuid.uuid4())
three = str(uuid.uuid4())
storage.add_run(
build_run(
run_id=one,
pipeline_name='some_pipeline',
tags={'tag': 'hello', 'tag2': 'world'},
status=PipelineRunStatus.SUCCESS,
)
)
storage.add_run(
build_run(
run_id=two,
pipeline_name='some_pipeline',
tags={'tag': 'hello'},
status=PipelineRunStatus.FAILURE,
),
)
storage.add_run(
build_run(run_id=three, pipeline_name='other_pipeline', status=PipelineRunStatus.SUCCESS)
)
assert len(storage.get_runs()) == 3
some_runs = storage.get_runs(PipelineRunsFilter(run_id=one))
count = storage.get_runs_count(PipelineRunsFilter(run_id=one))
assert len(some_runs) == 1
assert count == 1
assert some_runs[0].run_id == one
some_runs = storage.get_runs(PipelineRunsFilter(pipeline_name='some_pipeline'))
count = storage.get_runs_count(PipelineRunsFilter(pipeline_name='some_pipeline'))
assert len(some_runs) == 2
assert count == 2
assert any(x.run_id == one for x in some_runs)
assert any(x.run_id == two for x in some_runs)
some_runs = storage.get_runs(PipelineRunsFilter(status=PipelineRunStatus.SUCCESS))
count = storage.get_runs_count(PipelineRunsFilter(status=PipelineRunStatus.SUCCESS))
assert len(some_runs) == 2
assert count == 2
assert any(x.run_id == one for x in some_runs)
assert any(x.run_id == three for x in some_runs)
some_runs = storage.get_runs(PipelineRunsFilter(tags={'tag': 'hello'}))
count = storage.get_runs_count(PipelineRunsFilter(tags={'tag': 'hello'}))
assert len(some_runs) == 2
assert count == 2
assert any(x.run_id == one for x in some_runs)
assert any(x.run_id == two for x in some_runs)
some_runs = storage.get_runs(PipelineRunsFilter(tags={'tag': 'hello', 'tag2': 'world'}))
count = storage.get_runs_count(PipelineRunsFilter(tags={'tag': 'hello', 'tag2': 'world'}))
assert len(some_runs) == 1
assert count == 1
assert some_runs[0].run_id == one
some_runs = storage.get_runs(
PipelineRunsFilter(pipeline_name="some_pipeline", tags={'tag': 'hello'})
)
count = storage.get_runs_count(
PipelineRunsFilter(pipeline_name="some_pipeline", tags={'tag': 'hello'})
)
assert len(some_runs) == 2
assert count == 2
assert any(x.run_id == one for x in some_runs)
assert any(x.run_id == two for x in some_runs)
some_runs = storage.get_runs(
PipelineRunsFilter(
pipeline_name="some_pipeline", tags={'tag': 'hello'}, status=PipelineRunStatus.SUCCESS,
)
)
count = storage.get_runs_count(
PipelineRunsFilter(
pipeline_name="some_pipeline", tags={'tag': 'hello'}, status=PipelineRunStatus.SUCCESS,
)
)
assert len(some_runs) == 1
assert count == 1
assert some_runs[0].run_id == one
# All filters
some_runs = storage.get_runs(
PipelineRunsFilter(
run_id=one,
pipeline_name="some_pipeline",
tags={'tag': 'hello'},
status=PipelineRunStatus.SUCCESS,
)
)
count = storage.get_runs_count(
PipelineRunsFilter(
run_id=one,
pipeline_name="some_pipeline",
tags={'tag': 'hello'},
status=PipelineRunStatus.SUCCESS,
)
)
assert len(some_runs) == 1
assert count == 1
assert some_runs[0].run_id == one
some_runs = storage.get_runs(PipelineRunsFilter())
count = storage.get_runs_count(PipelineRunsFilter())
assert len(some_runs) == 3
assert count == 3
def test_fetch_by_pipeline(clean_storage):
storage = clean_storage
one = str(uuid.uuid4())
two = str(uuid.uuid4())
storage.add_run(build_run(run_id=one, pipeline_name='some_pipeline'))
storage.add_run(build_run(run_id=two, pipeline_name='some_other_pipeline'))
assert len(storage.get_runs()) == 2
some_runs = storage.get_runs(PipelineRunsFilter(pipeline_name='some_pipeline'))
assert len(some_runs) == 1
assert some_runs[0].run_id == one
def test_fetch_count_by_tag(clean_storage):
storage = clean_storage
one = str(uuid.uuid4())
two = str(uuid.uuid4())
three = str(uuid.uuid4())
storage.add_run(
build_run(
run_id=one, pipeline_name='some_pipeline', tags={'mytag': 'hello', 'mytag2': 'world'}
)
)
storage.add_run(
build_run(
run_id=two, pipeline_name='some_pipeline', tags={'mytag': 'goodbye', 'mytag2': 'world'}
)
)
storage.add_run(build_run(run_id=three, pipeline_name='some_pipeline'))
assert len(storage.get_runs()) == 3
run_count = storage.get_runs_count(
PipelineRunsFilter(tags={'mytag': 'hello', 'mytag2': 'world'})
)
assert run_count == 1
run_count = storage.get_runs_count(PipelineRunsFilter(tags={'mytag2': 'world'}))
assert run_count == 2
run_count = storage.get_runs_count(PipelineRunsFilter())
assert run_count == 3
def test_fetch_by_tags(clean_storage):
storage = clean_storage
one = str(uuid.uuid4())
two = str(uuid.uuid4())
three = str(uuid.uuid4())
storage.add_run(
build_run(
run_id=one, pipeline_name='some_pipeline', tags={'mytag': 'hello', 'mytag2': 'world'}
)
)
storage.add_run(
build_run(
run_id=two, pipeline_name='some_pipeline', tags={'mytag': 'goodbye', 'mytag2': 'world'}
)
)
storage.add_run(build_run(run_id=three, pipeline_name='some_pipeline'))
assert len(storage.get_runs()) == 3
some_runs = storage.get_runs(PipelineRunsFilter(tags={'mytag': 'hello', 'mytag2': 'world'}))
assert len(some_runs) == 1
assert some_runs[0].run_id == one
some_runs = storage.get_runs(PipelineRunsFilter(tags={'mytag2': 'world'}))
assert len(some_runs) == 2
assert any(x.run_id == one for x in some_runs)
assert any(x.run_id == two for x in some_runs)
some_runs = storage.get_runs(PipelineRunsFilter(tags={}))
assert len(some_runs) == 3
def test_slice(clean_storage):
storage = clean_storage
one, two, three = sorted([str(uuid.uuid4()), str(uuid.uuid4()), str(uuid.uuid4())])
storage.add_run(build_run(run_id=one, pipeline_name='some_pipeline', tags={'mytag': 'hello'}))
storage.add_run(build_run(run_id=two, pipeline_name='some_pipeline', tags={'mytag': 'hello'}))
storage.add_run(build_run(run_id=three, pipeline_name='some_pipeline', tags={'mytag': 'hello'}))
all_runs = storage.get_runs()
assert len(all_runs) == 3
sliced_runs = storage.get_runs(cursor=three, limit=1)
assert len(sliced_runs) == 1
assert sliced_runs[0].run_id == two
all_runs = storage.get_runs(PipelineRunsFilter(pipeline_name='some_pipeline'))
assert len(all_runs) == 3
sliced_runs = storage.get_runs(
PipelineRunsFilter(pipeline_name='some_pipeline'), cursor=three, limit=1
)
assert len(sliced_runs) == 1
assert sliced_runs[0].run_id == two
all_runs = storage.get_runs(PipelineRunsFilter(tags={'mytag': 'hello'}))
assert len(all_runs) == 3
sliced_runs = storage.get_runs(
PipelineRunsFilter(tags={'mytag': 'hello'}), cursor=three, limit=1
)
assert len(sliced_runs) == 1
assert sliced_runs[0].run_id == two
def test_fetch_by_status(clean_storage):
storage = clean_storage
one = str(uuid.uuid4())
two = str(uuid.uuid4())
three = str(uuid.uuid4())
four = str(uuid.uuid4())
storage.add_run(
build_run(run_id=one, pipeline_name='some_pipeline', status=PipelineRunStatus.NOT_STARTED)
)
storage.add_run(
build_run(run_id=two, pipeline_name='some_pipeline', status=PipelineRunStatus.STARTED)
)
storage.add_run(
build_run(run_id=three, pipeline_name='some_pipeline', status=PipelineRunStatus.STARTED)
)
storage.add_run(
build_run(run_id=four, pipeline_name='some_pipeline', status=PipelineRunStatus.FAILURE)
)
assert {
run.run_id
for run in storage.get_runs(PipelineRunsFilter(status=PipelineRunStatus.NOT_STARTED))
} == {one}
assert {
run.run_id for run in storage.get_runs(PipelineRunsFilter(status=PipelineRunStatus.STARTED))
    } == {two, three}
assert {
run.run_id for run in storage.get_runs(PipelineRunsFilter(status=PipelineRunStatus.FAILURE))
} == {four}
assert {
run.run_id for run in storage.get_runs(PipelineRunsFilter(status=PipelineRunStatus.SUCCESS))
} == set()
def test_fetch_by_status_cursored(clean_storage):
storage = clean_storage
one = str(uuid.uuid4())
two = str(uuid.uuid4())
three = str(uuid.uuid4())
four = str(uuid.uuid4())
storage.add_run(
build_run(run_id=one, pipeline_name='some_pipeline', status=PipelineRunStatus.STARTED)
)
storage.add_run(
build_run(run_id=two, pipeline_name='some_pipeline', status=PipelineRunStatus.STARTED)
)
storage.add_run(
build_run(run_id=three, pipeline_name='some_pipeline', status=PipelineRunStatus.NOT_STARTED)
)
storage.add_run(
build_run(run_id=four, pipeline_name='some_pipeline', status=PipelineRunStatus.STARTED)
)
cursor_four_runs = storage.get_runs(
PipelineRunsFilter(status=PipelineRunStatus.STARTED), cursor=four
)
assert len(cursor_four_runs) == 2
assert {run.run_id for run in cursor_four_runs} == {one, two}
cursor_two_runs = storage.get_runs(
PipelineRunsFilter(status=PipelineRunStatus.STARTED), cursor=two
)
assert len(cursor_two_runs) == 1
assert {run.run_id for run in cursor_two_runs} == {one}
cursor_one_runs = storage.get_runs(
PipelineRunsFilter(status=PipelineRunStatus.STARTED), cursor=one
)
assert not cursor_one_runs
cursor_four_limit_one = storage.get_runs(
filters=PipelineRunsFilter(status=PipelineRunStatus.STARTED), cursor=four, limit=1
)
assert len(cursor_four_limit_one) == 1
assert cursor_four_limit_one[0].run_id == two
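# Filters aside, the cursor/limit behaviour these assertions pin down can be
# summarised independently of dagster: get_runs returns the runs created
# *before* the cursor run, newest first, optionally capped by limit. A hedged,
# in-memory sketch (names are illustrative, not the dagster API):

```python
def get_runs(runs, cursor=None, limit=None):
    """Model of cursor pagination: 'runs' is ordered oldest -> newest;
    results come back newest -> oldest, starting just after the cursor."""
    ordered = list(reversed(runs))
    if cursor is not None:
        ordered = ordered[ordered.index(cursor) + 1:]
    if limit is not None:
        ordered = ordered[:limit]
    return ordered

runs = ['one', 'two', 'three', 'four']
print(get_runs(runs, cursor='four'))           # ['three', 'two', 'one']
print(get_runs(runs, cursor='four', limit=1))  # ['three']
print(get_runs(runs, cursor='one'))            # []
```

# In the real storage a status filter is applied first, which is why
# cursor=four with STARTED above skips the NOT_STARTED run 'three'.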
def test_load_from_config(hostname):
url_cfg = '''
run_storage:
module: dagster_postgres.run_storage
class: PostgresRunStorage
config:
postgres_url: postgresql://test:test@{hostname}:5432/test
'''.format(
hostname=hostname
)
explicit_cfg = '''
run_storage:
module: dagster_postgres.run_storage
class: PostgresRunStorage
config:
postgres_db:
username: test
password: test
hostname: {hostname}
db_name: test
'''.format(
hostname=hostname
)
# pylint: disable=protected-access
from_url = DagsterInstance.local_temp(overrides=yaml.safe_load(url_cfg))._run_storage
from_explicit = DagsterInstance.local_temp(overrides=yaml.safe_load(explicit_cfg))._run_storage
assert from_url.postgres_url == from_explicit.postgres_url
# src/z3c/objpath/__init__.py (zopefoundation/z3c.objpath, ZPL-2.1)
from z3c.objpath import path as _path  # noqa: F401
from z3c.objpath.path import path # noqa: F401
from z3c.objpath.path import resolve # noqa: F401
# gadrionwrap/__init__.py (gadr1on/pilwrapper, MIT)
from gadrionwrap.wrapper import findBestFontSize
# gem/tests/test_auth.py (praekelt/molo-gem, BSD-2-Clause)
from django.core import mail
from django.urls import reverse
from django.conf import settings
from django.conf.urls import url, include
from django.contrib.auth import get_user_model
from django.test.utils import override_settings
from django.test import TestCase, Client, RequestFactory
from django.contrib.auth.models import Group, Permission
from allauth.socialaccount.models import SocialLogin, SocialAccount
from wagtail.core import urls as wagtail_urls
from wagtail.admin import urls as wagtailadmin_urls
from gem.models import Invite
from gem.tests.base import GemTestCaseMixin
from gem.adapter import StaffUserSocialAdapter, StaffUserAdapter
urlpatterns = [
url(r'^admin/', include(wagtailadmin_urls)),
url(r'', include(wagtail_urls)),
]
class TestAllAuth(GemTestCaseMixin, TestCase):
def setUp(self):
self.user = get_user_model().objects.create_superuser(
username='superuser', email='superuser@email.com', password='pass')
self.main = self.mk_main(
title='main1', slug='main1',
path='00010002', url_path='/main1/'
)
self.site = self.main.get_site()
self.user.profile.admin_sites.add(self.site)
self.client = Client(SERVER_NAME=self.site.hostname)
self.factory = RequestFactory()
@override_settings(ENABLE_ALL_AUTH=True)
def test_admin_login_view(self):
res = self.client.get(reverse('wagtailadmin_login'))
self.assertEqual(res.status_code, 200)
# TODO: find a better way to handle conditional URL patterns,
# because Django only loads them once at server instantiation
# self.assertTemplateUsed(res, 'wagtailadmin/social_login.html')
@override_settings(ENABLE_ALL_AUTH=True)
def test_admin_views_authed_user(self):
self.client.force_login(self.user)
res = self.client.get(reverse('wagtailadmin_login'))
self.assertEqual(res.status_code, 302)
self.assertEqual(settings.ENABLE_ALL_AUTH, True)
self.assertEqual(res.url, '/admin/')
res = self.client.get(res.url)
self.assertEqual(res.status_code, 200)
@override_settings(ENABLE_ALL_AUTH=True)
def test_invite_create_view(self):
req = self.factory.get("/")
req.user = self.user
req._wagtail_site = self.main.get_site()
self.client.force_login(self.user)
url = '/admin/gem/invite/create/'
data = {
'email': 'testinvite@test.com'
}
res = self.client.post(url, data=data, request=req)
subject = '{}: Admin site invitation'.format(self.site)
self.assertEqual(res.status_code, 302)
self.assertEqual(len(mail.outbox), 1)
self.assertEqual(mail.outbox[0].subject, subject)
self.assertEqual(mail.outbox[0].to, [data['email']])
self.assertEqual(mail.outbox[0].from_email, 'no-reply@gehosting.org')
self.assertTrue(
Invite.objects.filter(user=self.user).exists())
@override_settings(ENABLE_ALL_AUTH=True)
def test_invite_edit_view(self):
data = {
'email': 'testinvite@test.com'
}
req = self.factory.get("/")
req.user = self.user
req._wagtail_site = self.main.get_site()
invite = Invite.objects.create(
email=data['email'], user=self.user, site=self.site)
self.client.force_login(self.user)
url = '/admin/gem/invite/edit/{}/'.format(invite.pk)
res = self.client.post(url, request=req)
self.assertEqual(res.status_code, 200)
self.assertContains(res, data['email'])
res = self.client.post(url, data=data, request=req)
self.assertEqual(res.status_code, 302)
# Note: an email is sent on creation of the Invite object by a signal;
# verify that a duplicate email was not sent on update
self.assertEqual(len(mail.outbox), 1)
def test_staff_social_adaptor(self):
"""
Test a front-end user getting an invite to admin site
"""
request = self.factory.get("/")
request._wagtail_site = self.main.get_site()
adaptor = StaffUserSocialAdapter(request=request)
user = get_user_model().objects.create_user(
username='testuser',
email='testuser@email.com',
password='pass'
)
sociallogin = SocialLogin(user=user)
group = Group.objects.filter().first()
perm = Permission.objects.filter().first()
self.assertFalse(adaptor.is_open_for_signup(request, sociallogin))
invite = Invite.objects.create(
email=user.email, user=self.user, site=self.site)
invite.groups.add(group)
invite.permissions.add(perm)
self.assertFalse(user.groups.all().exists())
self.assertFalse(user.user_permissions.all().exists())
adaptor.add_perms(user)
invite.refresh_from_db()
self.assertTrue(invite.is_accepted)
self.assertTrue(user.groups.all().exists())
self.assertTrue(user.user_permissions.all().exists())
user.delete()
invite.delete()
def test_staff_social_adaptor_new_user(self):
"""
Test a new user getting an invite to admin site
"""
request = self.factory.get("/")
request._wagtail_site = self.main.get_site()
adaptor = StaffUserSocialAdapter(request=request)
user = get_user_model()(
username='testuser',
email='testuser@email.com',
password='pass'
)
sociallogin = SocialLogin(user=user)
group = Group.objects.filter().first()
perm = Permission.objects.filter().first()
self.assertFalse(user.pk)
self.assertFalse(adaptor.is_open_for_signup(request, sociallogin))
invite = Invite.objects.create(
email=user.email, user=self.user, site=self.site)
invite.groups.add(group)
invite.permissions.add(perm)
adaptor.add_perms(user)
invite.refresh_from_db()
self.assertTrue(invite.is_accepted)
self.assertTrue(user.groups.all().exists())
self.assertTrue(user.user_permissions.all().exists())
user.delete()
invite.delete()
def test_staff_social_adaptor_staff(self):
"""
Test a regular staff login
"""
request = self.factory.get("/")
request._wagtail_site = self.main.get_site()
adaptor = StaffUserSocialAdapter(request=request)
user = get_user_model().objects.create_user(
username='testuser',
email='testuser@email.com',
password='pass',
is_staff=True,
)
sociallogin = SocialLogin(user=user)
group = Group.objects.filter().first()
perm = Permission.objects.filter().first()
user.groups.add(group)
user.user_permissions.add(perm)
self.assertFalse(adaptor.is_open_for_signup(request, sociallogin))
adaptor.add_perms(user)
self.assertTrue(user.groups.all().exists())
self.assertTrue(user.user_permissions.all().exists())
user.delete()
def test_staff_social_adaptor_superuser(self):
"""
Test a superuser login
"""
request = self.factory.get("/")
request._wagtail_site = self.main.get_site()
adaptor = StaffUserSocialAdapter(request=request)
user = get_user_model().objects.create_user(
username='testuser',
email='testuser@email.com',
is_superuser=True,
password='pass'
)
sociallogin = SocialLogin(user=user)
self.assertFalse(adaptor.is_open_for_signup(request, sociallogin))
self.assertFalse(user.groups.all().exists())
self.assertFalse(user.user_permissions.all().exists())
adaptor.add_perms(user)
self.assertFalse(user.groups.all().exists())
self.assertFalse(user.user_permissions.all().exists())
user.delete()
def test_staff_user_adapter_new_user(self):
adaptor = StaffUserAdapter()
user = get_user_model()(
username='testuser',
email='testuser@email.com',
password='pass'
)
request = RequestFactory().post(
data={
'username': user.username,
'password': user.password
}, path=reverse('wagtailadmin_login'))
request._wagtail_site = self.main.get_site()
self.assertFalse(adaptor.is_open_for_signup(request, None))
def test_staff_user_adapter_front_end_user(self):
adaptor = StaffUserAdapter()
user = get_user_model().objects.create(
username='testuser',
email='testuser@email.com',
password='pass'
)
request = RequestFactory().post(
data={
'username': user.username,
'password': user.password
}, path=reverse('wagtailadmin_login'))
request._wagtail_site = self.main.get_site()
self.assertFalse(adaptor.is_open_for_signup(request, None))
def test_staff_user_adapter_staff_user(self):
adaptor = StaffUserAdapter()
user = get_user_model().objects.create(
username='testuser',
email='testuser@email.com',
is_staff=True,
password='pass'
)
request = RequestFactory().post(
data={
'username': user.username,
'password': user.password
}, path=reverse('wagtailadmin_login'))
request._wagtail_site = self.main.get_site()
self.assertFalse(adaptor.is_open_for_signup(request, None))
def test_staff_user_adapter_staff_user_perms(self):
adaptor = StaffUserAdapter()
group = Group.objects.filter().first()
perm = Permission.objects.filter().first()
user = get_user_model().objects.create(
username='testuser',
email='testuser@email.com',
is_staff=True,
password='pass'
)
user.groups.add(group)
user.user_permissions.add(perm)
request = RequestFactory().post(
data={
'username': user.username,
'password': user.password
}, path=reverse('wagtailadmin_login'))
request._wagtail_site = self.main.get_site()
self.assertFalse(adaptor.is_open_for_signup(request, None))
def test_user_delete(self):
user = get_user_model().objects.create(
username='testuser',
email='testuser@email.com',
is_staff=True,
password='pass'
)
SocialAccount.objects.create(
user=user, provider='google', uid='1')
user.delete()
self.assertFalse(
SocialAccount.objects.filter(user=user)
)
class TestAllAuthDisabled(GemTestCaseMixin, TestCase):
def setUp(self):
self.user = get_user_model().objects.create_superuser(
username='superuser', email='superuser@email.com', password='pass')
self.main = self.mk_main(
title='main1', slug='main1',
path='00010002', url_path='/main1/'
)
self.site = self.main.get_site()
self.user.profile.admin_sites.add(self.site)
self.factory = RequestFactory()
@override_settings(ENABLE_ALL_AUTH=False)
def test_login_all_auth_disabled(self):
res = self.client.get(reverse('wagtailadmin_login'))
self.assertEqual(res.status_code, 200)
self.assertNotContains(res, '<span class="fa fa-google"></span>Google')
| 35.815951 | 79 | 0.63001 | 1,310 | 11,676 | 5.452672 | 0.130534 | 0.029119 | 0.02016 | 0.0252 | 0.776004 | 0.739605 | 0.717206 | 0.702086 | 0.678286 | 0.615008 | 0 | 0.00561 | 0.25197 | 11,676 | 325 | 80 | 35.926154 | 0.812228 | 0.038284 | 0 | 0.723077 | 0 | 0 | 0.072045 | 0.008714 | 0 | 0 | 0 | 0.003077 | 0.157692 | 1 | 0.061538 | false | 0.057692 | 0.053846 | 0 | 0.123077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
1ff3c80df93b2a7473307a99cc39ead522d1273f | 43 | py | Python | src/version_1/crystals/__init__.py | eragasa/pypospack | 21cdecaf3b05c87acc532d992be2c04d85bfbc22 | [
"MIT"
] | 4 | 2018-01-18T19:59:56.000Z | 2020-08-25T11:56:52.000Z | src/version_1/crystals/__init__.py | eragasa/pypospack | 21cdecaf3b05c87acc532d992be2c04d85bfbc22 | [
"MIT"
] | 1 | 2018-04-22T23:02:13.000Z | 2018-04-22T23:02:13.000Z | src/version_1/crystals/__init__.py | eragasa/pypospack | 21cdecaf3b05c87acc532d992be2c04d85bfbc22 | [
"MIT"
] | 1 | 2019-09-14T07:04:42.000Z | 2019-09-14T07:04:42.000Z |
from pypospack2.crystals.atom import atom
| 14.333333 | 41 | 0.837209 | 6 | 43 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.116279 | 43 | 2 | 42 | 21.5 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ffa2421b413dc766aa8250a36c8fe6d21aa5eab | 36 | py | Python | keycloak_admin_aio/_resources/client_scopes/by_id/__init__.py | V-Mann-Nick/keycloak-admin-aio | 83ac1af910e492a5864eb369aacfc0512e5c8c45 | [
"Apache-2.0"
] | 12 | 2021-11-08T18:03:09.000Z | 2022-03-17T16:34:06.000Z | keycloak_admin_aio/_resources/client_scopes/by_id/__init__.py | V-Mann-Nick/keycloak-admin-aio | 83ac1af910e492a5864eb369aacfc0512e5c8c45 | [
"Apache-2.0"
] | null | null | null | keycloak_admin_aio/_resources/client_scopes/by_id/__init__.py | V-Mann-Nick/keycloak-admin-aio | 83ac1af910e492a5864eb369aacfc0512e5c8c45 | [
"Apache-2.0"
] | 1 | 2021-11-14T13:55:30.000Z | 2021-11-14T13:55:30.000Z | from .by_id import ClientScopesById
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
95306c3d8e61f43b252c8d0e2cb991eaaad6f24c | 48 | py | Python | scheduler/__init__.py | nielsrolf/scheduler | d7e03529cffd25c3db975f618a83861e25aff4a6 | [
"Apache-2.0"
] | null | null | null | scheduler/__init__.py | nielsrolf/scheduler | d7e03529cffd25c3db975f618a83861e25aff4a6 | [
"Apache-2.0"
] | 1 | 2020-04-17T12:53:54.000Z | 2020-04-17T12:53:54.000Z | scheduler/__init__.py | nielsrolf/scheduler | d7e03529cffd25c3db975f618a83861e25aff4a6 | [
"Apache-2.0"
] | null | null | null | from scheduler.schedule import Schedule # noqa
| 24 | 47 | 0.8125 | 6 | 48 | 6.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 1 | 48 | 48 | 0.95122 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1f226911a9a50dfb3e02f4bb0f2f980fbc4dd049 | 120 | py | Python | makefile/practice/code_gen.py | jaebaek/Linker101 | c6fbbbde0e280fd0b7d0c3247ad499f8c329cb29 | [
"MIT"
] | 5 | 2018-01-24T13:01:22.000Z | 2020-11-19T18:29:10.000Z | makefile/practice/code_gen.py | jaebaek/Linker101 | c6fbbbde0e280fd0b7d0c3247ad499f8c329cb29 | [
"MIT"
] | null | null | null | makefile/practice/code_gen.py | jaebaek/Linker101 | c6fbbbde0e280fd0b7d0c3247ad499f8c329cb29 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
print "print \"I am code generated by " + sys.argv[0] + "\""
| 17.142857 | 60 | 0.583333 | 19 | 120 | 3.684211 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020619 | 0.191667 | 120 | 6 | 61 | 20 | 0.701031 | 0.35 | 0 | 0 | 1 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
1f4539986db245199b0716420813c0d9633bc2d0 | 182 | py | Python | thai_parser.py | trangtops/typomorise | afce9d922d8f3b18e170c8d9ce1f887d7898a8cf | [
"MIT"
] | null | null | null | thai_parser.py | trangtops/typomorise | afce9d922d8f3b18e170c8d9ce1f887d7898a8cf | [
"MIT"
] | null | null | null | thai_parser.py | trangtops/typomorise | afce9d922d8f3b18e170c8d9ce1f887d7898a8cf | [
"MIT"
] | null | null | null | import json
with open("eng_to_thai.json", "r") as f:
eng_to_thai = json.load(f)
def to_thai(eng_char):
# global eng_to_thai
return eng_to_thai.get(eng_char, eng_char)
| 18.2 | 46 | 0.703297 | 35 | 182 | 3.314286 | 0.457143 | 0.258621 | 0.310345 | 0.224138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181319 | 182 | 9 | 47 | 20.222222 | 0.778523 | 0.098901 | 0 | 0 | 0 | 0 | 0.104938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1f7e081d9fcb9c31452705f0361ff9d67c63f07c | 73 | py | Python | lang/py/cookbook/v2/source/cb2_4_6_exm_1.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/cookbook/v2/source/cb2_4_6_exm_1.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | lang/py/cookbook/v2/source/cb2_4_6_exm_1.py | ch1huizong/learning | 632267634a9fd84a5f5116de09ff1e2681a6cc85 | [
"MIT"
] | null | null | null | for x in flatten([1, 2, [3, [], 4, [5, 6], 7, [8,], ], 9]):
print x,
| 24.333333 | 59 | 0.383562 | 15 | 73 | 1.866667 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169811 | 0.273973 | 73 | 2 | 60 | 36.5 | 0.358491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
2f1393cb1520c83c7db9b9dbbc2f0f30470f06fd | 35 | py | Python | relativity/general/__init__.py | tdsymonds/relativity | 89314f4a8b7003ae8ee3718ff5fc518c5bdb2973 | [
"MIT"
] | null | null | null | relativity/general/__init__.py | tdsymonds/relativity | 89314f4a8b7003ae8ee3718ff5fc518c5bdb2973 | [
"MIT"
] | null | null | null | relativity/general/__init__.py | tdsymonds/relativity | 89314f4a8b7003ae8ee3718ff5fc518c5bdb2973 | [
"MIT"
] | null | null | null | from .general_relativity import *
| 17.5 | 34 | 0.8 | 4 | 35 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 1 | 35 | 35 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2f7683295c5634d692d2a6cce955e1809ab9fc89 | 12,296 | py | Python | tests/endtoend/test_ip_restrictions.py | satroutr/poppy | 27417f86854d9e0a04726acc263ef0a2ce9f8f6e | [
"Apache-2.0"
] | 3 | 2017-07-05T20:09:59.000Z | 2018-11-27T22:02:57.000Z | tests/endtoend/test_ip_restrictions.py | satroutr/poppy | 27417f86854d9e0a04726acc263ef0a2ce9f8f6e | [
"Apache-2.0"
] | 24 | 2017-04-18T15:14:04.000Z | 2019-03-20T19:09:07.000Z | tests/endtoend/test_ip_restrictions.py | satroutr/poppy | 27417f86854d9e0a04726acc263ef0a2ce9f8f6e | [
"Apache-2.0"
] | 8 | 2017-04-03T13:24:27.000Z | 2021-11-08T20:28:10.000Z | # Copyright (c) 2015 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
import requests
from tests.endtoend import base
from tests.endtoend.utils import config
class TestIpRestrictions(base.TestBase):
@classmethod
def setUpClass(cls):
super(TestIpRestrictions, cls).setUpClass()
cls.test_config = config.TestConfig()
cls.check_preconditions()
@classmethod
def check_preconditions(cls):
"""Ensure our environment meets our needs to ensure a valid test."""
origin = cls.http_client.get("http://" + cls.default_origin)
assert origin.status_code == 200
def setUp(self):
super(TestIpRestrictions, self).setUp()
self.service_name = base.random_string('E2E-IP-Restriction')
self.cname_rec = []
self.service_location = ''
def get_ipv4_address(self):
return requests.get('https://api.ipify.org').text
def get_ipv6_address(self):
ifconfig_eth0 = subprocess.Popen(
['ifconfig', 'eth0'], stdout=subprocess.PIPE)
ifconfig_eth0_global_scope = subprocess.Popen(
['grep', 'Scope:Global'],
stdin=ifconfig_eth0.stdout,
stdout=subprocess.PIPE)
ifconfig_eth0_global_scope = ifconfig_eth0_global_scope.stdout.read()
if ifconfig_eth0_global_scope == '':
# no global-scope address found; fall back to a placeholder IPv6 address
ipv6 = 'FE80:0000:0000:0000:0202:B3FF:FE1E:8329'
else:
ipv6_substring = ifconfig_eth0_global_scope.split(
'inet6 addr: ')[1]
ipv6 = ipv6_substring.split('/64 Scope:Global\n')[0]
return ipv6
def test_ip_blacklist(self):
test_domain = "{0}.{1}".format(
base.random_string('test-blacklist-ip'),
self.dns_config.test_domain)
domains = [{'domain': test_domain}]
origins = [{
"origin": self.default_origin,
"port": 80,
"ssl": False,
"rules": [{
"name": "default",
"request_url": "/*",
}],
}]
caching = [
{"name": "default",
"ttl": 3600,
"rules": [{"name": "default", "request_url": "/*"}]}]
test_system_ipv4 = self.get_ipv4_address()
test_system_ipv6 = self.get_ipv6_address()
restrictions = [
{"name": "test_ip_blacklist",
"access": "blacklist",
"rules": [
{"name": "blacklist",
"client_ip": test_system_ipv4,
"request_url": "/*"},
{"name": "blacklist",
"client_ip": test_system_ipv6,
"request_url": "/*"}]}]
resp = self.setup_service(
service_name=self.service_name,
domain_list=domains,
origin_list=origins,
caching_list=caching,
restrictions_list=restrictions,
flavor_id=self.poppy_config.flavor)
self.service_location = resp.headers['location']
resp = self.poppy_client.get_service(location=self.service_location)
links = resp.json()['links']
access_url = [link['href'] for link in links if
link['rel'] == 'access_url']
rec = self.setup_cname(test_domain, access_url[0])
if rec:
self.cname_rec.append(rec[0])
# Verify the blacklisted IP cannot fetch CDN content
cdn_url = 'http://' + test_domain
resp = self.http_client.get(url=cdn_url)
self.assertEqual(resp.status_code, 403)
self.assertIn('Access Denied', resp.content)
# Verify WPT can fetch CDN content
wpt_result = self.run_webpagetest(url=cdn_url)
test_region = wpt_result.keys()[0]
wpt_response_text = \
wpt_result[
test_region]['data']['runs']['1']['firstView']['requests'][
0]['headers']['response'][0]
self.assertIn(
'HTTP/1.1 200', wpt_response_text)
def test_ip_cidr_blacklist(self):
test_domain = "{0}.{1}".format(
base.random_string('test-blacklist-ip'),
self.dns_config.test_domain)
domains = [{'domain': test_domain}]
origins = [{
"origin": self.default_origin,
"port": 80,
"ssl": False,
"rules": [{
"name": "default",
"request_url": "/*",
}],
}]
caching = [
{"name": "default",
"ttl": 3600,
"rules": [{"name": "default", "request_url": "/*"}]}]
test_system_ipv4_cidr = self.get_ipv4_address() + '/25'
test_system_ipv6_cidr = self.get_ipv6_address() + '/100'
restrictions = [
{"name": "test_ip_blacklist",
"access": "blacklist",
"rules": [
{"name": "blacklist",
"client_ip": test_system_ipv4_cidr,
"request_url": "/*"},
{"name": "blacklist",
"client_ip": test_system_ipv6_cidr,
"request_url": "/*"}]}]
resp = self.setup_service(
service_name=self.service_name,
domain_list=domains,
origin_list=origins,
caching_list=caching,
restrictions_list=restrictions,
flavor_id=self.poppy_config.flavor)
self.service_location = resp.headers['location']
resp = self.poppy_client.get_service(location=self.service_location)
links = resp.json()['links']
access_url = [link['href'] for link in links if
link['rel'] == 'access_url']
rec = self.setup_cname(test_domain, access_url[0])
if rec:
self.cname_rec.append(rec[0])
# Verify the blacklisted IP range cannot fetch CDN content
cdn_url = 'http://' + test_domain
resp = self.http_client.get(url=cdn_url)
self.assertEqual(resp.status_code, 403)
self.assertIn('Access Denied', resp.content)
# Verify WPT can fetch CDN content:
# WPT accesses from a different country, which will not fall within
# the blacklisted IP CIDR
wpt_result = self.run_webpagetest(url=cdn_url)
test_region = wpt_result.keys()[0]
wpt_response_text = \
wpt_result[
test_region]['data']['runs']['1']['firstView']['requests'][
0]['headers']['response'][0]
self.assertIn(
'HTTP/1.1 200', wpt_response_text)
def test_ip_whitelist(self):
test_domain = "{0}.{1}".format(
base.random_string('test-whitelist-ip'),
self.dns_config.test_domain)
domains = [{'domain': test_domain}]
origins = [{
"origin": self.default_origin,
"port": 80,
"ssl": False,
"rules": [{
"name": "default",
"request_url": "/*",
}],
}]
caching = [
{"name": "default",
"ttl": 3600,
"rules": [{"name": "default", "request_url": "/*"}]}]
test_system_ipv4 = self.get_ipv4_address()
test_system_ipv6 = self.get_ipv6_address()
restrictions = [
{"name": "test_ip_whitelist",
"access": "whitelist",
"rules": [
{"name": "whitelist",
"client_ip": test_system_ipv4,
"request_url": "/*"},
{"name": "whitelist",
"client_ip": test_system_ipv6,
"request_url": "/*"}]}]
resp = self.setup_service(
service_name=self.service_name,
domain_list=domains,
origin_list=origins,
caching_list=caching,
restrictions_list=restrictions,
flavor_id=self.poppy_config.flavor)
self.service_location = resp.headers['location']
resp = self.poppy_client.get_service(location=self.service_location)
links = resp.json()['links']
access_url = [link['href'] for link in links if
link['rel'] == 'access_url']
rec = self.setup_cname(test_domain, access_url[0])
if rec:
self.cname_rec.append(rec[0])
# Verify the whitelisted IP can fetch CDN content
cdn_url = 'http://' + test_domain
resp = self.http_client.get(url=cdn_url)
self.assertEqual(resp.status_code, 200)
self.assertIn('Test Flask Site', resp.content)
# Verify WPT cannot fetch CDN content
wpt_result = self.run_webpagetest(url=cdn_url)
test_region = wpt_result.keys()[0]
wpt_response_text = \
wpt_result[
test_region]['data']['runs']['1']['firstView']['requests'][
0]['headers']['response'][0]
self.assertIn(
'HTTP/1.1 403 Forbidden', wpt_response_text)
def test_ip_cidr_whitelist(self):
test_domain = "{0}.{1}".format(
base.random_string('test-whitelist-ip'),
self.dns_config.test_domain)
domains = [{'domain': test_domain}]
origins = [{
"origin": self.default_origin,
"port": 80,
"ssl": False,
"rules": [{
"name": "default",
"request_url": "/*",
}],
}]
caching = [
{"name": "default",
"ttl": 3600,
"rules": [{"name": "default", "request_url": "/*"}]}]
test_system_ipv4_cidr = self.get_ipv4_address() + '/15'
test_system_ipv6_cidr = self.get_ipv6_address() + '/42'
restrictions = [
{"name": "test_ip_whitelist",
"access": "whitelist",
"rules": [
{"name": "whitelist",
"client_ip": test_system_ipv4_cidr,
"request_url": "/*"},
{"name": "whitelist",
"client_ip": test_system_ipv6_cidr,
"request_url": "/*"}]}]
resp = self.setup_service(
service_name=self.service_name,
domain_list=domains,
origin_list=origins,
caching_list=caching,
restrictions_list=restrictions,
flavor_id=self.poppy_config.flavor)
self.service_location = resp.headers['location']
resp = self.poppy_client.get_service(location=self.service_location)
links = resp.json()['links']
access_url = [link['href'] for link in links if
link['rel'] == 'access_url']
rec = self.setup_cname(test_domain, access_url[0])
if rec:
self.cname_rec.append(rec[0])
# Verify the whitelisted IP range can fetch CDN content
cdn_url = 'http://' + test_domain
resp = self.http_client.get(url=cdn_url)
self.assertEqual(resp.status_code, 200)
self.assertIn('Test Flask Site', resp.content)
# Verify WPT cannot fetch CDN content:
# WPT accesses from a different country, which will not fall within
# the whitelisted IP CIDR.
wpt_result = self.run_webpagetest(url=cdn_url)
test_region = wpt_result.keys()[0]
wpt_response_text = \
wpt_result[
test_region]['data']['runs']['1']['firstView']['requests'][
0]['headers']['response'][0]
self.assertIn(
'HTTP/1.1 403 Forbidden', wpt_response_text)
def tearDown(self):
self.poppy_client.delete_service(location=self.service_location)
for record in self.cname_rec:
self.dns_client.delete_record(record)
super(TestIpRestrictions, self).tearDown()
| 36.164706 | 77 | 0.555872 | 1,321 | 12,296 | 4.946253 | 0.165027 | 0.030609 | 0.029079 | 0.02816 | 0.768289 | 0.763085 | 0.763085 | 0.747781 | 0.736762 | 0.721457 | 0 | 0.021196 | 0.317014 | 12,296 | 339 | 78 | 36.271386 | 0.756847 | 0.094014 | 0 | 0.825279 | 0 | 0 | 0.134678 | 0.003511 | 0 | 0 | 0 | 0 | 0.048327 | 1 | 0.037175 | false | 0 | 0.01487 | 0.003717 | 0.063197 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c801ae076ec837db4add91d6432d00c543887cba | 204 | py | Python | tests/test_util.py | hsharrison/pyphase | adb3de4cb540553851c06b5d137a3a9c18cdf240 | [
"MIT"
] | 1 | 2020-03-22T10:58:47.000Z | 2020-03-22T10:58:47.000Z | tests/test_util.py | hsharrison/pyphase | adb3de4cb540553851c06b5d137a3a9c18cdf240 | [
"MIT"
] | null | null | null | tests/test_util.py | hsharrison/pyphase | adb3de4cb540553851c06b5d137a3a9c18cdf240 | [
"MIT"
] | null | null | null | import numpy as np
from pyphase import util
def test_wrap():
assert np.all(np.isclose(util.wrap([0, 2 * np.pi, -2 * np.pi, np.pi, -np.pi]),
[0, 0, 0, -np.pi, -np.pi]))
| 22.666667 | 82 | 0.519608 | 36 | 204 | 2.916667 | 0.444444 | 0.228571 | 0.171429 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041958 | 0.29902 | 204 | 8 | 83 | 25.5 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8353babb1759e0e6f7452203f444985464550b7 | 2,638 | py | Python | lightyear/functions/colors.py | divmain/lightyear | f328a2c5113edaab15565d0372e610aa8636ab90 | [
"MIT"
] | 1 | 2015-08-25T12:31:13.000Z | 2015-08-25T12:31:13.000Z | lightyear/functions/colors.py | divmain/lightyear | f328a2c5113edaab15565d0372e610aa8636ab90 | [
"MIT"
] | null | null | null | lightyear/functions/colors.py | divmain/lightyear | f328a2c5113edaab15565d0372e610aa8636ab90 | [
"MIT"
] | null | null | null | from decimal import Decimal
from colorsys import hls_to_rgb, rgb_to_hls
from . import bifunc
from ..ly_types import Color, Distance
@bifunc
def darken(env, color, amount):
if not isinstance(color, Color):
raise ValueError('Cannot darken non-color:', str(color))
if isinstance(amount, Decimal):
new_r = int(color._r - amount)
new_g = int(color._g - amount)
new_b = int(color._b - amount)
color._r, color._g, color._b = (color if color >= 0 else 0
for color in (new_r, new_g, new_b))
return color
elif isinstance(amount, Color):
new_r = int(color._r - amount._r)
new_g = int(color._g - amount._g)
new_b = int(color._b - amount._b)
color._r, color._g, color._b = (color if color >= 0 else 0
for color in (new_r, new_g, new_b))
return color
elif isinstance(amount, Distance) and amount.unit == '%':
r, g, b = float(color._r/255), float(color._g/255), float(color._b/255)
h, l, s = rgb_to_hls(r, g, b)
new_l = l * (100 - int(amount.value)) / 100
color._r, color._g, color._b = (
Decimal(255*c) if c >= 0 else Decimal(0)
for c in hls_to_rgb(h, new_l, s))
return color
raise ValueError('Cannot darken by value:', str(amount))
@bifunc
def lighten(env, color, amount):
if not isinstance(color, Color):
raise ValueError('Cannot lighten non-color:', str(color))
if isinstance(amount, Decimal):
new_r = int(color._r + amount)
new_g = int(color._g + amount)
new_b = int(color._b + amount)
color._r, color._g, color._b = (color if color <= 255 else 255
for color in (new_r, new_g, new_b))
return color
elif isinstance(amount, Color):
new_r = int(color._r + amount._r)
new_g = int(color._g + amount._g)
new_b = int(color._b + amount._b)
color._r, color._g, color._b = (color if color <= 255 else 255
for color in (new_r, new_g, new_b))
return color
elif isinstance(amount, Distance) and amount.unit == '%':
r, g, b = float(color._r/255), float(color._g/255), float(color._b/255)
h, l, s = rgb_to_hls(r, g, b)
new_l = l * (100 + int(amount.value)) / 100
color._r, color._g, color._b = (
Decimal(255*c) if c <= 255 else Decimal(255)
for c in hls_to_rgb(h, new_l, s))
return color
raise ValueError('Cannot lighten by value:', str(amount))
| 36.638889 | 79 | 0.566338 | 394 | 2,638 | 3.581218 | 0.119289 | 0.068037 | 0.046775 | 0.051028 | 0.860383 | 0.841956 | 0.841956 | 0.841956 | 0.841956 | 0.841956 | 0 | 0.033058 | 0.311979 | 2,638 | 71 | 80 | 37.15493 | 0.744353 | 0 | 0 | 0.551724 | 0 | 0 | 0.037149 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.068966 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |

# ccimport/__init__.py (FindDefinition/ccimport, MIT)
from .core import autoimport, ccimport

# zoffimzoo/cards.py (normanjaeckel/ZoffImZoo, MIT)
class Card:
    def __init__(self, name):
        self.name = name

ALL_CARDS = [
    Card("Elefant"),
    Card("Elefant"),
    Card("Elefant"),
    Card("Elefant"),
    Card("Elefant"),
    Card("Mücke"),
    Card("Mücke"),
    Card("Mücke"),
    Card("Mücke"),
    Card("Sardinen"),
    Card("Sardinen"),
    Card("Sardinen"),
    Card("Sardinen"),
    Card("Sardinen"),
    Card("Chamäleon"),
    Card("Barsch"),
    Card("Barsch"),
    Card("Barsch"),
    Card("Barsch"),
    Card("Barsch"),
    Card("Robbe"),
    Card("Robbe"),
    Card("Robbe"),
    Card("Robbe"),
    Card("Robbe"),
    Card("Eisbär"),
    Card("Eisbär"),
    Card("Eisbär"),
    Card("Eisbär"),
    Card("Eisbär"),
    Card("Krokodil"),
    Card("Krokodil"),
    Card("Krokodil"),
    Card("Krokodil"),
    Card("Krokodil"),
    Card("Orka"),
    Card("Orka"),
    Card("Orka"),
    Card("Orka"),
    Card("Orka"),
    Card("Löwe"),
    Card("Löwe"),
    Card("Löwe"),
    Card("Löwe"),
    Card("Löwe"),
    Card("Igel"),
    Card("Igel"),
    Card("Igel"),
    Card("Igel"),
    Card("Igel"),
    Card("Fuchs"),
    Card("Fuchs"),
    Card("Fuchs"),
    Card("Fuchs"),
    Card("Fuchs"),
    Card("Maus"),
    Card("Maus"),
    Card("Maus"),
    Card("Maus"),
    Card("Maus"),
]
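The deck above is each animal repeated a fixed number of times; a self-contained composition check under that reading (`CARD_COUNTS` is an illustrative name, counts read off the literal above):

```python
from collections import Counter

class Card:
    def __init__(self, name):
        self.name = name

# Counts read off the ALL_CARDS literal: most animals appear five times,
# "Mücke" four times and "Chamäleon" once, for 60 cards in total.
CARD_COUNTS = {"Elefant": 5, "Mücke": 4, "Sardinen": 5, "Chamäleon": 1,
               "Barsch": 5, "Robbe": 5, "Eisbär": 5, "Krokodil": 5,
               "Orka": 5, "Löwe": 5, "Igel": 5, "Fuchs": 5, "Maus": 5}
deck = [Card(name) for name, n in CARD_COUNTS.items() for _ in range(n)]
counts = Counter(card.name for card in deck)
```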

# licenses/management/commands/check_for_translation_updates.py (snehal199/cc-licenses, MIT)
from django.core.management import BaseCommand

from licenses.transifex import check_for_translation_updates


class Command(BaseCommand):
    def handle(self, **options):
        check_for_translation_updates()

# fancy/descriptor/__init__.py (susautw/fancy_descriptors, MIT)
from .method_descriptor_base import MethodDescriptor
from .method_descriptor_factories import MethodDescriptorFactories

# bitmovin_api_sdk/encoding/encodings/muxings/fmp4/drm/fairplay/__init__.py (jaythecaesarean/bitmovin-api-sdk-python, MIT)
from bitmovin_api_sdk.encoding.encodings.muxings.fmp4.drm.fairplay.fairplay_api import FairplayApi
from bitmovin_api_sdk.encoding.encodings.muxings.fmp4.drm.fairplay.customdata.customdata_api import CustomdataApi
from bitmovin_api_sdk.encoding.encodings.muxings.fmp4.drm.fairplay.fair_play_drm_list_query_params import FairPlayDrmListQueryParams

# Prescient/models/__init__.py (RamonWill/Data-App, MIT)
from Prescient.models.user import User, load_user
from Prescient.models.watchlist import (WatchlistItems,
                                        Watchlist_Group,
                                        default_date)
from Prescient.models.db_securities import (Available_Securities,
                                            Sector_Definitions)

# markov_chain_neigh.py (Ultimawashi/CMC-CPS, MIT)
import numpy as np
from tools import gauss, gauss2
from math import sqrt, pi
from markov_chain import simu_mc, simu_mc_nonstat, calc_probaprio_mc


def forward_neigh(A, p, gauss, g2, g3):
    """
    This function computes the forward values of the chain step by step (recursive in spirit, but it is not a recursive function!).
    :param A: (2*2) transition matrix of the chain
    :param p: vector of size 2 with the a priori probability of each class
    :param gauss: numpy array (length of signal_noisy)*2 with the Gaussian density values for each element of the noisy signal
    :param g2: Gaussian density values for the horizontal neighbours
    :param g3: Gaussian density values for the vertical neighbours
    :return: a vector whose length is the length of the chain, containing all forward values (from 1 to n)
    """
    proba2 = A @ g2[0]
    proba3 = A @ g3[0]
    forward = np.zeros((len(gauss), 2))
    forward[0] = p * (gauss[0] * proba2 * proba3)
    forward[0] = forward[0] / (forward[0].sum())
    for l in range(1, len(gauss)):
        proba2 = A @ g2[l]
        proba3 = A @ g3[l]
        forward[l] = (gauss[l] * proba2 * proba3) * (forward[l - 1] @ A)
        forward[l] = forward[l] / forward[l].sum()
    return forward


def backward_neigh(A, gauss, g2, g3):
    """
    This function computes the backward values of the chain step by step (recursive in spirit, but it is not a recursive function!).
    :param A: (2*2) transition matrix of the chain
    :param gauss: numpy array (length of signal_noisy)*2 with the Gaussian density values for each element of the noisy signal
    :param g2: Gaussian density values for the horizontal neighbours
    :param g3: Gaussian density values for the vertical neighbours
    :return: a vector whose length is the length of the chain, containing all backward values (from 1 to n).
    Note: although the backward values are computed from the end of the chain, the returned vector is ordered from start to end.
    """
    backward = np.zeros((len(gauss), 2))
    backward[len(gauss) - 1] = np.ones(2)
    backward[len(gauss) - 1] = backward[len(gauss) - 1] / (backward[len(gauss) - 1].sum())
    for k in reversed(range(0, len(gauss) - 1)):
        proba2 = A @ g2[k + 1]
        proba3 = A @ g3[k + 1]
        backward[k] = A @ (backward[k + 1] * (gauss[k + 1] * proba2 * proba3))
        backward[k] = backward[k] / (backward[k].sum())
    return backward


def mpm_mc_neigh(signal_noisy, neighboursh, neighboursv, w, p, A, m1, sig1, m2, sig2):
    """
    Applies the MPM method to recover the original signal from its noisy version and the model parameters.
    :param signal_noisy: noisy signal (1D numpy array of float)
    :param neighboursh: horizontal neighbours of each site
    :param neighboursv: vertical neighbours of each site
    :param w: vector whose first component is the value of class w1 and whose second is the value of class w2
    :param p: vector of size 2 with the a priori probability of each class
    :param A: (2*2) transition matrix of the chain
    :param m1: mean of the first Gaussian
    :param sig1: standard deviation of the first Gaussian
    :param m2: mean of the second Gaussian
    :param sig2: standard deviation of the second Gaussian
    :return: a two-class discrete signal (1D numpy array of int)
    """
    gausses = gauss(signal_noisy, m1, sig1, m2, sig2)
    g2 = gauss2(neighboursh, m1, sig1, m2, sig2)
    g3 = gauss2(neighboursv, m1, sig1, m2, sig2)
    alpha = forward_neigh(A, p, gausses, g2, g3)
    beta = backward_neigh(A, gausses, g2, g3)
    proba_apost = alpha * beta
    proba_apost = proba_apost / (proba_apost.sum(axis=1)[..., np.newaxis])
    return w[np.argmax(proba_apost, axis=1)]


def calc_param_EM_mc_neigh(signal_noisy, neighboursh, neighboursv, p, A, m1, sig1, m2, sig2):
    """
    Computes the newly estimated parameters for one iteration of EM.
    :param signal_noisy: noisy signal (1D numpy array of float)
    :param neighboursh: horizontal neighbours of each site
    :param neighboursv: vertical neighbours of each site
    :param p: vector of size 2 with the a priori probability of each class
    :param A: (2*2) transition matrix of the chain
    :param m1: mean of the first Gaussian
    :param sig1: standard deviation of the first Gaussian
    :param m2: mean of the second Gaussian
    :param sig2: standard deviation of the second Gaussian
    :return: all the re-estimated parameters, i.e. p, A, m1, sig1, m2, sig2
    """
    gausses = gauss(signal_noisy, m1, sig1, m2, sig2)
    g2 = gauss2(neighboursh, m1, sig1, m2, sig2)
    g3 = gauss2(neighboursv, m1, sig1, m2, sig2)
    proba2 = np.einsum('ij,kj->ki', A, g2)
    proba3 = np.einsum('ij,kj->ki', A, g3)
    alpha = forward_neigh(A, p, gausses, g2, g3)
    beta = backward_neigh(A, gausses, g2, g3)
    proba_apost = alpha * beta
    proba_apost = proba_apost / (proba_apost.sum(axis=1)[..., np.newaxis])
    p = proba_apost.sum(axis=0) / proba_apost.shape[0]
    proba_c_apost = (
        alpha[:-1, :, np.newaxis]
        * ((gausses[1:, np.newaxis, :] * proba2[1:, np.newaxis, :] * proba3[1:, np.newaxis, :])
           * beta[1:, np.newaxis, :]
           * A[np.newaxis, :, :])
    )
    proba_c_apost = proba_c_apost / (proba_c_apost.sum(axis=(1, 2))[..., np.newaxis, np.newaxis])
    A = np.transpose(np.transpose(proba_c_apost.sum(axis=0)) / (proba_apost[:-1:].sum(axis=0)))
    m1 = (proba_apost[:, 0] * signal_noisy).sum() / proba_apost[:, 0].sum()
    sig1 = np.sqrt((proba_apost[:, 0] * ((signal_noisy - m1) ** 2)).sum() / proba_apost[:, 0].sum())
    m2 = (proba_apost[:, 1] * signal_noisy).sum() / proba_apost[:, 1].sum()
    sig2 = np.sqrt((proba_apost[:, 1] * ((signal_noisy - m2) ** 2)).sum() / proba_apost[:, 1].sum())
    return p, A, m1, sig1, m2, sig2


def estim_param_EM_mc_neigh(iter, signal_noisy, neighboursh, neighboursv, p, A, m1, sig1, m2, sig2):
    """
    Implementation of the EM algorithm for this model.
    :param iter: chosen number of iterations
    :param signal_noisy: noisy signal (1D numpy array of float)
    :param neighboursh: horizontal neighbours of each site
    :param neighboursv: vertical neighbours of each site
    :param p: initial value of the probability vector
    :param A: initial value of the transition matrix of the chain
    :param m1: initial value of the mean of the first Gaussian
    :param sig1: initial value of the standard deviation of the first Gaussian
    :param m2: initial value of the mean of the second Gaussian
    :param sig2: initial value of the standard deviation of the second Gaussian
    :return: all the parameters re-estimated at the end of the EM algorithm, i.e. p, A, m1, sig1, m2, sig2
    """
    p_est = p
    A_est = A
    m1_est = m1
    sig1_est = sig1
    m2_est = m2
    sig2_est = sig2
    for i in range(iter):
        p_est, A_est, m1_est, sig1_est, m2_est, sig2_est = calc_param_EM_mc_neigh(
            signal_noisy, neighboursh, neighboursv,
            p_est, A_est, m1_est, sig1_est, m2_est, sig2_est)
        print({'iter': i, 'p': p_est, 'A': A_est, 'm1': m1_est, 'sig1': sig1_est, 'm2': m2_est, 'sig2': sig2_est})
    return p_est, A_est, m1_est, sig1_est, m2_est, sig2_est


def calc_param_SEM_mc_neigh(signal_noisy, neighboursh, neighboursv, p, A, m1, sig1, m2, sig2):
    """
    Computes the newly estimated parameters for one iteration of SEM.
    :param signal_noisy: noisy signal (1D numpy array of float)
    :param neighboursh: horizontal neighbours of each site
    :param neighboursv: vertical neighbours of each site
    :param p: vector of size 2 with the a priori probability of each class
    :param A: (2*2) transition matrix of the chain
    :param m1: mean of the first Gaussian
    :param sig1: standard deviation of the first Gaussian
    :param m2: mean of the second Gaussian
    :param sig2: standard deviation of the second Gaussian
    :return: all the re-estimated parameters, i.e. p, A, m1, sig1, m2, sig2
    """
    gausses = gauss(signal_noisy, m1, sig1, m2, sig2)
    g2 = gauss2(neighboursh, m1, sig1, m2, sig2)
    g3 = gauss2(neighboursv, m1, sig1, m2, sig2)
    proba2 = np.einsum('ij,kj->ki', A, g2)
    proba3 = np.einsum('ij,kj->ki', A, g3)
    alpha = forward_neigh(A, p, gausses, g2, g3)
    beta = backward_neigh(A, gausses, g2, g3)
    proba_init = alpha[0] * beta[0]
    proba_init = proba_init / proba_init.sum()
    tapost = (
        (gausses[1:, np.newaxis, :] * proba2[1:, np.newaxis, :] * proba3[1:, np.newaxis, :])
        * beta[1:, np.newaxis, :]
        * A[np.newaxis, :, :]
    )
    tapost = tapost / tapost.sum(axis=2)[..., np.newaxis]
    signal = simu_mc_nonstat(signal_noisy.shape[0], proba_init, tapost)
    p, A = calc_probaprio_mc(signal, np.array([0, 1]))
    m1 = ((signal == 0) * signal_noisy).sum() / (signal == 0).sum()
    sig1 = np.sqrt(((signal == 0) * ((signal_noisy - m1) ** 2)).sum() / (signal == 0).sum())
    m2 = ((signal == 1) * signal_noisy).sum() / (signal == 1).sum()
    sig2 = np.sqrt(((signal == 1) * ((signal_noisy - m2) ** 2)).sum() / (signal == 1).sum())
    return p, A, m1, sig1, m2, sig2


def estim_param_SEM_mc_neigh(iter, signal_noisy, neighboursh, neighboursv, p, A, m1, sig1, m2, sig2):
    """
    Implementation of the SEM algorithm for this model.
    :param iter: chosen number of iterations
    :param signal_noisy: noisy signal (1D numpy array of float)
    :param neighboursh: horizontal neighbours of each site
    :param neighboursv: vertical neighbours of each site
    :param p: initial value of the probability vector
    :param A: initial value of the transition matrix of the chain
    :param m1: initial value of the mean of the first Gaussian
    :param sig1: initial value of the standard deviation of the first Gaussian
    :param m2: initial value of the mean of the second Gaussian
    :param sig2: initial value of the standard deviation of the second Gaussian
    :return: all the parameters re-estimated at the end of the SEM algorithm, i.e. p, A, m1, sig1, m2, sig2
    """
    p_est = p
    A_est = A
    m1_est = m1
    sig1_est = sig1
    m2_est = m2
    sig2_est = sig2
    for i in range(iter):
        p_est, A_est, m1_est, sig1_est, m2_est, sig2_est = calc_param_SEM_mc_neigh(
            signal_noisy, neighboursh, neighboursv,
            p_est, A_est, m1_est, sig1_est, m2_est, sig2_est)
        print({'iter': i, 'p': p_est, 'A': A_est, 'm1': m1_est, 'sig1': sig1_est, 'm2': m2_est, 'sig2': sig2_est})
    return p_est, A_est, m1_est, sig1_est, m2_est, sig2_est
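forward_neigh/backward_neigh above extend the standard HMM forward-backward recursions with the neighbour factors proba2/proba3. Stripped of those factors, the per-step normalised forward pass reduces to this minimal self-contained sketch (toy parameters; `forward_plain` is an illustrative name, not part of the module):

```python
import numpy as np

def forward_plain(A, p, g):
    """Normalised forward pass for a 2-state chain without neighbour terms."""
    fwd = np.zeros_like(g)
    fwd[0] = p * g[0]
    fwd[0] /= fwd[0].sum()
    for t in range(1, len(g)):
        # Same shape of update as forward_neigh, minus proba2/proba3.
        fwd[t] = g[t] * (fwd[t - 1] @ A)
        fwd[t] /= fwd[t].sum()
    return fwd

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # toy transition matrix
p = np.array([0.5, 0.5])                 # toy prior
g = np.array([[0.8, 0.1], [0.7, 0.2], [0.1, 0.9]])  # toy per-step likelihoods
fwd = forward_plain(A, p, g)
```

Normalising each row keeps the recursion numerically stable on long chains; each row of `fwd` sums to 1 by construction.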

# jobs/__init__.py (eternalflow/push-money, MIT)
from jobs.mailer import *

# app/api_1_0/__init__.py (SherlockSheep/Snacks, MIT)
from flask import Blueprint
api = Blueprint('api', __name__)
from . import authentication, posts, users, comments, errors, register, tags, snacks, load_pic

# src/process/process/domain/base/operation/__init__.py (ahmetcagriakca/pythondataintegrator, MIT)
from process.domain.base.operation.DataOperationBase import DataOperationBase
from process.domain.base.operation.DataOperationContactBase import DataOperationContactBase
from process.domain.base.operation.DataOperationIntegrationBase import DataOperationIntegrationBase
from process.domain.base.operation.DataOperationJobBase import DataOperationJobBase
from process.domain.base.operation.DataOperationJobExecutionBase import DataOperationJobExecutionBase
from process.domain.base.operation.DataOperationJobExecutionEventBase import DataOperationJobExecutionEventBase
from process.domain.base.operation.DataOperationJobExecutionIntegrationBase import \
    DataOperationJobExecutionIntegrationBase
from process.domain.base.operation.DataOperationJobExecutionIntegrationEventBase import \
    DataOperationJobExecutionIntegrationEventBase
from process.domain.base.operation.DefinitionBase import DefinitionBase

# ppq/log/__init__.py (xiguadong/ppq, Apache-2.0)
from .logger import NaiveLogger

# fastapi_discord/models/__init__.py (abhishek0220/fastapi-discord, MIT)
from .guild import Guild
from .user import User

# examples/planar_hand/analysis/plot_cost.py (lujieyang/irs_lqr, MIT)
import numpy as np
import matplotlib.pyplot as plt

exact = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_exact.csv",
    delimiter=",")
first_order = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_first_order.csv",
    delimiter=",")
zero_order_B = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_zero_order_B.csv",
    delimiter=",")
zero_order_AB = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_zero_order_AB.csv",
    delimiter=",")
plt.figure()
plt.plot(exact, marker='x', color='red', label='exact')
plt.plot(first_order, marker='v', color='springgreen', label='first order')
plt.plot(zero_order_AB, marker='^', color='blue', label='zero order')
#plt.plot(zero_order_AB, marker='+', color='magenta', label='zero order_AB')
plt.legend()
plt.xlabel('iterations')
plt.ylabel('Cost')
plt.title("Planar Hand (Move Right)")
plt.grid()
plt.show()

exact = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_spin_exact.csv",
    delimiter=",")
first_order = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_spin_first_order.csv",
    delimiter=",")
zero_order_B = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_spin_zero_order_B.csv",
    delimiter=",")
zero_order_AB = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_spin_zero_order_AB.csv",
    delimiter=",")
plt.figure()
plt.plot(exact, marker='x', color='red', label='exact')
plt.plot(first_order, marker='v', color='springgreen', label='first order')
plt.plot(zero_order_AB, marker='^', color='blue', label='zero order')
#plt.plot(zero_order_AB, marker='^', color='magenta', label='zero order_AB')
plt.legend()
plt.xlabel('iterations')
plt.ylabel('Cost')
plt.title("Planar Hand (Spin In-Place)")
plt.grid()
plt.show()

exact = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_spin_second_exact.csv",
    delimiter=",")
first_order = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_spin_second_first.csv",
    delimiter=",")
zero_order = np.loadtxt(
    "examples/planar_hand/analysis/planar_hand_spin_second_zero.csv",
    delimiter=",")
plt.figure()
plt.plot(exact, marker='x', color='red', label='exact')
plt.plot(first_order, marker='v', color='springgreen', label='first order')
plt.plot(zero_order, marker='^', color='blue', label='zero order')
plt.legend()
plt.yscale('log')
plt.xlabel('iterations')
plt.ylabel('Cost (log scale)')
plt.title("Planar Hand (Spin In-hand, Second-Order Sim)")
plt.grid()
plt.show()
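The three figure blocks above repeat the same plotting boilerplate; one way to deduplicate them is a small helper that takes already-loaded cost arrays (`plot_costs` and `STYLES` are hypothetical names, not part of the original script):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# One (marker, color) pair per curve, matching the script's conventions.
STYLES = [("x", "red"), ("v", "springgreen"), ("^", "blue")]

def plot_costs(costs, labels, title, log_scale=False):
    fig, ax = plt.subplots()
    for (marker, color), cost, label in zip(STYLES, costs, labels):
        ax.plot(cost, marker=marker, color=color, label=label)
    ax.legend()
    if log_scale:
        ax.set_yscale("log")
    ax.set_xlabel("iterations")
    ax.set_ylabel("Cost")
    ax.set_title(title)
    ax.grid()
    return fig

# Toy arrays stand in for the np.loadtxt results.
fig = plot_costs([[3, 2, 1], [4, 2, 1], [5, 3, 1]],
                 ["exact", "first order", "zero order"],
                 "Planar Hand (Move Right)")
```

Each figure then becomes a single `plot_costs(...)` call, and style changes happen in one place.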

# annom/__init__.py (jcreinhold/annom, Apache-2.0)
from .errors import *
from .layers import *
from .loss import *
from .models import *

# src/pyannet/__init__.py (neo0311/pyannet, MIT)
import pyannet.data_prep
import pyannet.neural_network

# versioning/__init__.py (nbag/python_utils, MIT)
from .minimal_ext_cmd import minimal_ext_cmd
from .pairing import n_pairing, reverse_n_pairing
from .git_version import MAIN_BRANCHES, git_version, update_git_version

# __init__.py (mobilityhouse/cloudwatch_to_elastic, MIT)
from es_store import lambda_handler

# tests/test_gui_navigation_frame.py (rbotter/pyDEA, MIT)
import pytest
from pyDEA.core.gui_modules.navigation_frame_gui import NavigationForTableFrame


class TableFrameMock(object):
    def __init__(self):
        self.nb_rows = 100
        self.display_data_called = False

    def display_data(self, row_index):
        self.display_data_called = True


@pytest.fixture
def nav_frame(request):
    nav_frame = NavigationForTableFrame(None, TableFrameMock())
    request.addfinalizer(nav_frame.destroy)
    return nav_frame


def test_reset_navigation(nav_frame):
    nav_frame.reset_navigation()
    assert nav_frame.current_page_str.get() == '1'
    assert nav_frame.text_var_nb_pages.get() == '1 pages'
    assert nav_frame.goto_spin.cget('to') == 1


def test_set_navigation(nav_frame):
    nav_frame.set_navigation(10)
    assert nav_frame.current_page_str.get() == '1'
    assert nav_frame.text_var_nb_pages.get() == '10 pages'
    assert nav_frame.goto_spin.cget('to') == 10


def test_on_page_change(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set(3)
    nav_frame.on_page_change()
    assert nav_frame.table.display_data_called is True


def test_on_page_change_more_than_max(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set(7)
    nav_frame.on_page_change()
    assert nav_frame.table.display_data_called is True
    assert nav_frame.current_page_str.get() == '5'


def test_on_page_change_negative(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set(-7)
    nav_frame.on_page_change()
    assert nav_frame.table.display_data_called is True
    assert nav_frame.current_page_str.get() == '1'


def test_on_page_change_invalid(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set('text')
    nav_frame.on_page_change()
    assert nav_frame.table.display_data_called is True
    assert nav_frame.current_page_str.get() == '1'


def test_show_prev_page_ok(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set(3)
    nav_frame.show_prev_page()
    assert nav_frame.current_page_str.get() == '2'
    assert nav_frame.table.display_data_called is True


def test_show_prev_page_invalid(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set(1)
    nav_frame.show_prev_page()
    assert nav_frame.current_page_str.get() == '1'
    assert nav_frame.table.display_data_called is False


def test_show_next_page_ok(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set(3)
    nav_frame.show_next_page()
    assert nav_frame.current_page_str.get() == '4'
    assert nav_frame.table.display_data_called is True


def test_show_next_page_invalid(nav_frame):
    nav_frame.set_navigation(5)
    nav_frame.current_page_str.set(5)
    nav_frame.show_next_page()
    # Completed from the show_prev_page_invalid pattern: the chunk was
    # truncated here; staying on the last page should not redisplay data.
    assert nav_frame.current_page_str.get() == '5'
    assert nav_frame.table.display_data_called is False
nav_frame.show_next_page()
assert nav_frame.current_page_str.get() == '5'
assert nav_frame.table.display_data_called is False
| 29.581633 | 79 | 0.759917 | 459 | 2,899 | 4.372549 | 0.148148 | 0.243149 | 0.146487 | 0.160937 | 0.796712 | 0.732436 | 0.717987 | 0.717987 | 0.684106 | 0.648231 | 0 | 0.014136 | 0.145912 | 2,899 | 97 | 80 | 29.886598 | 0.796446 | 0 | 0 | 0.485714 | 0 | 0 | 0.011038 | 0 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.185714 | false | 0 | 0.028571 | 0 | 0.242857 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b5b3a24a809eff6fc31cb4a731d5f3a3ec485726 | 372 | py | Python | src/bfs/python/simple/test.py | zakmandhro/Algorithms | de828b6dba9f3cbaf1cc0775c1ade03de57c8de1 | [
"MIT"
] | 13 | 2018-03-25T16:00:01.000Z | 2022-03-07T23:10:32.000Z | src/bfs/python/simple/test.py | zakmandhro/Algorithms | de828b6dba9f3cbaf1cc0775c1ade03de57c8de1 | [
"MIT"
] | 1 | 2022-02-26T20:10:48.000Z | 2022-02-26T20:10:48.000Z | src/bfs/python/simple/test.py | zakmandhro/Algorithms | de828b6dba9f3cbaf1cc0775c1ade03de57c8de1 | [
"MIT"
] | 5 | 2021-06-02T05:43:13.000Z | 2022-02-20T11:04:54.000Z | from bfs import bfs
graph = [
[False, True, True, False, False, False],
[False, False, False, True, True, False],
[False, True, False, True, False, False],
[False, False, False, False, True, True],
[False, False, False, False, False, True],
[False, False, False, False, False, False],
]
# Should be [0, 1, 1, 2, 2, 3]
print(bfs(graph, 0))
| 21.882353 | 47 | 0.591398 | 53 | 372 | 4.150943 | 0.245283 | 0.909091 | 1.022727 | 1 | 0.759091 | 0.759091 | 0.759091 | 0.759091 | 0.472727 | 0.472727 | 0 | 0.024648 | 0.236559 | 372 | 16 | 48 | 23.25 | 0.75 | 0.075269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a9112a74fa934aa95a65c0bb5389331901a45f8f | 35 | py | Python | deepab/layers/__init__.py | antonkulaga/DeepAb | 51a32d06d19815705bdbfb35a8a9518c17ec313a | [
"RSA-MD"
] | 67 | 2021-07-02T08:31:10.000Z | 2022-03-30T01:25:11.000Z | deepab/layers/__init__.py | antonkulaga/DeepAb | 51a32d06d19815705bdbfb35a8a9518c17ec313a | [
"RSA-MD"
] | 9 | 2021-08-18T10:32:27.000Z | 2022-03-30T06:40:05.000Z | deepab/layers/__init__.py | antonkulaga/DeepAb | 51a32d06d19815705bdbfb35a8a9518c17ec313a | [
"RSA-MD"
] | 16 | 2021-07-17T08:33:30.000Z | 2022-03-29T07:36:34.000Z | from .OuterConcatenation2D import * | 35 | 35 | 0.857143 | 3 | 35 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.085714 | 35 | 1 | 35 | 35 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |