hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f8b1bf0eccf99ba18978785e889e1fca58dc7e6c | 72 | py | Python | serve.py | JamesGear/RFIDLOCK | 4cb40e4d1ec58660b1b76adcd9999d5de4ea55e1 | [
"MIT"
] | 955 | 2015-01-01T21:27:47.000Z | 2022-03-29T11:55:44.000Z | serve.py | JamesGear/RFIDLOCK | 4cb40e4d1ec58660b1b76adcd9999d5de4ea55e1 | [
"MIT"
] | 113 | 2015-02-02T23:29:04.000Z | 2021-08-01T13:18:05.000Z | serve.py | JamesGear/RFIDLOCK | 4cb40e4d1ec58660b1b76adcd9999d5de4ea55e1 | [
"MIT"
] | 143 | 2015-01-14T17:02:41.000Z | 2022-03-06T15:51:06.000Z | from app import create_app, config
app = create_app(config.dev_config)
| 18 | 35 | 0.805556 | 12 | 72 | 4.583333 | 0.5 | 0.327273 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 72 | 3 | 36 | 24 | 0.873016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
3e38af3c8e472bf9595c1b09c4d3cfd818088096 | 1,137 | py | Python | jirator/userdata.py | hawry/jirator | afdd22c27f3478b342f46646b21a4e017bfb4370 | [
"MIT"
] | 2 | 2019-06-09T17:19:21.000Z | 2019-06-11T08:44:28.000Z | jirator/userdata.py | hawry/jirator | afdd22c27f3478b342f46646b21a4e017bfb4370 | [
"MIT"
] | 2 | 2019-05-24T07:58:19.000Z | 2019-06-19T08:09:33.000Z | jirator/userdata.py | hawry/jirator | afdd22c27f3478b342f46646b21a4e017bfb4370 | [
"MIT"
] | null | null | null | from os.path import expanduser
from constant import CONFIG_DIR
import json
class UserData():
homedir = expanduser("~")
configinfo = {}
def __init__(self):
self._load()
def _load(self):
with open(self.homedir + CONFIG_DIR) as fh:
self.configinfo = json.load(fh)
def server(self):
return self.configinfo["server"]
def username(self):
return self.configinfo["username"]
def password(self):
return self.configinfo["password"]
def statuses(self):
return self.configinfo["status"]
def default_transition_id(self):
if "dtid" not in self._runtime():
return None
return self._runtime()["dtid"]
def save_default_tid(self, tid):
if "runtime" not in self.configinfo:
self.configinfo["runtime"] = {}
self.configinfo["runtime"]["dtid"] = tid
    # open in write mode so shorter JSON fully replaces the old file contents
    with open(self.homedir + CONFIG_DIR, "w") as fh:
json.dump(self.configinfo, fh, indent=4)
def _runtime(self):
if "runtime" not in self.configinfo:
return {}
return self.configinfo["runtime"]
| 25.840909 | 56 | 0.60686 | 133 | 1,137 | 5.067669 | 0.315789 | 0.228487 | 0.148368 | 0.142433 | 0.166172 | 0.166172 | 0 | 0 | 0 | 0 | 0 | 0.001211 | 0.273527 | 1,137 | 43 | 57 | 26.44186 | 0.81477 | 0 | 0 | 0.060606 | 0 | 0 | 0.068602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.060606 | 0.090909 | 0.121212 | 0.69697 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
3e4d04227c0b6ad1d072f83d801dcc34305361b8 | 148 | py | Python | net/connections/__init__.py | aldmbmtl/net | 6fa7058e84309a61f71224ede6f4d659e741d2c8 | [
"MIT"
] | null | null | null | net/connections/__init__.py | aldmbmtl/net | 6fa7058e84309a61f71224ede6f4d659e741d2c8 | [
"MIT"
] | 11 | 2019-03-23T02:40:19.000Z | 2022-03-30T21:09:14.000Z | net/connections/__init__.py | aldmbmtl/net | 6fa7058e84309a61f71224ede6f4d659e741d2c8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Connection package for net."""
from .flag import *
from .connect import *
from .subscribe import *
from .event import *
| 18.5 | 33 | 0.662162 | 19 | 148 | 5.157895 | 0.684211 | 0.306122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008197 | 0.175676 | 148 | 7 | 34 | 21.142857 | 0.795082 | 0.337838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e4197405665c19facb5a0fb60c62f7e2b9349d57 | 147 | py | Python | essmc2/datasets/videos/__init__.py | huang-ziyuan/EssentialMC2 | 87141df94c1ac8e426ceec071720b97f5b9d3b88 | [
"MIT"
] | 69 | 2021-11-01T11:18:13.000Z | 2022-03-28T04:27:17.000Z | essmc2/datasets/videos/__init__.py | huang-ziyuan/EssentialMC2 | 87141df94c1ac8e426ceec071720b97f5b9d3b88 | [
"MIT"
] | 6 | 2021-11-01T09:28:13.000Z | 2022-02-11T09:49:58.000Z | essmc2/datasets/videos/__init__.py | huang-ziyuan/EssentialMC2 | 87141df94c1ac8e426ceec071720b97f5b9d3b88 | [
"MIT"
] | 16 | 2021-11-11T06:26:18.000Z | 2022-03-20T13:32:15.000Z | # Copyright 2021 Alibaba Group Holding Limited. All Rights Reserved.
from .hmdb51 import Hmdb51
from .ucf101 import UCF101
from .ssv2 import SSV2
| 24.5 | 68 | 0.802721 | 21 | 147 | 5.619048 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 0.156463 | 147 | 5 | 69 | 29.4 | 0.822581 | 0.44898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e470e6b04572596bef50d4061f48fe3b6b7780c1 | 170 | py | Python | src/demos/greedy/gas.py | DavidLlorens/algoritmia | 40ca0a89ea6de9b633fa5f697f0a28cae70816a2 | [
"MIT"
] | 6 | 2018-09-15T15:09:10.000Z | 2022-02-27T01:23:11.000Z | src/demos/greedy/gas.py | JeromeIllgner/algoritmia | 406afe7206f2411557859bf03480c16db7dcce0d | [
"MIT"
] | null | null | null | src/demos/greedy/gas.py | JeromeIllgner/algoritmia | 406afe7206f2411557859bf03480c16db7dcce0d | [
"MIT"
] | 5 | 2018-07-10T20:19:55.000Z | 2021-03-31T03:32:22.000Z | #coding: latin1
#< full
from algoritmia.problems.gas import GasStationRoutePlanner
print(GasStationRoutePlanner().plan([65, 23, 45, 62, 12, 56, 26], 150))
#> full | 24.285714 | 72 | 0.711765 | 21 | 170 | 5.761905 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124138 | 0.147059 | 170 | 7 | 73 | 24.285714 | 0.710345 | 0.152941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
e48a184318c8aada33838b1192cf75b767bf9edb | 158 | py | Python | tests/errors/syntax_blockers/functions_in_uniontype.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | [
"MIT"
] | 206 | 2018-06-28T00:28:47.000Z | 2022-03-29T05:17:03.000Z | tests/errors/syntax_blockers/functions_in_uniontype.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | [
"MIT"
] | 670 | 2018-07-23T11:02:24.000Z | 2022-03-30T07:28:05.000Z | tests/errors/syntax_blockers/functions_in_uniontype.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | [
"MIT"
] | 19 | 2019-09-19T06:01:00.000Z | 2022-03-29T05:17:06.000Z | # pylint: disable=missing-function-docstring, missing-module-docstring/
#$ header function f((int)(int)|(real)(real), int|real)
def f(g, a):
return g(a)
| 26.333333 | 71 | 0.689873 | 24 | 158 | 4.541667 | 0.583333 | 0.12844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120253 | 158 | 5 | 72 | 31.6 | 0.784173 | 0.778481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
e48afc4689ff13761018e0ecb31b51079689a1bc | 216 | py | Python | autocomplete_light/apps.py | julyzergcn/django-autocomplete-light-2.3.3 | 1043af9dc463bc97e0d6bf35f24133c6f7f42700 | [
"MIT"
] | null | null | null | autocomplete_light/apps.py | julyzergcn/django-autocomplete-light-2.3.3 | 1043af9dc463bc97e0d6bf35f24133c6f7f42700 | [
"MIT"
] | 2 | 2021-03-31T18:52:30.000Z | 2021-12-13T19:50:13.000Z | autocomplete_light/apps.py | julyzergcn/django-autocomplete-light-2.3.3 | 1043af9dc463bc97e0d6bf35f24133c6f7f42700 | [
"MIT"
] | null | null | null | from django.apps import AppConfig
class AutocompleteLightConfig(AppConfig):
name = 'autocomplete_light'
def ready(self):
from autocomplete_light.registry import autodiscover
autodiscover()
| 21.6 | 60 | 0.740741 | 21 | 216 | 7.52381 | 0.714286 | 0.21519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.199074 | 216 | 9 | 61 | 24 | 0.913295 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e4a1dc3b938d1f48e717dea2aae2f3c6a2b42d62 | 205 | py | Python | NST/models/__init__.py | VITA-Group/Sandwich-Batch-Normalization | 25e7df6e64a67cebd7e70b911f874cfc1bd19df0 | [
"MIT"
] | 46 | 2021-02-20T18:49:46.000Z | 2022-03-24T08:46:20.000Z | NST/models/__init__.py | VITA-Group/Sandwich-Batch-Normalization | 25e7df6e64a67cebd7e70b911f874cfc1bd19df0 | [
"MIT"
] | null | null | null | NST/models/__init__.py | VITA-Group/Sandwich-Batch-Normalization | 25e7df6e64a67cebd7e70b911f874cfc1bd19df0 | [
"MIT"
] | 3 | 2021-02-23T07:28:12.000Z | 2021-02-26T01:09:56.000Z | # -*- coding: utf-8 -*-
# @Date : 2/16/21
# @Author : Xinyu Gong (xinyu.gong@utexas.edu)
# @Link : None
# @Version : 0.0
from .network import AdaINNet, SaAdaINNet
from .modules import vgg, decoder
| 22.777778 | 47 | 0.639024 | 29 | 205 | 4.517241 | 0.827586 | 0.137405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0.2 | 205 | 8 | 48 | 25.625 | 0.75 | 0.570732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e4ad5547911eecc4cbf1c0b195b4c340e3437382 | 407 | py | Python | barique/commands/cmd_file.py | mboudet/barique | 32cb1a0c8bd077690f8edf243ebc8dfefef07571 | [
"MIT"
] | null | null | null | barique/commands/cmd_file.py | mboudet/barique | 32cb1a0c8bd077690f8edf243ebc8dfefef07571 | [
"MIT"
] | null | null | null | barique/commands/cmd_file.py | mboudet/barique | 32cb1a0c8bd077690f8edf243ebc8dfefef07571 | [
"MIT"
] | null | null | null | import click
from barique.commands.file.freeze import cli as freeze
from barique.commands.file.list import cli as list
from barique.commands.file.tree import cli as tree
from barique.commands.file.pull import cli as pull
@click.group()
def cli():
"""
Manipulate files managed by Baricadr
"""
pass
cli.add_command(freeze)
cli.add_command(list)
cli.add_command(tree)
cli.add_command(pull)
| 20.35 | 54 | 0.759214 | 64 | 407 | 4.765625 | 0.34375 | 0.144262 | 0.24918 | 0.301639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149877 | 407 | 19 | 55 | 21.421053 | 0.881503 | 0.088452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | true | 0.083333 | 0.416667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 5 |
e4aee4b4438c66b7228f9fdb61f874473f8d3ea2 | 183 | py | Python | yandex_images_download/__init__.py | lebionick/yandex-images-download | a35c0c1f8a8778ee11b45fedef75de89c635f95d | [
"MIT"
] | 58 | 2019-08-20T18:09:50.000Z | 2022-03-04T05:47:40.000Z | yandex_images_download/__init__.py | lebionick/yandex-images-download | a35c0c1f8a8778ee11b45fedef75de89c635f95d | [
"MIT"
] | 6 | 2020-03-02T16:34:56.000Z | 2021-09-21T23:17:56.000Z | yandex_images_download/__init__.py | lebionick/yandex-images-download | a35c0c1f8a8778ee11b45fedef75de89c635f95d | [
"MIT"
] | 18 | 2019-08-21T20:17:39.000Z | 2021-12-16T10:02:59.000Z | from __future__ import absolute_import
def run_main():
from yandex_images_download.yandex_images_download import main
main()
if __name__ == '__main__':
run_main() | 22.875 | 67 | 0.73224 | 23 | 183 | 5 | 0.521739 | 0.121739 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 183 | 8 | 68 | 22.875 | 0.782313 | 0 | 0 | 0 | 0 | 0 | 0.045198 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
e4b0a00174a93d3e97264ea397bcb65cf5ec89e0 | 120 | py | Python | nksama/utils/sendlog.py | Punnisher80/Komi-San | 90aeeeb0503573d81f20556ba3e1c3dafe9a40d4 | [
"Apache-2.0"
] | 31 | 2021-10-02T15:19:38.000Z | 2022-03-24T11:55:24.000Z | nksama/utils/sendlog.py | Punnisher80/Komi-San | 90aeeeb0503573d81f20556ba3e1c3dafe9a40d4 | [
"Apache-2.0"
] | 5 | 2021-10-03T12:33:24.000Z | 2022-02-05T15:28:55.000Z | nksama/utils/sendlog.py | Punnisher80/Komi-San | 90aeeeb0503573d81f20556ba3e1c3dafe9a40d4 | [
"Apache-2.0"
] | 59 | 2021-10-02T15:19:48.000Z | 2022-03-10T10:35:21.000Z | from nksama import bot
def send_log(err, module):
bot.send_message(-1001646296281, f"error in {module}\n\n{err}")
| 20 | 67 | 0.716667 | 20 | 120 | 4.2 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126214 | 0.141667 | 120 | 5 | 68 | 24 | 0.68932 | 0 | 0 | 0 | 0 | 0 | 0.216667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e4b9eb08085908526f1e6769e5825363ff68bbf2 | 207 | py | Python | intro_to_wc_modeling/concepts_skills/software_engineering/databases/__init__.py | KarrLab/python_package_tutorial | dd20e0d3056138904e7e7fbbf6bb884d64dbf8f6 | [
"MIT"
] | 15 | 2018-01-06T11:33:01.000Z | 2022-03-01T15:18:40.000Z | intro_to_wc_modeling/concepts_skills/software_engineering/databases/__init__.py | KarrLab/python_package_tutorial | dd20e0d3056138904e7e7fbbf6bb884d64dbf8f6 | [
"MIT"
] | 2 | 2018-01-30T23:21:12.000Z | 2018-03-23T20:22:06.000Z | intro_to_wc_modeling/concepts_skills/software_engineering/databases/__init__.py | KarrLab/python_package_tutorial | dd20e0d3056138904e7e7fbbf6bb884d64dbf8f6 | [
"MIT"
] | 8 | 2018-01-08T21:40:19.000Z | 2022-01-04T14:48:02.000Z | from .core import (Base, specie_reaction, Organism, Compartment, Specie, Reaction,
create_database, create_session, insert_records, query_database,
edit_database, main)
| 51.75 | 83 | 0.676329 | 21 | 207 | 6.380952 | 0.761905 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.256039 | 207 | 3 | 84 | 69 | 0.87013 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9005171041baf3f2f1fd0d52e4d9d73e3e3855e8 | 140 | py | Python | importexport/models/__init__.py | ampafdv/ampadb | 25c804a5cb21afcbe4e222a3b48cca27ff2d9e19 | [
"MIT"
] | null | null | null | importexport/models/__init__.py | ampafdv/ampadb | 25c804a5cb21afcbe4e222a3b48cca27ff2d9e19 | [
"MIT"
] | 28 | 2016-10-21T16:04:56.000Z | 2018-11-10T20:55:40.000Z | importexport/models/__init__.py | ampafdv/ampadb | 25c804a5cb21afcbe4e222a3b48cca27ff2d9e19 | [
"MIT"
] | 2 | 2016-10-22T19:24:45.000Z | 2017-02-11T10:49:02.000Z | from .models import ImportData, IesImport, ClassMap
from .changes import CanviImportacio, AddAlumne, MoveAlumne, DeleteAlumne, DeleteClasse
| 46.666667 | 87 | 0.842857 | 14 | 140 | 8.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 140 | 2 | 88 | 70 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5f58c6b7776e359387816e9be966f6a74864f5dc | 128 | py | Python | app/familias/__init__.py | originaltebas/chmembers | 983578ec8cb6d1da76e98b1467d996d6fac752ee | [
"MIT"
] | null | null | null | app/familias/__init__.py | originaltebas/chmembers | 983578ec8cb6d1da76e98b1467d996d6fac752ee | [
"MIT"
] | 2 | 2021-09-08T01:19:10.000Z | 2022-03-11T23:59:40.000Z | app/familias/__init__.py | originaltebas/chmembers | 983578ec8cb6d1da76e98b1467d996d6fac752ee | [
"MIT"
] | 1 | 2019-04-09T10:42:20.000Z | 2019-04-09T10:42:20.000Z | # app/familias/__init__.py
from flask import Blueprint
familias = Blueprint('familias', __name__)
from . import views | 16 | 43 | 0.726563 | 15 | 128 | 5.666667 | 0.666667 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 128 | 8 | 44 | 16 | 0.817308 | 0.1875 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
5f81ea5aa21400d309dc90cb90458eecca18246f | 175 | py | Python | typeit/schema/__init__.py | avanov/type | dbf2a94de13b592987695b7346f10cbf53acf3af | [
"MIT"
] | 8 | 2018-06-17T16:01:12.000Z | 2021-11-05T23:34:55.000Z | typeit/schema/__init__.py | avanov/type | dbf2a94de13b592987695b7346f10cbf53acf3af | [
"MIT"
] | 71 | 2018-06-23T15:31:56.000Z | 2021-03-09T16:56:50.000Z | typeit/schema/__init__.py | avanov/type | dbf2a94de13b592987695b7346f10cbf53acf3af | [
"MIT"
] | 1 | 2021-11-05T23:34:57.000Z | 2021-11-05T23:34:57.000Z | from . import meta
from . import primitives
from . import types
from . import nodes
from .errors import Invalid
__all__ = ['meta', 'primitives', 'types', 'nodes', 'Invalid']
| 21.875 | 61 | 0.714286 | 22 | 175 | 5.5 | 0.409091 | 0.330579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 175 | 7 | 62 | 25 | 0.823129 | 0 | 0 | 0 | 0 | 0 | 0.177143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
5f8d1337cdf3699f84e126138d9ecaef2f61123a | 154 | py | Python | utils/namedtuples.py | sks-sys/djangocicd | c5b1c5b11b38ebd1be1cb2f138ca21e976282ab8 | [
"MIT"
] | 1 | 2022-02-13T06:13:47.000Z | 2022-02-13T06:13:47.000Z | utils/namedtuples.py | sks-sys/djangocicd | c5b1c5b11b38ebd1be1cb2f138ca21e976282ab8 | [
"MIT"
] | null | null | null | utils/namedtuples.py | sks-sys/djangocicd | c5b1c5b11b38ebd1be1cb2f138ca21e976282ab8 | [
"MIT"
] | null | null | null | from typing import NamedTuple, Optional
class Checking(NamedTuple):
passed: bool = True
message: Optional[str] = None
params: dict = dict()
| 19.25 | 39 | 0.694805 | 18 | 154 | 5.944444 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 154 | 7 | 40 | 22 | 0.884298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
5f8d6ade54457d55fb1f33edd8fb27861c609055 | 33 | py | Python | src/w1therm2influx/__init__.py | rkschamer/w1therm2influx | b72e33a7632b9d6516252a924b49a3844f1b3220 | [
"MIT"
] | null | null | null | src/w1therm2influx/__init__.py | rkschamer/w1therm2influx | b72e33a7632b9d6516252a924b49a3844f1b3220 | [
"MIT"
] | null | null | null | src/w1therm2influx/__init__.py | rkschamer/w1therm2influx | b72e33a7632b9d6516252a924b49a3844f1b3220 | [
"MIT"
] | null | null | null | from .core import ValueCollector
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
5f9649861b9c44994b6a48765a158de1066a97aa | 262 | py | Python | courses/serializers.py | Saidounelson/djangoapi | 45ce8bdbb1bcc223caeb96bf57b0b685fca992cb | [
"MIT"
] | null | null | null | courses/serializers.py | Saidounelson/djangoapi | 45ce8bdbb1bcc223caeb96bf57b0b685fca992cb | [
"MIT"
] | null | null | null | courses/serializers.py | Saidounelson/djangoapi | 45ce8bdbb1bcc223caeb96bf57b0b685fca992cb | [
"MIT"
] | null | null | null | from rest_framework import routers, serializers, viewsets
from .models import Course
from rest_framework import routers
class CourseSerializer(serializers.ModelSerializer):
class Meta:
model = Course
        fields = ('id', 'name', 'language', 'price')
5f9d2a5cb6d81c07f2fe9cb54027ed0230a2c674 | 259 | py | Python | antitile/__init__.py | brsr/antitile | 57228f1e2f2646ee88afbfc853adb8d3a6bcd736 | [
"MIT"
] | 11 | 2017-05-04T05:37:41.000Z | 2021-01-11T22:50:16.000Z | antitile/__init__.py | brsr/antitile | 57228f1e2f2646ee88afbfc853adb8d3a6bcd736 | [
"MIT"
] | null | null | null | antitile/__init__.py | brsr/antitile | 57228f1e2f2646ee88afbfc853adb8d3a6bcd736 | [
"MIT"
] | 2 | 2018-04-23T13:36:55.000Z | 2019-06-03T07:28:06.000Z | # -*- coding: utf-8 -*-
"""
Manipulation of polyhedra and tilings
"""
from __future__ import division, absolute_import, print_function
#import warnings
# warnings.filterwarnings("ignore")
from . import breakdown, flat, off, projection, gcopoly, tiling, xmath
| 28.777778 | 70 | 0.756757 | 30 | 259 | 6.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004405 | 0.123552 | 259 | 8 | 71 | 32.375 | 0.832599 | 0.420849 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
397117285e835e0f587f1f9e1c76088937b1490b | 192 | py | Python | src/cfehome/aws/storages.py | vinodkiwi/Dive-into-AWS-Course----Django-S3-Cloudfront | ead089093113916f348bbd5efdf3492ae29de87b | [
"MIT"
] | 31 | 2018-12-17T05:53:01.000Z | 2022-01-11T21:54:10.000Z | src/cfehome/aws/storages.py | vinodkiwi/Dive-into-AWS-Course----Django-S3-Cloudfront | ead089093113916f348bbd5efdf3492ae29de87b | [
"MIT"
] | 5 | 2020-06-05T19:46:46.000Z | 2021-09-08T00:48:21.000Z | src/cfehome/aws/storages.py | codingforentrepreneurs/Dive-into-AWS-Course---Direct-to-S3-via-Django-JavaScript | b6a5e61288ce745851848eb4853fb70b3e67f1f7 | [
"MIT"
] | 14 | 2019-01-09T18:33:40.000Z | 2022-03-02T20:21:54.000Z | from storages.backends.s3boto3 import S3Boto3Storage
# static/
StaticS3Storage = lambda: S3Boto3Storage(location='static')
# media/
MediaS3Storage = lambda: S3Boto3Storage(location='media') | 24 | 59 | 0.802083 | 18 | 192 | 8.555556 | 0.666667 | 0.25974 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057471 | 0.09375 | 192 | 8 | 60 | 24 | 0.827586 | 0.072917 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
f2cc0209ee34d8fa5c4bd05e2cd997b4bc418d84 | 82 | py | Python | 7_kyu/beginner_series_3_sum_of_numbers.py | nik4nd/codewars | efae95f1f9fbd5f31fc62b1b4f5a7d1ee511ced0 | [
"MIT"
] | null | null | null | 7_kyu/beginner_series_3_sum_of_numbers.py | nik4nd/codewars | efae95f1f9fbd5f31fc62b1b4f5a7d1ee511ced0 | [
"MIT"
] | null | null | null | 7_kyu/beginner_series_3_sum_of_numbers.py | nik4nd/codewars | efae95f1f9fbd5f31fc62b1b4f5a7d1ee511ced0 | [
"MIT"
] | null | null | null | def get_sum(a, b):
return sum(range(b, a+1)) if a > b else sum(range(a, b+1))
| 27.333333 | 62 | 0.585366 | 20 | 82 | 2.35 | 0.5 | 0.12766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0.207317 | 82 | 2 | 63 | 41 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
f2ea688403c22b9eb0bd03348868bb5f2277e21d | 161 | py | Python | backend/api/views/__init__.py | CaoRuiming/CS1320-Final-Project | 6b688c28c79e56df5cc667d704db72ba30141f7a | [
"MIT"
] | null | null | null | backend/api/views/__init__.py | CaoRuiming/CS1320-Final-Project | 6b688c28c79e56df5cc667d704db72ba30141f7a | [
"MIT"
] | null | null | null | backend/api/views/__init__.py | CaoRuiming/CS1320-Final-Project | 6b688c28c79e56df5cc667d704db72ba30141f7a | [
"MIT"
] | null | null | null | from .UserView import UserView
from .CourseView import CourseView
from .PostView import PostView
from .TagView import TagView
from .SearchView import SearchView
| 26.833333 | 34 | 0.84472 | 20 | 161 | 6.8 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124224 | 161 | 5 | 35 | 32.2 | 0.964539 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8404ce913c2bd4f9a9abc2863d701ba9f5442f77 | 1,139 | py | Python | test/test_template_assignments_page.py | CiscoDevNet/python-msx-sdk | d7e0a08c656504b4f4551d263e67c671a2a04b3f | [
"MIT"
] | null | null | null | test/test_template_assignments_page.py | CiscoDevNet/python-msx-sdk | d7e0a08c656504b4f4551d263e67c671a2a04b3f | [
"MIT"
] | null | null | null | test/test_template_assignments_page.py | CiscoDevNet/python-msx-sdk | d7e0a08c656504b4f4551d263e67c671a2a04b3f | [
"MIT"
] | null | null | null | """
MSX SDK
MSX SDK client. # noqa: E501
The version of the OpenAPI document: 1.0.9
Generated by: https://openapi-generator.tech
"""
import sys
import unittest
import python_msx_sdk
from python_msx_sdk.model.page_header import PageHeader
from python_msx_sdk.model.template_assignment import TemplateAssignment
from python_msx_sdk.model.template_assignments_page_all_of import TemplateAssignmentsPageAllOf
globals()['PageHeader'] = PageHeader
globals()['TemplateAssignment'] = TemplateAssignment
globals()['TemplateAssignmentsPageAllOf'] = TemplateAssignmentsPageAllOf
from python_msx_sdk.model.template_assignments_page import TemplateAssignmentsPage
class TestTemplateAssignmentsPage(unittest.TestCase):
"""TemplateAssignmentsPage unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testTemplateAssignmentsPage(self):
"""Test TemplateAssignmentsPage"""
# FIXME: construct object with mandatory attributes with example values
# model = TemplateAssignmentsPage() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 27.119048 | 94 | 0.757682 | 119 | 1,139 | 7.033613 | 0.478992 | 0.050179 | 0.071685 | 0.076464 | 0.164875 | 0.139785 | 0.105137 | 0.105137 | 0 | 0 | 0 | 0.009464 | 0.165057 | 1,139 | 41 | 95 | 27.780488 | 0.870662 | 0.27568 | 0 | 0.157895 | 1 | 0 | 0.081115 | 0.035488 | 0 | 0 | 0 | 0.02439 | 0 | 1 | 0.157895 | false | 0.157895 | 0.368421 | 0 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
84280ff809d44f6be51d39b9181d30ae199b237f | 161 | py | Python | dist/brukva/brukva/__init__.py | procool/mygw | f35b72b5915d314e883dcde45c3c33ff26f173df | [
"BSD-2-Clause"
] | 83 | 2015-01-05T08:16:02.000Z | 2021-11-12T11:42:46.000Z | dist/brukva/brukva/__init__.py | procool/mygw | f35b72b5915d314e883dcde45c3c33ff26f173df | [
"BSD-2-Clause"
] | 4 | 2015-02-22T06:17:08.000Z | 2018-03-13T09:06:11.000Z | dist/brukva/brukva/__init__.py | procool/mygw | f35b72b5915d314e883dcde45c3c33ff26f173df | [
"BSD-2-Clause"
] | 12 | 2015-01-22T06:03:42.000Z | 2019-02-09T08:52:21.000Z | from brukva.client import Connection, Client
from brukva.exceptions import RedisError, ConnectionError, ResponseError, InvalidResponse
from brukva import adisp
| 32.2 | 89 | 0.857143 | 18 | 161 | 7.666667 | 0.611111 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10559 | 161 | 4 | 90 | 40.25 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ffc664f961e3ad6f9ccb6a912612313359aa8e83 | 260 | py | Python | tests/test_unit.py | fakegit/pydu | e6e4055f81dbbece55dccfe29b7fb82b9bf40610 | [
"MIT"
] | 229 | 2018-01-03T11:26:17.000Z | 2022-03-31T04:39:45.000Z | tests/test_unit.py | fakegit/pydu | e6e4055f81dbbece55dccfe29b7fb82b9bf40610 | [
"MIT"
] | 3 | 2018-01-03T12:49:25.000Z | 2021-12-27T12:18:35.000Z | tests/test_unit.py | fakegit/pydu | e6e4055f81dbbece55dccfe29b7fb82b9bf40610 | [
"MIT"
] | 42 | 2018-04-14T01:41:37.000Z | 2022-03-04T22:48:31.000Z | from pydu.unit import Bytes
class TestBytes:
def test_convert(self):
assert Bytes(1024*1024).convert() == (1, 'MB')
assert Bytes(1024*1024).convert(unit='KB') == (1024, 'KB')
assert Bytes(1000).convert(multiple=1000) == (1, 'KB')
| 28.888889 | 66 | 0.615385 | 35 | 260 | 4.542857 | 0.514286 | 0.207547 | 0.188679 | 0.238994 | 0.327044 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144928 | 0.203846 | 260 | 8 | 67 | 32.5 | 0.623188 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ffee2f1ba2663170af02bc4727dbb85ea0bab6d1 | 131 | py | Python | testdoc/tests/hasemptycase.py | testing-cabal/testdoc | f0940e6c8fd4eecfe0d9de582f5daa0eaf6c695f | [
"MIT"
] | 3 | 2015-07-12T14:05:38.000Z | 2016-01-11T23:52:34.000Z | testdoc/tests/hasemptycase.py | testing-cabal/testdoc | f0940e6c8fd4eecfe0d9de582f5daa0eaf6c695f | [
"MIT"
] | null | null | null | testdoc/tests/hasemptycase.py | testing-cabal/testdoc | f0940e6c8fd4eecfe0d9de582f5daa0eaf6c695f | [
"MIT"
] | null | null | null | # Copyright (c) 2007-2010 testdoc authors. See LICENSE for details.
import unittest
class SomeTest(unittest.TestCase):
pass
| 16.375 | 67 | 0.755725 | 17 | 131 | 5.823529 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073395 | 0.167939 | 131 | 7 | 68 | 18.714286 | 0.834862 | 0.496183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 5 |
fff4d89da00be548756d9654144ad0f3d25a1ee6 | 129 | py | Python | src/media_list/utils/__init__.py | mincem/media_list | ed255c37feaf94da82851627466719a2af95635e | [
"MIT"
] | null | null | null | src/media_list/utils/__init__.py | mincem/media_list | ed255c37feaf94da82851627466719a2af95635e | [
"MIT"
] | 2 | 2020-08-02T17:25:09.000Z | 2022-03-12T00:12:46.000Z | src/media_list/utils/__init__.py | mincem/media_list | ed255c37feaf94da82851627466719a2af95635e | [
"MIT"
] | null | null | null | from .color_picker import ColorPicker
from .baka_page_scraper import BakaPageScraper
from .image_retriever import ImageRetriever
| 32.25 | 46 | 0.883721 | 16 | 129 | 6.875 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 129 | 3 | 47 | 43 | 0.940171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
081b7e2a03fec1c6bce12fc93892030bca5a696c | 865 | py | Python | tests/functional/test_features.py | JrGoodle/clowder | 864afacfc7122e937f7087e233c61d05fd007af2 | [
"MIT"
] | 12 | 2016-02-12T02:37:24.000Z | 2021-01-04T05:14:12.000Z | tests/functional/test_features.py | JrGoodle/clowder | 864afacfc7122e937f7087e233c61d05fd007af2 | [
"MIT"
] | 370 | 2015-07-06T22:59:08.000Z | 2021-10-01T14:58:17.000Z | tests/functional/test_features.py | JrGoodle/clowder | 864afacfc7122e937f7087e233c61d05fd007af2 | [
"MIT"
] | 3 | 2015-10-22T18:45:31.000Z | 2018-10-16T15:30:30.000Z | """test_features"""
from pytest_bdd import scenarios
# scenarios('../features', example_converters=dict(number_commits=int, number_ahead=int, number_behind=int))
scenarios('../features/base.feature')
scenarios('../features/branch.feature')
scenarios('../features/checkout.feature')
scenarios('../features/clean.feature')
scenarios('../features/config.feature')
scenarios('../features/diff.feature')
scenarios('../features/forall.feature')
scenarios('../features/herd.feature')
scenarios('../features/init.feature')
scenarios('../features/link.feature')
scenarios('../features/prune.feature')
scenarios('../features/repo.feature')
scenarios('../features/reset.feature')
scenarios('../features/save.feature')
scenarios('../features/start.feature')
scenarios('../features/stash.feature')
scenarios('../features/status.feature')
scenarios('../features/yaml.feature')
| 36.041667 | 108 | 0.751445 | 93 | 865 | 6.924731 | 0.365591 | 0.501553 | 0.63354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034682 | 865 | 23 | 109 | 37.608696 | 0.771257 | 0.139884 | 0 | 0 | 0 | 0 | 0.608401 | 0.608401 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
082b030189f00791ce5ae7e1e5d7465b5959c5eb | 69 | py | Python | RecordMapper/avro/__init__.py | urbandataanalytics/RecordMapper | 96cb6a5d2e710c68c7aa05079a59b2b492609f53 | [
"MIT"
] | 1 | 2021-08-29T21:30:09.000Z | 2021-08-29T21:30:09.000Z | RecordMapper/avro/__init__.py | urbandataanalytics/RecordMapper | 96cb6a5d2e710c68c7aa05079a59b2b492609f53 | [
"MIT"
] | 1 | 2021-04-22T14:46:41.000Z | 2021-04-22T15:09:34.000Z | RecordMapper/avro/__init__.py | urbandataanalytics/RecordMapper | 96cb6a5d2e710c68c7aa05079a59b2b492609f53 | [
"MIT"
] | null | null | null | from .AvroWriter import AvroWriter
from .AvroReader import AvroReader | 34.5 | 34 | 0.869565 | 8 | 69 | 7.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101449 | 69 | 2 | 35 | 34.5 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
08460d07ee46c694b1ad4768835ae35ed5dca42a | 61 | py | Python | fastapi/middleware/gzip.py | parampavar/fastapi | 4e77737a3f7bf2608132ea170e9ff013b5af6732 | [
"MIT"
] | 10 | 2020-06-11T23:20:03.000Z | 2022-01-14T16:07:27.000Z | fastapi/middleware/gzip.py | parampavar/fastapi | 4e77737a3f7bf2608132ea170e9ff013b5af6732 | [
"MIT"
] | 22 | 2020-06-27T19:24:59.000Z | 2020-10-18T19:35:50.000Z | fastapi/middleware/gzip.py | parampavar/fastapi | 4e77737a3f7bf2608132ea170e9ff013b5af6732 | [
"MIT"
] | 2 | 2020-06-22T09:46:57.000Z | 2021-04-25T21:32:04.000Z | from starlette.middleware.gzip import GZipMiddleware # noqa
| 30.5 | 60 | 0.836066 | 7 | 61 | 7.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114754 | 61 | 1 | 61 | 61 | 0.944444 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f2614757688c15eea38ef8034d77ca874c339f3f | 151 | py | Python | tests/__init__.py | Steap/skelpy | 5efd02042a0cef4b54c65d9a77a8ec2547184efa | [
"MIT"
] | null | null | null | tests/__init__.py | Steap/skelpy | 5efd02042a0cef4b54c65d9a77a8ec2547184efa | [
"MIT"
] | 2 | 2019-06-02T04:26:44.000Z | 2021-08-19T00:53:28.000Z | tests/__init__.py | Steap/skelpy | 5efd02042a0cef4b54c65d9a77a8ec2547184efa | [
"MIT"
] | 2 | 2019-06-01T14:44:47.000Z | 2021-08-18T12:10:19.000Z | # -*- coding: utf-8 -*-
# python 2 & 3 compatibility
try:
import mock # First try python 2.7.x
except ImportError:
from unittest import mock
| 18.875 | 41 | 0.662252 | 22 | 151 | 4.545455 | 0.772727 | 0.14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043103 | 0.231788 | 151 | 7 | 42 | 21.571429 | 0.818966 | 0.470199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f2845dc6938b858f5e1c958afd9a902c36e13309 | 55 | py | Python | telegram_unvoicer_bot/telegram/__init__.py | nabokihms/telegram_unvoicer_bot | 0f2f00ae86e9b9166dc53d0126f9b559215cd76c | [
"Apache-2.0"
] | 3 | 2018-12-26T06:05:08.000Z | 2021-10-09T15:36:47.000Z | telegram_unvoicer_bot/telegram/__init__.py | nabokihms/telegram_unvoicer_bot | 0f2f00ae86e9b9166dc53d0126f9b559215cd76c | [
"Apache-2.0"
] | 31 | 2021-02-02T21:15:11.000Z | 2022-03-30T01:16:17.000Z | telegram_unvoicer_bot/telegram/__init__.py | nabokihms/telegram_unvoicer_bot | 0f2f00ae86e9b9166dc53d0126f9b559215cd76c | [
"Apache-2.0"
] | null | null | null | from .primary_handler import *
from .request import *
| 13.75 | 30 | 0.763636 | 7 | 55 | 5.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163636 | 55 | 3 | 31 | 18.333333 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
f295472b314e87cd9bf2b8b5a61d6f72cb7e105a | 180 | py | Python | gcpy/atm_sci/calc.py | jmoch1214/gcpy | d0f7e9014efdc3bc430e941de6cc84113d5f16c3 | [
"NCSA",
"Apache-2.0",
"MIT"
] | 1 | 2020-02-20T23:41:26.000Z | 2020-02-20T23:41:26.000Z | gcpy/atm_sci/calc.py | liuchao95/gcpy | 5adcf4861317cde96d96354a86bd0ce80aadddd5 | [
"NCSA",
"Apache-2.0",
"MIT"
] | null | null | null | gcpy/atm_sci/calc.py | liuchao95/gcpy | 5adcf4861317cde96d96354a86bd0ce80aadddd5 | [
"NCSA",
"Apache-2.0",
"MIT"
] | null | null | null | """ Mathematical calculations which do not necessarily belong to any specialized
scientific submodule. """
def org_corr():
# TODO
pass
def qqnorm():
# TODO
pass | 15 | 80 | 0.677778 | 21 | 180 | 5.761905 | 0.857143 | 0.132231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238889 | 180 | 12 | 81 | 15 | 0.883212 | 0.611111 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
4b8c26b3acee6f8a45147efdd3e7757bc62551da | 194 | py | Python | config_example.py | meramos/RickyRenuncia_Protests | 03f91bcbf26620b1e81f8e07639b575cc145b28f | [
"MIT"
] | 3 | 2019-08-18T13:53:46.000Z | 2019-09-01T21:09:34.000Z | config.dist.py | Czino/bitcoin-is-the-sun | 54181f135b89b083d0a3739754d869f110f733b4 | [
"MIT"
] | null | null | null | config.dist.py | Czino/bitcoin-is-the-sun | 54181f135b89b083d0a3739754d869f110f733b4 | [
"MIT"
] | 1 | 2020-09-24T22:56:52.000Z | 2020-09-24T22:56:52.000Z | credentials = {
"consumer_key": "CONSUMER-KEY-HERE",
"consumer_secret": "CONSUMER-SECRET-HERE",
"access_token": "ACCESS-TOKEN-HERE",
"access_token_secret": "ACCESS-SECRET-HERE"
} | 32.333333 | 47 | 0.685567 | 22 | 194 | 5.818182 | 0.318182 | 0.257813 | 0.234375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139175 | 194 | 6 | 48 | 32.333333 | 0.766467 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
298ccdb5ce8016245774786d13284e774afe73fb | 214 | py | Python | sdk/python/opencannabis/structs/pricing/__init__.py | CookiesCo/OpenCannabis | a7bb1f71200c6b8f56c509df47039198f0c3bd4c | [
"MIT"
] | 2 | 2020-08-27T00:45:49.000Z | 2021-06-19T08:01:13.000Z | sdk/python/opencannabis/structs/pricing/__init__.py | CookiesCo/OpenCannabis | a7bb1f71200c6b8f56c509df47039198f0c3bd4c | [
"MIT"
] | 67 | 2020-08-27T03:16:33.000Z | 2022-03-26T14:33:38.000Z | sdk/python/opencannabis/structs/pricing/__init__.py | CookiesCo/OpenCannabis | a7bb1f71200c6b8f56c509df47039198f0c3bd4c | [
"MIT"
] | 1 | 2020-11-12T04:26:43.000Z | 2020-11-12T04:26:43.000Z | # ~*~ coding: utf-8 ~*~
__doc__ = """
`opencannabis.structs.pricing`
-------------------------------------
Structures that define pricing data and sale/discount info.
"""
# `opencannabis.structs.pricing`
| 17.833333 | 61 | 0.556075 | 19 | 214 | 6.052632 | 0.789474 | 0.330435 | 0.452174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005464 | 0.14486 | 214 | 11 | 62 | 19.454545 | 0.622951 | 0.242991 | 0 | 0 | 0 | 0 | 0.867925 | 0.421384 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
299a85b072145ed9c3139e22ae41baf432e1585b | 75 | py | Python | examples/gen_ml_and_mli/lib/py/orange.py | mooreryan/pyml_bindgen | b326af274fca2de959c9b1ec1c61030de4633304 | [
"Apache-2.0",
"MIT"
] | 24 | 2021-11-10T06:36:17.000Z | 2022-02-08T15:16:10.000Z | examples/gen_ml_and_mli/lib/py/orange.py | mooreryan/pyml_bindgen | b326af274fca2de959c9b1ec1c61030de4633304 | [
"Apache-2.0",
"MIT"
] | 9 | 2022-01-28T05:57:08.000Z | 2022-03-23T05:59:21.000Z | examples/gen_ml_and_mli/lib/py/orange.py | mooreryan/pyml_bindgen | b326af274fca2de959c9b1ec1c61030de4633304 | [
"Apache-2.0",
"MIT"
] | 1 | 2022-01-28T05:25:19.000Z | 2022-01-28T05:25:19.000Z | class Orange:
def __init__(self, flavor):
self.flavor = flavor
| 18.75 | 31 | 0.64 | 9 | 75 | 4.888889 | 0.666667 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 75 | 3 | 32 | 25 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
29b4c488acf9323f5f7695bed204b9137b450ed0 | 62 | py | Python | efficient_net_v2/config/__init__.py | akashAD98/EfficientNetv2-with-Detectron2 | 1ba7f32cda31550ed4a040c15271612fa3f73d74 | [
"Apache-2.0"
] | null | null | null | efficient_net_v2/config/__init__.py | akashAD98/EfficientNetv2-with-Detectron2 | 1ba7f32cda31550ed4a040c15271612fa3f73d74 | [
"Apache-2.0"
] | null | null | null | efficient_net_v2/config/__init__.py | akashAD98/EfficientNetv2-with-Detectron2 | 1ba7f32cda31550ed4a040c15271612fa3f73d74 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from .config import get_cfg # NOQA
| 15.5 | 36 | 0.677419 | 10 | 62 | 4.1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209677 | 62 | 3 | 37 | 20.666667 | 0.836735 | 0.403226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4b0d25c0eb3bd2ad75081e0403e2f9d25f9a4f4f | 91 | py | Python | proxies/__init__.py | whardier/MirrorPool | e1846e019907936d95a85ccf62b7e3abffa7e2f2 | [
"MIT"
] | 2 | 2015-09-24T00:26:36.000Z | 2017-12-03T01:02:18.000Z | proxies/__init__.py | whardier/MirrorPool | e1846e019907936d95a85ccf62b7e3abffa7e2f2 | [
"MIT"
] | null | null | null | proxies/__init__.py | whardier/MirrorPool | e1846e019907936d95a85ccf62b7e3abffa7e2f2 | [
"MIT"
] | null | null | null | import settings
from utils import module_loader
module_loader(__name__, settings.PROXIES)
| 18.2 | 41 | 0.857143 | 12 | 91 | 6 | 0.666667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098901 | 91 | 4 | 42 | 22.75 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
d9dfdbb5e3810d62452fe805de5a024005209362 | 61,386 | py | Python | tests/core/test_prometheus.py | Mattlk13/dd-agent | 167d0c0ed8d7b66a531dd0c21097f0fa2fba8960 | [
"BSD-3-Clause"
] | 1,172 | 2015-01-04T21:56:16.000Z | 2022-03-13T00:01:44.000Z | tests/core/test_prometheus.py | Mattlk13/dd-agent | 167d0c0ed8d7b66a531dd0c21097f0fa2fba8960 | [
"BSD-3-Clause"
] | 2,086 | 2015-01-02T16:33:21.000Z | 2022-03-15T10:01:47.000Z | tests/core/test_prometheus.py | Mattlk13/dd-agent | 167d0c0ed8d7b66a531dd0c21097f0fa2fba8960 | [
"BSD-3-Clause"
] | 972 | 2015-01-02T05:03:46.000Z | 2022-03-23T04:36:19.000Z | # (C) Datadog, Inc. 2016
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
import logging
import os
import unittest
from mock import MagicMock, patch, call
from checks.prometheus_check import PrometheusCheck
from checks.prometheus_mixins import UnknownFormatError
from utils.prometheus import parse_metric_family, metrics_pb2
class MockResponse:
"""
MockResponse is used to simulate the object requests.Response commonly returned by requests.get
"""
def __init__(self, content, content_type):
self.content = content
self.headers = {'Content-Type': content_type}
def iter_lines(self, **_):
for elt in self.content.split("\n"):
yield elt
def close(self):
pass
class TestPrometheusFuncs(unittest.TestCase):
def test_parse_metric_family(self):
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'protobuf.bin')
with open(f_name, 'rb') as f:
data = f.read()
self.assertEqual(len(data), 51855)
messages = list(parse_metric_family(data))
self.assertEqual(len(messages), 61)
self.assertEqual(messages[-1].name, 'process_virtual_memory_bytes')
class TestPrometheusProcessor(unittest.TestCase):
def setUp(self):
self.check = PrometheusCheck('prometheus_check', {}, {}, {})
self.check.gauge = MagicMock()
self.check.log = logging.getLogger('datadog-prometheus.test')
self.check.log.debug = MagicMock()
self.check.metrics_mapper = {'process_virtual_memory_bytes': 'process.vm.bytes'}
self.check.NAMESPACE = 'prometheus'
self.protobuf_content_type = 'application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited'
# reference gauge metric in the protobuf target class type
self.ref_gauge = metrics_pb2.MetricFamily()
self.ref_gauge.name = 'process_virtual_memory_bytes'
self.ref_gauge.help = 'Virtual memory size in bytes.'
self.ref_gauge.type = 1 # GAUGE
_m = self.ref_gauge.metric.add()
_m.gauge.value = 39211008.0
# Loading test binary data
self.bin_data = None
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'protobuf.bin')
with open(f_name, 'rb') as f:
self.bin_data = f.read()
self.assertEqual(len(self.bin_data), 51855)
self.text_data = None
# Loading test text data
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'metrics.txt')
with open(f_name, 'rb') as f:
self.text_data = f.read()
self.assertEqual(len(self.text_data), 14494)
def tearDown(self):
# Cleanup
self.check = None
self.bin_data = None
self.ref_gauge = None
def test_check(self):
""" Should not be implemented as it is the mother class """
with self.assertRaises(NotImplementedError):
self.check.check(None)
def test_parse_metric_family_protobuf(self):
response = MockResponse(self.bin_data, self.protobuf_content_type)
messages = list(self.check.parse_metric_family(response))
self.assertEqual(len(messages), 61)
self.assertEqual(messages[-1].name, 'process_virtual_memory_bytes')
# check type overriding is working
# original type:
self.assertEqual(messages[1].name, 'go_goroutines')
self.assertEqual(messages[1].type, 1) # gauge
# override the type:
self.check.type_overrides = {"go_goroutines": "summary"}
response = MockResponse(self.bin_data, self.protobuf_content_type)
messages = list(self.check.parse_metric_family(response))
self.assertEqual(len(messages), 61)
self.assertEqual(messages[1].name, 'go_goroutines')
self.assertEqual(messages[1].type, 2) # summary
def test_parse_metric_family_text(self):
""" Test the high level method for loading metrics from text format """
response = MockResponse(self.text_data, 'text/plain; version=0.0.4')
messages = list(self.check.parse_metric_family(response))
# total metrics are 41 but one is typeless and we expect it not to be
# parsed...
self.assertEqual(len(messages), 40)
        # ...unless the check overrides the type manually
self.check.type_overrides = {"go_goroutines": "gauge"}
response = MockResponse(self.text_data, 'text/plain; version=0.0.4')
messages = list(self.check.parse_metric_family(response))
self.assertEqual(len(messages), 41)
# Tests correct parsing of counters
_counter = metrics_pb2.MetricFamily()
_counter.name = 'skydns_skydns_dns_cachemiss_count_total'
_counter.help = 'Counter of DNS requests that result in a cache miss.'
_counter.type = 0 # COUNTER
_c = _counter.metric.add()
_c.counter.value = 1359194.0
_lc = _c.label.add()
_lc.name = 'cache'
_lc.value = 'response'
self.assertIn(_counter, messages)
# Tests correct parsing of gauges
_gauge = metrics_pb2.MetricFamily()
_gauge.name = 'go_memstats_heap_alloc_bytes'
_gauge.help = 'Number of heap bytes allocated and still in use.'
_gauge.type = 1 # GAUGE
_gauge.metric.add().gauge.value = 6396288.0
self.assertIn(_gauge, messages)
# Tests correct parsing of summaries
_summary = metrics_pb2.MetricFamily()
_summary.name = 'http_response_size_bytes'
_summary.help = 'The HTTP response sizes in bytes.'
_summary.type = 2 # SUMMARY
_sm = _summary.metric.add()
_lsm = _sm.label.add()
_lsm.name = 'handler'
_lsm.value = 'prometheus'
_sm.summary.sample_count = 25
_sm.summary.sample_sum = 147728.0
_sq1 = _sm.summary.quantile.add()
_sq1.quantile = 0.5
_sq1.value = 21470.0
_sq2 = _sm.summary.quantile.add()
_sq2.quantile = 0.9
_sq2.value = 21470.0
_sq3 = _sm.summary.quantile.add()
_sq3.quantile = 0.99
_sq3.value = 21470.0
self.assertIn(_summary, messages)
# Tests correct parsing of histograms
_histo = metrics_pb2.MetricFamily()
_histo.name = 'skydns_skydns_dns_response_size_bytes'
_histo.help = 'Size of the returns response in bytes.'
_histo.type = 4 # HISTOGRAM
_sample_data = [
{'ct': 1359194, 'sum': 199427281.0, 'lbl': {'system': 'auth'},
'buckets': {0.0: 0, 512.0: 1359194, 1024.0: 1359194,
1500.0: 1359194, 2048.0: 1359194, float('+Inf'): 1359194}},
{'ct': 520924, 'sum': 41527128.0, 'lbl': {'system': 'recursive'},
'buckets': {0.0: 0, 512.0: 520924, 1024.0: 520924, 1500.0: 520924,
2048.0: 520924, float('+Inf'): 520924}},
{'ct': 67648, 'sum': 6075182.0, 'lbl': {'system': 'reverse'},
'buckets': {0.0: 0, 512.0: 67648, 1024.0: 67648, 1500.0: 67648,
2048.0: 67648, float('+Inf'): 67648}},
]
for _data in _sample_data:
_h = _histo.metric.add()
_h.histogram.sample_count = _data['ct']
_h.histogram.sample_sum = _data['sum']
for k, v in _data['lbl'].items():
_lh = _h.label.add()
_lh.name = k
_lh.value = v
for _b in sorted(_data['buckets']):
_subh = _h.histogram.bucket.add()
_subh.upper_bound = _b
_subh.cumulative_count = _data['buckets'][_b]
self.assertIn(_histo, messages)
def test_parse_metric_family_unsupported(self):
with self.assertRaises(UnknownFormatError):
response = MockResponse(self.bin_data, 'application/json')
list(self.check.parse_metric_family(response))
def test_process(self):
endpoint = "http://fake.endpoint:10055/metrics"
self.check.poll = MagicMock(return_value=MockResponse(self.bin_data, self.protobuf_content_type))
self.check.process_metric = MagicMock()
self.check.process(endpoint, instance=None)
self.check.poll.assert_called_with(endpoint)
self.check.process_metric.assert_called_with(self.ref_gauge, instance=None)
def test_process_send_histograms_buckets(self):
""" Cheks that the send_histograms_buckets parameter is passed along """
endpoint = "http://fake.endpoint:10055/metrics"
self.check.poll = MagicMock(
return_value=MockResponse(self.bin_data, self.protobuf_content_type))
self.check.process_metric = MagicMock()
self.check.process(endpoint, send_histograms_buckets=False, instance=None)
self.check.poll.assert_called_with(endpoint)
self.check.process_metric.assert_called_with(self.ref_gauge, instance=None, send_histograms_buckets=False)
def test_process_instance_with_tags(self):
""" Checks that an instances with tags passes them as custom tag """
endpoint = "http://fake.endpoint:10055/metrics"
self.check.poll = MagicMock(
return_value=MockResponse(self.bin_data, self.protobuf_content_type))
self.check.process_metric = MagicMock()
instance = {'endpoint': 'IgnoreMe', 'tags': ['tag1:tagValue1', 'tag2:tagValue2']}
self.check.process(endpoint, instance=instance)
self.check.poll.assert_called_with(endpoint)
self.check.process_metric.assert_called_with(self.ref_gauge, custom_tags=['tag1:tagValue1', 'tag2:tagValue2'],
instance=instance)
def test_process_metric_gauge(self):
""" Gauge ref submission """
self.check._dry_run = False
self.check.process_metric(self.ref_gauge)
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0, [], hostname=None)
def test_process_metric_filtered(self):
""" Metric absent from the metrics_mapper """
filtered_gauge = metrics_pb2.MetricFamily()
filtered_gauge.name = "process_start_time_seconds"
filtered_gauge.help = "Start time of the process since unix epoch in seconds."
filtered_gauge.type = 1 # GAUGE
_m = filtered_gauge.metric.add()
_m.gauge.value = 39211008.0
self.check._dry_run = False
self.check.process_metric(filtered_gauge)
self.check.log.debug.assert_called_with(
"Unable to handle metric: process_start_time_seconds - error: 'PrometheusCheck' object has no attribute 'process_start_time_seconds'")
self.check.gauge.assert_not_called()
@patch('requests.get')
def test_poll_protobuf(self, mock_get):
""" Tests poll using the protobuf format """
mock_get.return_value = MagicMock(
status_code=200,
content=self.bin_data,
headers={'Content-Type': self.protobuf_content_type})
response = self.check.poll("http://fake.endpoint:10055/metrics")
messages = list(self.check.parse_metric_family(response))
self.assertEqual(len(messages), 61)
self.assertEqual(messages[-1].name, 'process_virtual_memory_bytes')
@patch('requests.get')
def test_poll_text_plain(self, mock_get):
"""Tests poll using the text format"""
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: self.text_data.split("\n"),
headers={'Content-Type': "text/plain"})
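# parse_metric_family reads text responses via iter_lines, so the mock
# only needs to stub that interface (one metric line per element)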
response = self.check.poll("http://fake.endpoint:10055/metrics")
messages = list(self.check.parse_metric_family(response))
messages.sort(key=lambda x: x.name)
self.assertEqual(len(messages), 40)
self.assertEqual(messages[-1].name, 'skydns_skydns_dns_response_size_bytes')
def test_submit_gauge_with_labels(self):
""" submitting metrics that contain labels should result in tags on the gauge call """
_l1 = self.ref_gauge.metric[0].label.add()
_l1.name = 'my_1st_label'
_l1.value = 'my_1st_label_value'
_l2 = self.ref_gauge.metric[0].label.add()
_l2.name = 'my_2nd_label'
_l2.value = 'my_2nd_label_value'
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge)
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0,
['my_1st_label:my_1st_label_value', 'my_2nd_label:my_2nd_label_value'],
hostname=None)
def test_submit_gauge_with_labels_and_hostname_override(self):
""" submitting metrics that contain labels should result in tags on the gauge call """
_l1 = self.ref_gauge.metric[0].label.add()
_l1.name = 'my_1st_label'
_l1.value = 'my_1st_label_value'
_l2 = self.ref_gauge.metric[0].label.add()
_l2.name = 'node'
_l2.value = 'foo'
self.check.label_to_hostname = 'node'
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge)
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0,
['my_1st_label:my_1st_label_value', 'node:foo'],
hostname="foo")
def test_submit_gauge_with_labels_and_hostname_already_overridden(self):
""" submitting metrics that contain labels should result in tags on the gauge call """
_l1 = self.ref_gauge.metric[0].label.add()
_l1.name = 'my_1st_label'
_l1.value = 'my_1st_label_value'
_l2 = self.ref_gauge.metric[0].label.add()
_l2.name = 'node'
_l2.value = 'foo'
self.check.label_to_hostname = 'node'
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge, hostname="bar")
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0,
['my_1st_label:my_1st_label_value', 'node:foo'],
hostname="bar")
def test_labels_not_added_as_tag_once_for_each_metric(self):
_l1 = self.ref_gauge.metric[0].label.add()
_l1.name = 'my_1st_label'
_l1.value = 'my_1st_label_value'
_l2 = self.ref_gauge.metric[0].label.add()
_l2.name = 'my_2nd_label'
_l2.value = 'my_2nd_label_value'
tags = ['test']
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge, custom_tags=tags)
# Call a second time to check that the labels were not added once more to the tags list and
# avoid regression on https://github.com/DataDog/dd-agent/pull/3359
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge, custom_tags=tags)
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0,
['test', 'my_1st_label:my_1st_label_value',
'my_2nd_label:my_2nd_label_value'], hostname=None)
def test_submit_gauge_with_custom_tags(self):
""" Providing custom tags should add them as is on the gauge call """
tags = ['env:dev', 'app:my_pretty_app']
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge, custom_tags=tags)
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0,
['env:dev', 'app:my_pretty_app'], hostname=None)
def test_submit_gauge_with_labels_mapper(self):
"""
Submitting metrics that contain labels mappers should result in tags
on the gauge call with transformed tag names
"""
_l1 = self.ref_gauge.metric[0].label.add()
_l1.name = 'my_1st_label'
_l1.value = 'my_1st_label_value'
_l2 = self.ref_gauge.metric[0].label.add()
_l2.name = 'my_2nd_label'
_l2.value = 'my_2nd_label_value'
self.check.labels_mapper = {'my_1st_label': 'transformed_1st', 'non_existent': 'should_not_matter',
'env': 'dont_touch_custom_tags'}
tags = ['env:dev', 'app:my_pretty_app']
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge, custom_tags=tags)
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0,
['env:dev', 'app:my_pretty_app', 'transformed_1st:my_1st_label_value',
'my_2nd_label:my_2nd_label_value'], hostname=None)
def test_submit_gauge_with_exclude_labels(self):
"""
Submitting metrics when filtering with exclude_labels should end up with
a filtered tags list
"""
_l1 = self.ref_gauge.metric[0].label.add()
_l1.name = 'my_1st_label'
_l1.value = 'my_1st_label_value'
_l2 = self.ref_gauge.metric[0].label.add()
_l2.name = 'my_2nd_label'
_l2.value = 'my_2nd_label_value'
self.check.labels_mapper = {'my_1st_label': 'transformed_1st', 'non_existent': 'should_not_matter',
'env': 'dont_touch_custom_tags'}
tags = ['env:dev', 'app:my_pretty_app']
self.check.exclude_labels = ['my_2nd_label', 'whatever_else', 'env'] # custom tags are not filtered out
self.check._submit(self.check.metrics_mapper[self.ref_gauge.name], self.ref_gauge, custom_tags=tags)
self.check.gauge.assert_called_with('prometheus.process.vm.bytes', 39211008.0,
['env:dev', 'app:my_pretty_app', 'transformed_1st:my_1st_label_value'],
hostname=None)
def test_submit_counter(self):
_counter = metrics_pb2.MetricFamily()
_counter.name = 'my_counter'
_counter.help = 'Random counter'
_counter.type = 0 # COUNTER
_met = _counter.metric.add()
_met.counter.value = 42
self.check._submit('custom.counter', _counter)
self.check.gauge.assert_called_with('prometheus.custom.counter', 42, [], hostname=None)
def test_submits_summary(self):
_sum = metrics_pb2.MetricFamily()
_sum.name = 'my_summary'
_sum.help = 'Random summary'
_sum.type = 2 # SUMMARY
_met = _sum.metric.add()
_met.summary.sample_count = 42
_met.summary.sample_sum = 3.14
_q1 = _met.summary.quantile.add()
_q1.quantile = 10.0
_q1.value = 3
_q2 = _met.summary.quantile.add()
_q2.quantile = 4.0
_q2.value = 5
self.check._submit('custom.summary', _sum)
self.check.gauge.assert_has_calls([
call('prometheus.custom.summary.count', 42, [], hostname=None),
call('prometheus.custom.summary.sum', 3.14, [], hostname=None),
call('prometheus.custom.summary.quantile', 3, ['quantile:10.0'], hostname=None),
call('prometheus.custom.summary.quantile', 5, ['quantile:4.0'], hostname=None)
])
def test_submit_histogram(self):
_histo = metrics_pb2.MetricFamily()
_histo.name = 'my_histogram'
_histo.help = 'Random histogram'
_histo.type = 4 # HISTOGRAM
_met = _histo.metric.add()
_met.histogram.sample_count = 42
_met.histogram.sample_sum = 3.14
_b1 = _met.histogram.bucket.add()
_b1.upper_bound = 12.7
_b1.cumulative_count = 33
_b2 = _met.histogram.bucket.add()
_b2.upper_bound = 18.2
_b2.cumulative_count = 666
self.check._submit('custom.histogram', _histo)
self.check.gauge.assert_has_calls([
call('prometheus.custom.histogram.count', 42, [], hostname=None),
call('prometheus.custom.histogram.sum', 3.14, [], hostname=None),
call('prometheus.custom.histogram.count', 33, ['upper_bound:12.7'], hostname=None),
call('prometheus.custom.histogram.count', 666, ['upper_bound:18.2'], hostname=None)
])
class TestPrometheusTextParsing(unittest.TestCase):
"""
The docstring of each test_* method, when present, is a string representation of the expected MetricFamily
"""
def setUp(self):
self.check = PrometheusCheck('prometheus_check', {}, {}, {})
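# Constructor args presumably follow the dd-agent AgentCheck signature:
# (name, init_config, agentConfig, instances)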
def test_parse_one_gauge(self):
"""
name: "etcd_server_has_leader"
help: "Whether or not a leader exists. 1 is existence, 0 is not."
type: GAUGE
metric {
gauge {
value: 1.0
}
}
"""
text_data = (
"# HELP etcd_server_has_leader Whether or not a leader exists. 1 is existence, 0 is not.\n"
"# TYPE etcd_server_has_leader gauge\n"
"etcd_server_has_leader 1\n")
expected_etcd_metric = metrics_pb2.MetricFamily()
expected_etcd_metric.help = "Whether or not a leader exists. 1 is existence, 0 is not."
expected_etcd_metric.name = "etcd_server_has_leader"
expected_etcd_metric.type = 1
expected_etcd_metric.metric.add().gauge.value = 1
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
self.assertEqual(expected_etcd_metric, current_metric)
# Remove the old metric and add a new one with a different value
expected_etcd_metric.metric.pop()
expected_etcd_metric.metric.add().gauge.value = 0
self.assertNotEqual(expected_etcd_metric, current_metric)
# Re-add the expected value, this time as a float: it should still match
expected_etcd_metric.metric.pop()
expected_etcd_metric.metric.add().gauge.value = 1.0
self.assertEqual(expected_etcd_metric, current_metric)
def test_parse_one_counter(self):
"""
name: "go_memstats_mallocs_total"
help: "Total number of mallocs."
type: COUNTER
metric {
counter {
value: 18713.0
}
}
"""
text_data = (
"# HELP go_memstats_mallocs_total Total number of mallocs.\n"
"# TYPE go_memstats_mallocs_total counter\n"
"go_memstats_mallocs_total 18713\n")
expected_etcd_metric = metrics_pb2.MetricFamily()
expected_etcd_metric.help = "Total number of mallocs."
expected_etcd_metric.name = "go_memstats_mallocs_total"
expected_etcd_metric.type = 0
expected_etcd_metric.metric.add().counter.value = 18713
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
self.assertEqual(expected_etcd_metric, current_metric)
# Remove the old metric and add a new one with a different value
expected_etcd_metric.metric.pop()
expected_etcd_metric.metric.add().counter.value = 18714
self.assertNotEqual(expected_etcd_metric, current_metric)
def test_parse_one_histograms_with_label(self):
text_data = (
'# HELP etcd_disk_wal_fsync_duration_seconds The latency distributions of fsync called by wal.\n'
'# TYPE etcd_disk_wal_fsync_duration_seconds histogram\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.001"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.002"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.004"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.008"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.016"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.032"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.064"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.128"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.256"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="0.512"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="1.024"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="2.048"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="4.096"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="8.192"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{app="vault",le="+Inf"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_sum{app="vault"} 0.026131671\n'
'etcd_disk_wal_fsync_duration_seconds_count{app="vault"} 4\n')
expected_etcd_vault_metric = metrics_pb2.MetricFamily()
expected_etcd_vault_metric.help = "The latency distributions of fsync called by wal."
expected_etcd_vault_metric.name = "etcd_disk_wal_fsync_duration_seconds"
expected_etcd_vault_metric.type = 4
histogram_metric = expected_etcd_vault_metric.metric.add()
# Label for app vault
summary_label = histogram_metric.label.add()
summary_label.name, summary_label.value = "app", "vault"
for upper_bound, cumulative_count in [
(0.001, 2),
(0.002, 2),
(0.004, 2),
(0.008, 2),
(0.016, 4),
(0.032, 4),
(0.064, 4),
(0.128, 4),
(0.256, 4),
(0.512, 4),
(1.024, 4),
(2.048, 4),
(4.096, 4),
(8.192, 4),
(float('inf'), 4),
]:
bucket = histogram_metric.histogram.bucket.add()
bucket.upper_bound = upper_bound
bucket.cumulative_count = cumulative_count
# Root histogram sample
histogram_metric.histogram.sample_count = 4
histogram_metric.histogram.sample_sum = 0.026131671
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
self.assertEqual(expected_etcd_vault_metric, current_metric)
def test_parse_one_histogram(self):
"""
name: "etcd_disk_wal_fsync_duration_seconds"
help: "The latency distributions of fsync called by wal."
type: HISTOGRAM
metric {
histogram {
sample_count: 4
sample_sum: 0.026131671
bucket {
cumulative_count: 2
upper_bound: 0.001
}
bucket {
cumulative_count: 2
upper_bound: 0.002
}
bucket {
cumulative_count: 2
upper_bound: 0.004
}
bucket {
cumulative_count: 2
upper_bound: 0.008
}
bucket {
cumulative_count: 4
upper_bound: 0.016
}
bucket {
cumulative_count: 4
upper_bound: 0.032
}
bucket {
cumulative_count: 4
upper_bound: 0.064
}
bucket {
cumulative_count: 4
upper_bound: 0.128
}
bucket {
cumulative_count: 4
upper_bound: 0.256
}
bucket {
cumulative_count: 4
upper_bound: 0.512
}
bucket {
cumulative_count: 4
upper_bound: 1.024
}
bucket {
cumulative_count: 4
upper_bound: 2.048
}
bucket {
cumulative_count: 4
upper_bound: 4.096
}
bucket {
cumulative_count: 4
upper_bound: 8.192
}
bucket {
cumulative_count: 4
upper_bound: inf
}
}
}
"""
text_data = (
'# HELP etcd_disk_wal_fsync_duration_seconds The latency distributions of fsync called by wal.\n'
'# TYPE etcd_disk_wal_fsync_duration_seconds histogram\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.002"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.004"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.008"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.016"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.032"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.064"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.128"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.256"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="0.512"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="1.024"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="2.048"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="4.096"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="8.192"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{le="+Inf"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_sum 0.026131671\n'
'etcd_disk_wal_fsync_duration_seconds_count 4\n')
expected_etcd_metric = metrics_pb2.MetricFamily()
expected_etcd_metric.help = "The latency distributions of fsync called by wal."
expected_etcd_metric.name = "etcd_disk_wal_fsync_duration_seconds"
expected_etcd_metric.type = 4
histogram_metric = expected_etcd_metric.metric.add()
for upper_bound, cumulative_count in [
(0.001, 2),
(0.002, 2),
(0.004, 2),
(0.008, 2),
(0.016, 4),
(0.032, 4),
(0.064, 4),
(0.128, 4),
(0.256, 4),
(0.512, 4),
(1.024, 4),
(2.048, 4),
(4.096, 4),
(8.192, 4),
(float('inf'), 4),
]:
bucket = histogram_metric.histogram.bucket.add()
bucket.upper_bound = upper_bound
bucket.cumulative_count = cumulative_count
# Root histogram sample
histogram_metric.histogram.sample_count = 4
histogram_metric.histogram.sample_sum = 0.026131671
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
self.assertEqual(expected_etcd_metric, current_metric)
def test_parse_two_histograms_with_label(self):
text_data = (
'# HELP etcd_disk_wal_fsync_duration_seconds The latency distributions of fsync called by wal.\n'
'# TYPE etcd_disk_wal_fsync_duration_seconds histogram\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.001"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.002"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.004"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.008"} 2\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.016"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.032"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.064"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.128"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.256"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="0.512"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="1.024"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="2.048"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="4.096"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="8.192"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="vault",le="+Inf"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_sum{kind="fs",app="vault"} 0.026131671\n'
'etcd_disk_wal_fsync_duration_seconds_count{kind="fs",app="vault"} 4\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.001"} 718\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.002"} 740\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.004"} 743\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.008"} 748\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.016"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.032"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.064"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.128"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.256"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="0.512"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="1.024"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="2.048"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="4.096"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="8.192"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_bucket{kind="fs",app="kubernetes",le="+Inf"} 751\n'
'etcd_disk_wal_fsync_duration_seconds_sum{kind="fs",app="kubernetes"} 0.3097010759999998\n'
'etcd_disk_wal_fsync_duration_seconds_count{kind="fs",app="kubernetes"} 751\n')
expected_etcd_metric = metrics_pb2.MetricFamily()
expected_etcd_metric.help = "The latency distributions of fsync called by wal."
expected_etcd_metric.name = "etcd_disk_wal_fsync_duration_seconds"
expected_etcd_metric.type = 4
# Vault
histogram_metric = expected_etcd_metric.metric.add()
# Label for app vault
summary_label = histogram_metric.label.add()
summary_label.name, summary_label.value = "kind", "fs"
summary_label = histogram_metric.label.add()
summary_label.name, summary_label.value = "app", "vault"
for upper_bound, cumulative_count in [
(0.001, 2),
(0.002, 2),
(0.004, 2),
(0.008, 2),
(0.016, 4),
(0.032, 4),
(0.064, 4),
(0.128, 4),
(0.256, 4),
(0.512, 4),
(1.024, 4),
(2.048, 4),
(4.096, 4),
(8.192, 4),
(float('inf'), 4),
]:
bucket = histogram_metric.histogram.bucket.add()
bucket.upper_bound = upper_bound
bucket.cumulative_count = cumulative_count
# Root histogram sample
histogram_metric.histogram.sample_count = 4
histogram_metric.histogram.sample_sum = 0.026131671
# Kubernetes
histogram_metric = expected_etcd_metric.metric.add()
# Label for app kubernetes
summary_label = histogram_metric.label.add()
summary_label.name, summary_label.value = "kind", "fs"
summary_label = histogram_metric.label.add()
summary_label.name, summary_label.value = "app", "kubernetes"
for upper_bound, cumulative_count in [
(0.001, 718),
(0.002, 740),
(0.004, 743),
(0.008, 748),
(0.016, 751),
(0.032, 751),
(0.064, 751),
(0.128, 751),
(0.256, 751),
(0.512, 751),
(1.024, 751),
(2.048, 751),
(4.096, 751),
(8.192, 751),
(float('inf'), 751),
]:
bucket = histogram_metric.histogram.bucket.add()
bucket.upper_bound = upper_bound
bucket.cumulative_count = cumulative_count
# Root histogram sample
histogram_metric.histogram.sample_count = 751
histogram_metric.histogram.sample_sum = 0.3097010759999998
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
self.assertEqual(expected_etcd_metric, current_metric)
def test_parse_one_summary(self):
"""
name: "http_response_size_bytes"
help: "The HTTP response sizes in bytes."
type: SUMMARY
metric {
label {
name: "handler"
value: "prometheus"
}
summary {
sample_count: 5
sample_sum: 120512.0
quantile {
quantile: 0.5
value: 24547.0
}
quantile {
quantile: 0.9
value: 25763.0
}
quantile {
quantile: 0.99
value: 25763.0
}
}
}
"""
text_data = (
'# HELP http_response_size_bytes The HTTP response sizes in bytes.\n'
'# TYPE http_response_size_bytes summary\n'
'http_response_size_bytes{handler="prometheus",quantile="0.5"} 24547\n'
'http_response_size_bytes{handler="prometheus",quantile="0.9"} 25763\n'
'http_response_size_bytes{handler="prometheus",quantile="0.99"} 25763\n'
'http_response_size_bytes_sum{handler="prometheus"} 120512\n'
'http_response_size_bytes_count{handler="prometheus"} 5\n')
expected_etcd_metric = metrics_pb2.MetricFamily()
expected_etcd_metric.help = "The HTTP response sizes in bytes."
expected_etcd_metric.name = "http_response_size_bytes"
expected_etcd_metric.type = 2
summary_metric = expected_etcd_metric.metric.add()
# Label for prometheus handler
summary_label = summary_metric.label.add()
summary_label.name, summary_label.value = "handler", "prometheus"
# Root summary sample
summary_metric.summary.sample_count = 5
summary_metric.summary.sample_sum = 120512
# Create quantiles 0.5, 0.9, 0.99
quantile_05 = summary_metric.summary.quantile.add()
quantile_05.quantile = 0.5
quantile_05.value = 24547
quantile_09 = summary_metric.summary.quantile.add()
quantile_09.quantile = 0.9
quantile_09.value = 25763
quantile_099 = summary_metric.summary.quantile.add()
quantile_099.quantile = 0.99
quantile_099.value = 25763
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
self.assertEqual(expected_etcd_metric, current_metric)
def test_parse_two_summaries_with_labels(self):
text_data = (
'# HELP http_response_size_bytes The HTTP response sizes in bytes.\n'
'# TYPE http_response_size_bytes summary\n'
'http_response_size_bytes{from="internet",handler="prometheus",quantile="0.5"} 24547\n'
'http_response_size_bytes{from="internet",handler="prometheus",quantile="0.9"} 25763\n'
'http_response_size_bytes{from="internet",handler="prometheus",quantile="0.99"} 25763\n'
'http_response_size_bytes_sum{from="internet",handler="prometheus"} 120512\n'
'http_response_size_bytes_count{from="internet",handler="prometheus"} 5\n'
'http_response_size_bytes{from="cluster",handler="prometheus",quantile="0.5"} 24615\n'
'http_response_size_bytes{from="cluster",handler="prometheus",quantile="0.9"} 24627\n'
'http_response_size_bytes{from="cluster",handler="prometheus",quantile="0.99"} 24627\n'
'http_response_size_bytes_sum{from="cluster",handler="prometheus"} 94913\n'
'http_response_size_bytes_count{from="cluster",handler="prometheus"} 4\n')
expected_etcd_metric = metrics_pb2.MetricFamily()
expected_etcd_metric.help = "The HTTP response sizes in bytes."
expected_etcd_metric.name = "http_response_size_bytes"
expected_etcd_metric.type = 2
# Metric from internet #
summary_metric_from_internet = expected_etcd_metric.metric.add()
# Label for prometheus handler
summary_label = summary_metric_from_internet.label.add()
summary_label.name, summary_label.value = "handler", "prometheus"
summary_label = summary_metric_from_internet.label.add()
summary_label.name, summary_label.value = "from", "internet"
# Root summary sample
summary_metric_from_internet.summary.sample_count = 5
summary_metric_from_internet.summary.sample_sum = 120512
# Create quantiles 0.5, 0.9, 0.99
quantile_05 = summary_metric_from_internet.summary.quantile.add()
quantile_05.quantile = 0.5
quantile_05.value = 24547
quantile_09 = summary_metric_from_internet.summary.quantile.add()
quantile_09.quantile = 0.9
quantile_09.value = 25763
quantile_099 = summary_metric_from_internet.summary.quantile.add()
quantile_099.quantile = 0.99
quantile_099.value = 25763
# Metric from cluster #
summary_metric_from_cluster = expected_etcd_metric.metric.add()
# Label for prometheus handler
summary_label = summary_metric_from_cluster.label.add()
summary_label.name, summary_label.value = "handler", "prometheus"
summary_label = summary_metric_from_cluster.label.add()
summary_label.name, summary_label.value = "from", "cluster"
# Root summary sample
summary_metric_from_cluster.summary.sample_count = 4
summary_metric_from_cluster.summary.sample_sum = 94913
# Create quantiles 0.5, 0.9, 0.99
quantile_05 = summary_metric_from_cluster.summary.quantile.add()
quantile_05.quantile = 0.5
quantile_05.value = 24615
quantile_09 = summary_metric_from_cluster.summary.quantile.add()
quantile_09.quantile = 0.9
quantile_09.value = 24627
quantile_099 = summary_metric_from_cluster.summary.quantile.add()
quantile_099.quantile = 0.99
quantile_099.value = 24627
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
self.assertEqual(expected_etcd_metric, current_metric)
def test_parse_one_summary_with_none_values(self):
text_data = (
'# HELP http_response_size_bytes The HTTP response sizes in bytes.\n'
'# TYPE http_response_size_bytes summary\n'
'http_response_size_bytes{handler="prometheus",quantile="0.5"} NaN\n'
'http_response_size_bytes{handler="prometheus",quantile="0.9"} NaN\n'
'http_response_size_bytes{handler="prometheus",quantile="0.99"} NaN\n'
'http_response_size_bytes_sum{handler="prometheus"} 0\n'
'http_response_size_bytes_count{handler="prometheus"} 0\n')
expected_etcd_metric = metrics_pb2.MetricFamily()
expected_etcd_metric.help = "The HTTP response sizes in bytes."
expected_etcd_metric.name = "http_response_size_bytes"
expected_etcd_metric.type = 2
summary_metric = expected_etcd_metric.metric.add()
# Label for prometheus handler
summary_label = summary_metric.label.add()
summary_label.name, summary_label.value = "handler", "prometheus"
# Root summary sample
summary_metric.summary.sample_count = 0
summary_metric.summary.sample_sum = 0.
# Create quantiles 0.5, 0.9, 0.99
quantile_05 = summary_metric.summary.quantile.add()
quantile_05.quantile = 0.5
quantile_05.value = float('nan')
quantile_09 = summary_metric.summary.quantile.add()
quantile_09.quantile = 0.9
quantile_09.value = float('nan')
quantile_099 = summary_metric.summary.quantile.add()
quantile_099.quantile = 0.99
quantile_099.value = float('nan')
# Iter on the generator to get all metrics
response = MockResponse(text_data, 'text/plain; version=0.0.4')
metrics = [k for k in self.check.parse_metric_family(response)]
self.assertEqual(1, len(metrics))
current_metric = metrics[0]
# NaN is never equal to itself, so assertEqual on the protobuf messages
# would fail; compare their string representations instead
self.assertEqual(repr(expected_etcd_metric), repr(current_metric))
@patch('requests.get')
def test_label_joins(self, mock_get):
""" Tests label join on text format """
text_data = None
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'ksm.txt')
with open(f_name, 'r') as f:
text_data = f.read()
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: text_data.split("\n"),
headers={'Content-Type': "text/plain"})
self.check.NAMESPACE = 'ksm'
self.check.label_joins = {
'kube_pod_info': {
'label_to_match': 'pod',
'labels_to_get': ['node', 'pod_ip']
},
'kube_deployment_labels': {
'label_to_match': 'deployment',
'labels_to_get': ['label_addonmanager_kubernetes_io_mode', 'label_k8s_app', 'label_kubernetes_io_cluster_service']
}
}
self.check.metrics_mapper = {'kube_pod_status_ready': 'pod.ready',
'kube_pod_status_scheduled': 'pod.scheduled',
'kube_deployment_status_replicas': 'deploy.replicas.available'}
self.check.gauge = MagicMock()
# dry run to build mapping
self.check.process("http://fake.endpoint:10055/metrics")
# run with submit
self.check.process("http://fake.endpoint:10055/metrics")
# check a bunch of metrics
self.check.gauge.assert_has_calls([
call('ksm.pod.ready', 1.0, ['pod:event-exporter-v0.1.7-958884745-qgnbw', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.32.3.14'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-6dj58', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.132.0.7'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-z348z', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z', 'pod_ip:11.132.0.14'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:heapster-v1.4.3-2027615481-lmjm5', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z', 'pod_ip:11.32.5.7'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:kube-dns-3092422022-lvrmx', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.32.3.10'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:kube-dns-3092422022-x0tjx', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.32.3.9'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:kube-dns-autoscaler-97162954-mf6d3', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z', 'pod_ip:11.32.5.6'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:kube-proxy-gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.132.0.7'], hostname=None),
call('ksm.pod.scheduled', 1.0, ['pod:ungaged-panther-kube-state-metrics-3918010230-64xwc', 'namespace:default', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z', 'pod_ip:11.32.5.45'], hostname=None),
call('ksm.pod.scheduled', 1.0, ['pod:event-exporter-v0.1.7-958884745-qgnbw', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.32.3.14'], hostname=None),
call('ksm.pod.scheduled', 1.0, ['pod:fluentd-gcp-v2.0.9-6dj58', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.132.0.7'], hostname=None),
call('ksm.pod.scheduled', 1.0, ['pod:fluentd-gcp-v2.0.9-z348z', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z', 'pod_ip:11.132.0.14'], hostname=None),
call('ksm.pod.scheduled', 1.0, ['pod:heapster-v1.4.3-2027615481-lmjm5', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z', 'pod_ip:11.32.5.7'], hostname=None),
call('ksm.pod.scheduled', 1.0, ['pod:kube-dns-3092422022-lvrmx', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.32.3.10'], hostname=None),
call('ksm.pod.scheduled', 1.0, ['pod:kube-dns-3092422022-x0tjx', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.32.3.9'], hostname=None),
call('ksm.deploy.replicas.available', 1.0, ['namespace:kube-system', 'deployment:event-exporter-v0.1.7', 'label_k8s_app:event-exporter', 'label_addonmanager_kubernetes_io_mode:Reconcile', 'label_kubernetes_io_cluster_service:true'], hostname=None),
call('ksm.deploy.replicas.available', 1.0, ['namespace:kube-system', 'deployment:heapster-v1.4.3', 'label_k8s_app:heapster', 'label_addonmanager_kubernetes_io_mode:Reconcile', 'label_kubernetes_io_cluster_service:true'], hostname=None),
call('ksm.deploy.replicas.available', 2.0, ['namespace:kube-system', 'deployment:kube-dns', 'label_kubernetes_io_cluster_service:true', 'label_addonmanager_kubernetes_io_mode:Reconcile', 'label_k8s_app:kube-dns'], hostname=None),
call('ksm.deploy.replicas.available', 1.0, ['namespace:kube-system', 'deployment:kube-dns-autoscaler', 'label_kubernetes_io_cluster_service:true', 'label_addonmanager_kubernetes_io_mode:Reconcile', 'label_k8s_app:kube-dns-autoscaler'], hostname=None),
call('ksm.deploy.replicas.available', 1.0, ['namespace:kube-system', 'deployment:kubernetes-dashboard', 'label_kubernetes_io_cluster_service:true', 'label_addonmanager_kubernetes_io_mode:Reconcile', 'label_k8s_app:kubernetes-dashboard'], hostname=None),
call('ksm.deploy.replicas.available', 1.0, ['namespace:kube-system', 'deployment:l7-default-backend', 'label_k8s_app:glbc', 'label_addonmanager_kubernetes_io_mode:Reconcile', 'label_kubernetes_io_cluster_service:true'], hostname=None),
call('ksm.deploy.replicas.available', 1.0, ['namespace:kube-system', 'deployment:tiller-deploy'], hostname=None),
call('ksm.deploy.replicas.available', 1.0, ['namespace:default', 'deployment:ungaged-panther-kube-state-metrics'], hostname=None)
], any_order=True)
@patch('requests.get')
def test_label_joins_gc(self, mock_get):
""" Tests label join GC on text format """
text_data = None
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'ksm.txt')
with open(f_name, 'r') as f:
text_data = f.read()
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: text_data.split("\n"),
headers={'Content-Type': "text/plain"})
self.check.NAMESPACE = 'ksm'
self.check.label_joins = {
'kube_pod_info': {
'label_to_match': 'pod',
'labels_to_get': ['node', 'pod_ip']
}
}
self.check.metrics_mapper = {'kube_pod_status_ready': 'pod.ready'}
self.check.gauge = MagicMock()
# dry run to build mapping
self.check.process("http://fake.endpoint:10055/metrics")
# run with submit
self.check.process("http://fake.endpoint:10055/metrics")
# check a bunch of metrics
self.check.gauge.assert_has_calls([
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-6dj58', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch', 'pod_ip:11.132.0.7'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-z348z', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z', 'pod_ip:11.132.0.14'], hostname=None),
], any_order=True)
self.assertEqual(15, len(self.check._label_mapping['pod']))
text_data = text_data.replace('dd-agent-62bgh', 'dd-agent-1337')
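# Simulate pod churn: the old pod vanishes from the payload and a new one
# appears, so the stored label mapping should be garbage collected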
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: text_data.split("\n"),
headers={'Content-Type': "text/plain"})
self.check.process("http://fake.endpoint:10055/metrics")
self.assertTrue('dd-agent-1337' in self.check._label_mapping['pod'])
self.assertFalse('dd-agent-62bgh' in self.check._label_mapping['pod'])
self.assertEqual(15, len(self.check._label_mapping['pod']))
@patch('requests.get')
def test_label_joins_misconfigured(self, mock_get):
""" Tests that a misconfigured label in a label join is ignored """
text_data = None
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'ksm.txt')
with open(f_name, 'r') as f:
text_data = f.read()
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: text_data.split("\n"),
headers={'Content-Type': "text/plain"})
self.check.NAMESPACE = 'ksm'
self.check.label_joins = {
'kube_pod_info': {
'label_to_match': 'pod',
'labels_to_get': ['node', 'not_existing']
}
}
self.check.metrics_mapper = {'kube_pod_status_ready': 'pod.ready'}
self.check.gauge = MagicMock()
# dry run to build mapping
self.check.process("http://fake.endpoint:10055/metrics")
# run with submit
self.check.process("http://fake.endpoint:10055/metrics")
# check a bunch of metrics
self.check.gauge.assert_has_calls([
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-6dj58', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-z348z', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z'], hostname=None),
], any_order=True)
@patch('requests.get')
def test_label_join_not_existing(self, mock_get):
""" Tests label join on non existing matching label is ignored """
text_data = None
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'ksm.txt')
with open(f_name, 'r') as f:
text_data = f.read()
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: text_data.split("\n"),
headers={'Content-Type': "text/plain"})
self.check.NAMESPACE = 'ksm'
self.check.label_joins = {
'kube_pod_info': {
'label_to_match': 'not_existing',
'labels_to_get': ['node', 'pod_ip']
}
}
self.check.metrics_mapper = {'kube_pod_status_ready': 'pod.ready'}
self.check.gauge = MagicMock()
# dry run to build mapping
self.check.process("http://fake.endpoint:10055/metrics")
# run with submit
self.check.process("http://fake.endpoint:10055/metrics")
# check a bunch of metrics
self.check.gauge.assert_has_calls([
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-6dj58', 'namespace:kube-system', 'condition:true'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-z348z', 'namespace:kube-system', 'condition:true'], hostname=None),
], any_order=True)
@patch('requests.get')
def test_label_join_metric_not_existing(self, mock_get):
""" Tests label join on non existing metric is ignored """
text_data = None
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'ksm.txt')
with open(f_name, 'r') as f:
text_data = f.read()
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: text_data.split("\n"),
headers={'Content-Type': "text/plain"})
self.check.NAMESPACE = 'ksm'
self.check.label_joins = {
'not_existing': {
'label_to_match': 'pod',
'labels_to_get': ['node', 'pod_ip']
}
}
self.check.metrics_mapper = {'kube_pod_status_ready': 'pod.ready'}
self.check.gauge = MagicMock()
# dry run to build mapping
self.check.process("http://fake.endpoint:10055/metrics")
# run with submit
self.check.process("http://fake.endpoint:10055/metrics")
# check a bunch of metrics
self.check.gauge.assert_has_calls([
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-6dj58', 'namespace:kube-system', 'condition:true'], hostname=None),
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-z348z', 'namespace:kube-system', 'condition:true'], hostname=None),
], any_order=True)
@patch('requests.get')
def test_label_join_with_hostname(self, mock_get):
""" Tests label join and hostname override on a metric """
text_data = None
f_name = os.path.join(os.path.dirname(__file__), 'fixtures', 'prometheus', 'ksm.txt')
with open(f_name, 'r') as f:
text_data = f.read()
mock_get.return_value = MagicMock(
status_code=200,
iter_lines=lambda **kwargs: text_data.split("\n"),
headers={'Content-Type': "text/plain"})
self.check.NAMESPACE = 'ksm'
self.check.label_joins = {
'kube_pod_info': {
'label_to_match': 'pod',
'labels_to_get': ['node']
}
}
self.check.label_to_hostname = 'node'
self.check.metrics_mapper = {'kube_pod_status_ready': 'pod.ready'}
self.check.gauge = MagicMock()
# dry run to build mapping
self.check.process("http://fake.endpoint:10055/metrics")
# run with submit
self.check.process("http://fake.endpoint:10055/metrics")
# check a bunch of metrics
self.check.gauge.assert_has_calls([
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-6dj58', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-0kch'], hostname='gke-foobar-test-kube-default-pool-9b4ff111-0kch'),
call('ksm.pod.ready', 1.0, ['pod:fluentd-gcp-v2.0.9-z348z', 'namespace:kube-system', 'condition:true', 'node:gke-foobar-test-kube-default-pool-9b4ff111-j75z'], hostname='gke-foobar-test-kube-default-pool-9b4ff111-j75z'),
], any_order=True)
| 49.148118 | 265 | 0.632033 | 7,887 | 61,386 | 4.663117 | 0.06796 | 0.032302 | 0.023329 | 0.033933 | 0.800071 | 0.773778 | 0.747295 | 0.708793 | 0.685709 | 0.669639 | 0 | 0.0543 | 0.243981 | 61,386 | 1,248 | 266 | 49.1875 | 0.738176 | 0.092904 | 0 | 0.525028 | 0 | 0.002225 | 0.333217 | 0.221588 | 0 | 0 | 0 | 0 | 0.080089 | 1 | 0.046719 | false | 0.001112 | 0.007786 | 0 | 0.058954 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
8a27b25c87bb472d12f2ed64bfae08737339adbd | 197 | py | Python | apps/ots/strategy/risk_manager_base.py | yt7589/iching | 6673da38f4c80e7fd297c86fedc5616aee8ac09b | [
"Apache-2.0"
] | 32 | 2020-04-14T08:32:18.000Z | 2022-02-09T07:05:08.000Z | apps/ots/strategy/risk_manager_base.py | trinh-hoang-hiep/iching | e1feae5741c3cbde535d7a275b01d4f0cf9e21ed | [
"Apache-2.0"
] | 1 | 2020-04-08T10:42:15.000Z | 2020-04-15T01:38:03.000Z | apps/ots/strategy/risk_manager_base.py | trinh-hoang-hiep/iching | e1feae5741c3cbde535d7a275b01d4f0cf9e21ed | [
"Apache-2.0"
] | 4 | 2020-08-25T03:56:46.000Z | 2021-05-11T05:55:51.000Z | #
from apps.ots.event.signal_event import SignalEvent
class RiskManagerBase(object):
def __init__(self):
self.refl = ''
def get_mkt_quantity(self, signalEvent):
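# Fixed position size for every signal; the signalEvent argument is
# currently unused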
return 100 | 21.888889 | 51 | 0.695431 | 24 | 197 | 5.416667 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019355 | 0.213198 | 197 | 9 | 52 | 21.888889 | 0.819355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8a2c327f1ed1b2e7f623cb9ef6ad2bb57bac8b1d | 78 | py | Python | tests/vis/__init__.py | UCLCheminformatics/ScaffoldGraph | 0443ce118110290a99601d65b2d000ac8bc7a1e9 | [
"MIT"
] | 121 | 2019-12-12T15:30:16.000Z | 2022-02-28T02:00:54.000Z | tests/vis/__init__.py | UCLCheminformatics/ScaffoldGraph | 0443ce118110290a99601d65b2d000ac8bc7a1e9 | [
"MIT"
] | 8 | 2020-04-04T15:37:26.000Z | 2021-11-17T07:30:31.000Z | tests/vis/__init__.py | UCLCheminformatics/ScaffoldGraph | 0443ce118110290a99601d65b2d000ac8bc7a1e9 | [
"MIT"
] | 28 | 2019-12-16T11:58:53.000Z | 2021-11-19T09:57:46.000Z | """
scaffoldgraph tests.vis
"""
from ..test_network import long_test_network
| 13 | 44 | 0.769231 | 10 | 78 | 5.7 | 0.8 | 0.385965 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 78 | 5 | 45 | 15.6 | 0.826087 | 0.294872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
8a4f9f2880e7d0b0c9a30ba271e8c017476374ad | 395 | py | Python | flaskblog/errors/handlers.py | saudahabib/writerly | c7c52d481a35ced08dbace1e3298f85be0597785 | [
"MIT"
] | null | null | null | flaskblog/errors/handlers.py | saudahabib/writerly | c7c52d481a35ced08dbace1e3298f85be0597785 | [
"MIT"
] | null | null | null | flaskblog/errors/handlers.py | saudahabib/writerly | c7c52d481a35ced08dbace1e3298f85be0597785 | [
"MIT"
] | null | null | null | from flask import Blueprint, render_template
errors = Blueprint('errors', __name__)
@errors.app_errorhandler(404)
def error_404(error):
return render_template('errors/404.html'),404
@errors.app_errorhandler(403)
def error_403(error):
return render_template('errors/403.html'),403
@errors.app_errorhandler(500)
def error_500(errors):
return render_template('errors/500.html'),500
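# Note: app_errorhandler installs these handlers application-wide, but only
# once the blueprint is registered, e.g. (in an assumed application factory):
#     app.register_blueprint(errors)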
| 24.6875 | 49 | 0.774684 | 55 | 395 | 5.309091 | 0.309091 | 0.191781 | 0.273973 | 0.267123 | 0.212329 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101408 | 0.101266 | 395 | 15 | 50 | 26.333333 | 0.721127 | 0 | 0 | 0 | 0 | 0 | 0.129114 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0.272727 | 0.636364 | 0.181818 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8a6e779a5a1187453f8212bf49fc6479c3e26133 | 205 | py | Python | test_Calculator/src/calculator.py | XuXuClassMate/My_Test_PyProject | 5822455af47f5855d1db4c388c2c973c440a4d3f | [
"Apache-2.0"
] | null | null | null | test_Calculator/src/calculator.py | XuXuClassMate/My_Test_PyProject | 5822455af47f5855d1db4c388c2c973c440a4d3f | [
"Apache-2.0"
] | null | null | null | test_Calculator/src/calculator.py | XuXuClassMate/My_Test_PyProject | 5822455af47f5855d1db4c388c2c973c440a4d3f | [
"Apache-2.0"
] | null | null | null | class Calculator:
def add(self, a, b):
return a + b
def sub(self, a, b):
return a - b
def mul(self, a, b):
return a * b
def div(self, a, b):
return a / b
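# Minimal usage sketch (hypothetical driver, not part of the original module).
# Note that div() does not guard against b == 0, and under Python 2 integer
# division truncates (e.g. 5 / 2 == 2).
if __name__ == '__main__':
    calc = Calculator()
    assert calc.add(2, 3) == 5
    assert calc.sub(2, 3) == -1
    assert calc.mul(2, 3) == 6
    assert calc.div(6, 3) == 2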
| 15.769231 | 24 | 0.463415 | 34 | 205 | 2.794118 | 0.323529 | 0.168421 | 0.252632 | 0.505263 | 0.684211 | 0.684211 | 0.536842 | 0 | 0 | 0 | 0 | 0 | 0.414634 | 205 | 12 | 25 | 17.083333 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.444444 | false | 0 | 0 | 0.444444 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8a7150859aaae45df48c268b56b41b09693387cc | 51 | py | Python | plugins/takeaphoto.py | dark10crk001/alita3.0 | 70db1bf27e703cfe25f049b1644c0932ce78176b | [
"Unlicense"
] | null | null | null | plugins/takeaphoto.py | dark10crk001/alita3.0 | 70db1bf27e703cfe25f049b1644c0932ce78176b | [
"Unlicense"
] | null | null | null | plugins/takeaphoto.py | dark10crk001/alita3.0 | 70db1bf27e703cfe25f049b1644c0932ce78176b | [
"Unlicense"
] | null | null | null | def exec(cmd,question):
return "来看着我的眼睛,微笑,茄子。" | 25.5 | 27 | 0.686275 | 8 | 51 | 4.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 51 | 2 | 27 | 25.5 | 0.795455 | 0 | 0 | 0 | 0 | 0 | 0.269231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
8a7be333b83737a660d087bcb006b8d7c867297a | 235 | py | Python | dphsir/degrades/__init__.py | Zeqiang-Lai/DPHSIR | aef3bdabeed7b900b63a2b9f0a5222458b38554a | [
"MIT"
] | 3 | 2022-02-06T02:44:46.000Z | 2022-03-09T01:37:01.000Z | dphsir/degrades/__init__.py | Zeqiang-Lai/DPHSIR | aef3bdabeed7b900b63a2b9f0a5222458b38554a | [
"MIT"
] | null | null | null | dphsir/degrades/__init__.py | Zeqiang-Lai/DPHSIR | aef3bdabeed7b900b63a2b9f0a5222458b38554a | [
"MIT"
] | null | null | null | from .general import AffineTransform, PerspectiveTransform, HSI2RGB
from .blur import GaussianBlur, UniformBlur
from .sr import GaussianDownsample, BiCubicDownsample, UniformDownsample
from . import cs
from .noise import GaussianNoise
| 39.166667 | 72 | 0.851064 | 24 | 235 | 8.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004762 | 0.106383 | 235 | 5 | 73 | 47 | 0.947619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
8ab549ab75be83199fc1012ade3cf9a230881b43 | 146 | py | Python | Examples/basic_scene.py | maithamn/BrainRender | 9359ccc5b278f58ee3124bcf75b9ebefe0378bbc | [
"MIT"
] | null | null | null | Examples/basic_scene.py | maithamn/BrainRender | 9359ccc5b278f58ee3124bcf75b9ebefe0378bbc | [
"MIT"
] | null | null | null | Examples/basic_scene.py | maithamn/BrainRender | 9359ccc5b278f58ee3124bcf75b9ebefe0378bbc | [
"MIT"
] | null | null | null | """
This tutorial shows how to create and render a brainrender scene
"""
from brainrender.scene import Scene
scene = Scene()
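# Actors such as brain regions could be added here before rendering; helper
# names vary across brainrender versions, so none are assumed here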
scene.render() | 18.25 | 68 | 0.726027 | 20 | 146 | 5.3 | 0.65 | 0.283019 | 0.283019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184932 | 146 | 8 | 69 | 18.25 | 0.890756 | 0.438356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
8ab883be71797dc36182b1784c96407bcbd9eab9 | 85 | py | Python | back-end/app/api/__init__.py | IMYCJ/flask-vuejs-blog | 3faeb332c6bc4c749f9df2b7613055b24cb3a3d5 | [
"MIT"
] | null | null | null | back-end/app/api/__init__.py | IMYCJ/flask-vuejs-blog | 3faeb332c6bc4c749f9df2b7613055b24cb3a3d5 | [
"MIT"
] | 4 | 2021-03-10T19:55:06.000Z | 2022-02-27T05:34:17.000Z | back-end/app/api/__init__.py | IMYCJ/flask-vuejs-blog | 3faeb332c6bc4c749f9df2b7613055b24cb3a3d5 | [
"MIT"
] | null | null | null | from flask import Blueprint
bp = Blueprint('api',__name__)
from app.api import ping | 17 | 30 | 0.776471 | 13 | 85 | 4.769231 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141176 | 85 | 5 | 31 | 17 | 0.849315 | 0 | 0 | 0 | 0 | 0 | 0.034884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 5 |
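# Imported at the bottom so the ping module, which presumably imports bp
# from this package, loads without a circular-import error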
76f46dafdc669e03a0af9196c0640d1346c9bf06 | 50,955 | py | Python | tests/sentry/lang/javascript/test_plugin.py | gecka/sentry | 9bfcde5f244dc4a8d5cf81222f14d3f8de1d9877 | [
"BSD-3-Clause"
] | 1 | 2018-12-04T12:57:00.000Z | 2018-12-04T12:57:00.000Z | tests/sentry/lang/javascript/test_plugin.py | gecka/sentry | 9bfcde5f244dc4a8d5cf81222f14d3f8de1d9877 | [
"BSD-3-Clause"
] | 1 | 2020-05-12T05:44:07.000Z | 2020-05-12T05:44:07.000Z | tests/sentry/lang/javascript/test_plugin.py | gecka/sentry | 9bfcde5f244dc4a8d5cf81222f14d3f8de1d9877 | [
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
from __future__ import absolute_import
import pytest
import os.path
import responses
from mock import patch
from django.conf import settings
from sentry.models import Event, File, Release, ReleaseFile
from sentry.testutils import TestCase
BASE64_SOURCEMAP = 'data:application/json;base64,' + (
'{"version":3,"file":"generated.js","sources":["/test.js"],"names":[],"mappings":"AAAA","sourcesContent":["console.log(\\"hello, World!\\")"]}'.
encode('base64').replace('\n', '')
)
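# An inline data-URI sourcemap whose sourcesContent embeds the original
# source, so resolving it requires no extra HTTP fetch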
def get_fixture_path(name):
return os.path.join(os.path.dirname(__file__), 'fixtures', name)
def load_fixture(name):
with open(get_fixture_path(name), 'rb') as fp:
return fp.read()
class JavascriptIntegrationTest(TestCase):
@pytest.mark.skipif(
settings.SENTRY_TAGSTORE == 'sentry.tagstore.v2.V2TagStorage',
reason='Queries are completely different when using tagstore'
)
def test_adds_contexts_without_device(self):
data = {
'message': 'hello',
'platform': 'javascript',
'request': {
'url':
'http://example.com',
'headers': [
[
'User-Agent',
'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36'
],
],
}
}
# We do a preflight post because many queries (like auth_user) pollute
# the query counts before the actual "processing" happens
self._postWithHeader(data)
with self.assertWriteQueries({
'nodestore_node': 2,
'sentry_eventtag': 1,
'sentry_eventuser': 1,
'sentry_filtervalue': 6,
'sentry_groupedmessage': 1,
'sentry_message': 1,
'sentry_messagefiltervalue': 6,
'sentry_userreport': 1
}, debug=True): # debug=True is for coverage
resp = self._postWithHeader(data)
assert resp.status_code, 200
event = Event.objects.first()
contexts = event.interfaces['contexts'].to_json()
assert contexts.get('os') == {
'name': 'Windows 8',
'type': 'os',
}
assert contexts.get('browser') == {
'name': 'Chrome',
'type': 'browser',
'version': '28.0.1500',
}
assert contexts.get('device') is None
def test_adds_contexts_with_device(self):
data = {
'message': 'hello',
'platform': 'javascript',
'request': {
'url':
'http://example.com',
'headers': [
[
'User-Agent',
'Mozilla/5.0 (Linux; U; Android 4.3; en-us; SCH-R530U Build/JSS15J) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30 USCC-R530U'
],
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code, 200
event = Event.objects.get()
contexts = event.interfaces['contexts'].to_json()
assert contexts.get('os') == {
'name': 'Android',
'type': 'os',
'version': '4.3',
}
assert contexts.get('browser') == {
'name': 'Android',
'type': 'browser',
'version': '4.3',
}
assert contexts.get('device') == {
'family': 'Samsung SCH-R530U',
'type': 'device',
'model': 'SCH-R530U',
'brand': 'Samsung',
}
def test_adds_contexts_with_ps4_device(self):
data = {
'message': 'hello',
'platform': 'javascript',
'request': {
'url':
'http://example.com',
'headers': [
[
'User-Agent',
'Mozilla/5.0 (PlayStation 4 3.55) AppleWebKit/537.78 (KHTML, like Gecko)'
],
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
contexts = event.interfaces['contexts'].to_json()
assert contexts.get('os') is None
assert contexts.get('browser') is None
assert contexts.get('device') == {
'family': 'PlayStation 4',
'type': 'device',
'model': 'PlayStation 4',
'brand': 'Sony',
}
@patch('sentry.lang.javascript.processor.fetch_file')
def test_source_expansion(self, mock_fetch_file):
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/foo.js',
'filename': 'foo.js',
'lineno': 4,
'colno': 0,
},
{
'abs_path': 'http://example.com/foo.js',
'filename': 'foo.js',
'lineno': 1,
'colno': 0,
},
],
},
}
],
}
}
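# '\n'.join over a string iterates its characters, so the mocked file
# body below is one character per line ('h', 'e', 'l', ...); the
# pre/post context assertions further down depend on this.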
mock_fetch_file.return_value.body = '\n'.join('hello world')
mock_fetch_file.return_value.encoding = None
resp = self._postWithHeader(data)
assert resp.status_code == 200
mock_fetch_file.assert_called_once_with(
'http://example.com/foo.js',
project=self.project,
release=None,
dist=None,
allow_scraping=True,
)
event = Event.objects.get()
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
assert frame.pre_context == ['h', 'e', 'l']
assert frame.context_line == 'l'
assert frame.post_context == ['o', ' ', 'w', 'o', 'r']
frame = frame_list[1]
assert not frame.pre_context
assert frame.context_line == 'h'
assert frame.post_context == ['e', 'l', 'l', 'o', ' ']
# no source map means no raw_stacktrace
assert exception.values[0].raw_stacktrace is None
@patch('sentry.lang.javascript.processor.fetch_file')
@patch('sentry.lang.javascript.processor.discover_sourcemap')
def test_inlined_sources(self, mock_discover_sourcemap, mock_fetch_file):
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/test.min.js',
'filename': 'test.js',
'lineno': 1,
'colno': 1,
},
],
},
}
],
}
}
mock_discover_sourcemap.return_value = BASE64_SOURCEMAP
mock_fetch_file.return_value.url = 'http://example.com/test.min.js'
mock_fetch_file.return_value.body = '\n'.join('<generated source>')
mock_fetch_file.return_value.encoding = None
resp = self._postWithHeader(data)
assert resp.status_code == 200
mock_fetch_file.assert_called_once_with(
'http://example.com/test.min.js',
project=self.project,
release=None,
dist=None,
allow_scraping=True,
)
event = Event.objects.get()
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
assert not frame.pre_context
assert frame.context_line == 'console.log("hello, World!")'
assert not frame.post_context
assert frame.data['sourcemap'] == 'http://example.com/test.min.js'
@responses.activate
def test_error_message_translations(self):
data = {
'message': 'hello',
'platform': 'javascript',
'logentry': {
'message': u'ReferenceError: Impossible de d\xe9finir une propri\xe9t\xe9 \xab foo \xbb : objet non extensible'
},
'exception': {
'values': [
{
'type': 'Error',
'value': u'P\u0159\xedli\u0161 mnoho soubor\u016f'
},
{
'type': 'Error',
'value': u'foo: wyst\u0105pi\u0142 nieoczekiwany b\u0142\u0105d podczas pr\xf3by uzyskania informacji o metadanych'
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
message = event.interfaces['logentry']
assert message.message == 'ReferenceError: Cannot define property \'foo\': object is not extensible'
exception = event.interfaces['exception']
assert exception.values[0].value == 'Too many files'
assert exception.values[1].value == 'foo: an unexpected failure occurred while trying to obtain metadata information'
@responses.activate
def test_sourcemap_source_expansion(self):
responses.add(
responses.GET,
'http://example.com/file.min.js',
body=load_fixture('file.min.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/file1.js',
body=load_fixture('file1.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/file2.js',
body=load_fixture('file2.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/file.sourcemap.js',
body=load_fixture('file.sourcemap.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(responses.GET, 'http://example.com/index.html', body='Not Found', status=404)
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/file.min.js',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
},
# NOTE: Source is intentionally not retrieved from this HTML file
{
'function': 'function: "HTMLDocument.<anonymous>"',
'abs_path': "http//example.com/index.html",
'filename': 'index.html',
'lineno': 283,
'colno': 17,
'in_app': False,
}
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert event.data['errors'] == [
{
'type': 'js_no_source',
'url': 'http//example.com/index.html'
}
]
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
assert frame.pre_context == [
'function add(a, b) {',
'\t"use strict";',
]
expected = u'\treturn a + b; // fôo'
assert frame.context_line == expected
assert frame.post_context == ['}', '']
raw_frame_list = exception.values[0].raw_stacktrace.frames
raw_frame = raw_frame_list[0]
assert not raw_frame.pre_context
assert raw_frame.context_line == 'function add(a,b){"use strict";return a+b}function multiply(a,b){"use strict";return a*b}function divide(a,b){"use strict";try{return multip {snip}'
assert raw_frame.post_context == ['//@ sourceMappingURL=file.sourcemap.js']
assert raw_frame.lineno == 1
# Since we couldn't expand the source for the 2nd frame, its raw
# and original forms should be identical
assert raw_frame_list[1] == frame_list[1]
@responses.activate
def test_sourcemap_embedded_source_expansion(self):
responses.add(
responses.GET,
'http://example.com/embedded.js',
body=load_fixture('embedded.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/embedded.js.map',
body=load_fixture('embedded.js.map'),
content_type='application/json; charset=utf-8'
)
responses.add(responses.GET, 'http://example.com/index.html', body='Not Found', status=404)
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/embedded.js',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
},
# NOTE: Source is intentionally not retrieved from this HTML file
{
'function': 'function: "HTMLDocument.<anonymous>"',
'abs_path': "http//example.com/index.html",
'filename': 'index.html',
'lineno': 283,
'colno': 17,
'in_app': False,
}
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert event.data['errors'] == [
{
'type': 'js_no_source',
'url': 'http//example.com/index.html'
}
]
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
assert frame.pre_context == [
'function add(a, b) {',
'\t"use strict";',
]
expected = u'\treturn a + b; // fôo'
assert frame.context_line == expected
assert frame.post_context == ['}', '']
@responses.activate
def test_sourcemap_nofiles_source_expansion(self):
project = self.project
release = Release.objects.create(
organization_id=project.organization_id,
version='abc',
)
release.add_project(project)
f_minified = File.objects.create(
name='nofiles.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f_minified.putfile(open(get_fixture_path('nofiles.js'), 'rb'))
ReleaseFile.objects.create(
name=u'~/{}'.format(f_minified.name),
release=release,
organization_id=project.organization_id,
file=f_minified,
)
f_sourcemap = File.objects.create(
name='nofiles.js.map',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f_sourcemap.putfile(open(get_fixture_path('nofiles.js.map'), 'rb'))
ReleaseFile.objects.create(
name=u'app:///{}'.format(f_sourcemap.name),
release=release,
organization_id=project.organization_id,
file=f_sourcemap,
)
data = {
'message': 'hello',
'platform': 'javascript',
'release': 'abc',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'app:///nofiles.js',
'lineno': 1,
'colno': 39,
}
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert 'errors' not in event.data
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
assert len(frame_list) == 1
frame = frame_list[0]
assert frame.pre_context == [
'function multiply(a, b) {',
'\t"use strict";',
]
assert frame.context_line == u'\treturn a * b;'
assert frame.post_context == [
'}',
'function divide(a, b) {',
'\t"use strict";',
'\ttry {',
'\t\treturn multiply(add(a, b), a, b) / c;'
]
@responses.activate
def test_indexed_sourcemap_source_expansion(self):
responses.add(
responses.GET,
'http://example.com/indexed.min.js',
body=load_fixture('indexed.min.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/file1.js',
body=load_fixture('file1.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/file2.js',
body=load_fixture('file2.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/indexed.sourcemap.js',
body=load_fixture('indexed.sourcemap.js'),
content_type='application/json; charset=utf-8'
)
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/indexed.min.js',
'filename': 'indexed.min.js',
'lineno': 1,
'colno': 39,
},
{
'abs_path': 'http://example.com/indexed.min.js',
'filename': 'indexed.min.js',
'lineno': 2,
'colno': 44,
},
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert 'errors' not in event.data
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
assert frame.pre_context == [
'function add(a, b) {',
'\t"use strict";',
]
expected = u'\treturn a + b; // fôo'
assert frame.context_line == expected
assert frame.post_context == ['}', '']
raw_frame_list = exception.values[0].raw_stacktrace.frames
raw_frame = raw_frame_list[0]
assert not raw_frame.pre_context
assert raw_frame.context_line == 'function add(a,b){"use strict";return a+b}'
assert raw_frame.post_context == [
'function multiply(a,b){"use strict";return a*b}function divide(a,b){"use strict";try{return multiply(add(a,b),a,b)/c}catch(e){Raven.captureE {snip}',
'//# sourceMappingURL=indexed.sourcemap.js', ''
]
assert raw_frame.lineno == 1
frame = frame_list[1]
assert frame.pre_context == [
'function multiply(a, b) {',
'\t"use strict";',
]
assert frame.context_line == '\treturn a * b;'
assert frame.post_context == [
'}',
'function divide(a, b) {',
'\t"use strict";',
'\ttry {',
'\t\treturn multiply(add(a, b), a, b) / c;',
]
raw_frame = raw_frame_list[1]
assert raw_frame.pre_context == ['function add(a,b){"use strict";return a+b}']
assert raw_frame.context_line == 'function multiply(a,b){"use strict";return a*b}function divide(a,b){"use strict";try{return multiply(add(a,b),a,b)/c}catch(e){Raven.captureE {snip}'
assert raw_frame.post_context == ['//# sourceMappingURL=indexed.sourcemap.js', '']
assert raw_frame.lineno == 2
@responses.activate
def test_expansion_via_release_artifacts(self):
project = self.project
release = Release.objects.create(
organization_id=project.organization_id,
version='abc',
)
release.add_project(project)
# file.min.js
# ------------
f_minified = File.objects.create(
name='file.min.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f_minified.putfile(open(get_fixture_path('file.min.js'), 'rb'))
# Intentionally omit hostname - use alternate artifact path lookup instead
# /file1.js vs http://example.com/file1.js
ReleaseFile.objects.create(
name=u'~/{}?foo=bar'.format(f_minified.name),
release=release,
organization_id=project.organization_id,
file=f_minified,
)
# file1.js
# ---------
f1 = File.objects.create(
name='file1.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f1.putfile(open(get_fixture_path('file1.js'), 'rb'))
ReleaseFile.objects.create(
name=u'http://example.com/{}'.format(f1.name),
release=release,
organization_id=project.organization_id,
file=f1,
)
# file2.js
# ----------
f2 = File.objects.create(
name='file2.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f2.putfile(open(get_fixture_path('file2.js'), 'rb'))
ReleaseFile.objects.create(
name=u'http://example.com/{}'.format(f2.name),
release=release,
organization_id=project.organization_id,
file=f2,
)
# To verify that the full url has priority over the relative url,
# we also add a second ReleaseFile alias for file2.js (f2_empty) w/o
# hostname that points to an empty file. If the processor chooses
# this empty file over the correct file2.js, it will not locate
# context for the 2nd frame.
f2_empty = File.objects.create(
name='empty.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f2_empty.putfile(open(get_fixture_path('empty.js'), 'rb'))
ReleaseFile.objects.create(
name=u'~/{}'.format(f2.name), # intentionally using f2.name ("file2.js")
release=release,
organization_id=project.organization_id,
file=f2_empty,
)
# sourcemap
# ----------
f_sourcemap = File.objects.create(
name='file.sourcemap.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f_sourcemap.putfile(open(get_fixture_path('file.sourcemap.js'), 'rb'))
ReleaseFile.objects.create(
name=u'http://example.com/{}'.format(f_sourcemap.name),
release=release,
organization_id=project.organization_id,
file=f_sourcemap,
)
data = {
'message': 'hello',
'platform': 'javascript',
'release': 'abc',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/file.min.js?foo=bar',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
}, {
'abs_path': 'http://example.com/file.min.js?foo=bar',
'filename': 'file.min.js',
'lineno': 1,
'colno': 79,
}
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert 'errors' not in event.data
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
assert frame.pre_context == [
'function add(a, b) {',
'\t"use strict";',
]
assert frame.context_line == u'\treturn a + b; // fôo'
assert frame.post_context == ['}', '']
frame = frame_list[1]
assert frame.pre_context == [
'function multiply(a, b) {',
'\t"use strict";',
]
assert frame.context_line == '\treturn a * b;'
assert frame.post_context == [
'}', 'function divide(a, b) {', '\t"use strict";', u'\ttry {',
'\t\treturn multiply(add(a, b), a, b) / c;'
]
@responses.activate
def test_expansion_via_distribution_release_artifacts(self):
project = self.project
release = Release.objects.create(
organization_id=project.organization_id,
version='abc',
)
release.add_project(project)
dist = release.add_dist('foo')
# file.min.js
# ------------
f_minified = File.objects.create(
name='file.min.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f_minified.putfile(open(get_fixture_path('file.min.js'), 'rb'))
# Intentionally omit hostname - use alternate artifact path lookup instead
# /file1.js vs http://example.com/file1.js
ReleaseFile.objects.create(
name=u'~/{}?foo=bar'.format(f_minified.name),
release=release,
dist=dist,
organization_id=project.organization_id,
file=f_minified,
)
# file1.js
# ---------
f1 = File.objects.create(
name='file1.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f1.putfile(open(get_fixture_path('file1.js'), 'rb'))
ReleaseFile.objects.create(
name=u'http://example.com/{}'.format(f1.name),
release=release,
dist=dist,
organization_id=project.organization_id,
file=f1,
)
# file2.js
# ----------
f2 = File.objects.create(
name='file2.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f2.putfile(open(get_fixture_path('file2.js'), 'rb'))
ReleaseFile.objects.create(
name=u'http://example.com/{}'.format(f2.name),
release=release,
dist=dist,
organization_id=project.organization_id,
file=f2,
)
# To verify that the full url has priority over the relative url,
# we also add a second ReleaseFile alias for file2.js (f2_empty) w/o
# hostname that points to an empty file. If the processor chooses
# this empty file over the correct file2.js, it will not locate
# context for the 2nd frame.
f2_empty = File.objects.create(
name='empty.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f2_empty.putfile(open(get_fixture_path('empty.js'), 'rb'))
ReleaseFile.objects.create(
name=u'~/{}'.format(f2.name), # intentionally using f2.name ("file2.js")
release=release,
dist=dist,
organization_id=project.organization_id,
file=f2_empty,
)
# sourcemap
# ----------
f_sourcemap = File.objects.create(
name='file.sourcemap.js',
type='release.file',
headers={'Content-Type': 'application/json'},
)
f_sourcemap.putfile(open(get_fixture_path('file.sourcemap.js'), 'rb'))
ReleaseFile.objects.create(
name=u'http://example.com/{}'.format(f_sourcemap.name),
release=release,
dist=dist,
organization_id=project.organization_id,
file=f_sourcemap,
)
data = {
'message': 'hello',
'platform': 'javascript',
'release': 'abc',
'dist': 'foo',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/file.min.js?foo=bar',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
}, {
'abs_path': 'http://example.com/file.min.js?foo=bar',
'filename': 'file.min.js',
'lineno': 1,
'colno': 79,
}
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert 'errors' not in event.data
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
assert frame.pre_context == [
'function add(a, b) {',
'\t"use strict";',
]
assert frame.context_line == u'\treturn a + b; // fôo'
assert frame.post_context == ['}', '']
frame = frame_list[1]
assert frame.pre_context == [
'function multiply(a, b) {',
'\t"use strict";',
]
assert frame.context_line == '\treturn a * b;'
assert frame.post_context == [
'}', 'function divide(a, b) {', '\t"use strict";', u'\ttry {',
'\t\treturn multiply(add(a, b), a, b) / c;'
]
@responses.activate
def test_sourcemap_expansion_with_missing_source(self):
"""
Tests a successful sourcemap expansion that points to source files
that are not found.
"""
responses.add(
responses.GET,
'http://example.com/file.min.js',
body=load_fixture('file.min.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/file.sourcemap.js',
body=load_fixture('file.sourcemap.js'),
content_type='application/json; charset=utf-8'
)
responses.add(responses.GET, 'http://example.com/file1.js', body='Not Found', status=404)
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
# Add two frames. We only want to see the
# error once though.
'frames': [
{
'abs_path': 'http://example.com/file.min.js',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
},
{
'abs_path': 'http://example.com/file.min.js',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
},
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert event.data['errors'] == [
{
'url': u'http://example.com/file1.js',
'type': 'fetch_invalid_http_code',
'value': 404
}
]
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
frame = frame_list[0]
# no context information ...
assert not frame.pre_context
assert not frame.context_line
assert not frame.post_context
# ... but line, column numbers are still correctly mapped
assert frame.lineno == 3
assert frame.colno == 9
@responses.activate
def test_failed_sourcemap_expansion(self):
"""
Tests attempting to parse an indexed source map where each section has a "url"
property - this is unsupported and should fail.
"""
responses.add(
responses.GET,
'http://example.com/unsupported.min.js',
body=load_fixture('unsupported.min.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/unsupported.sourcemap.js',
body=load_fixture('unsupported.sourcemap.js'),
content_type='application/json; charset=utf-8'
)
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/unsupported.min.js',
'filename': 'indexed.min.js',
'lineno': 1,
'colno': 39,
},
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert event.data['errors'] == [
{
'url': u'http://example.com/unsupported.sourcemap.js',
'type': 'js_invalid_source'
}
]
def test_failed_sourcemap_expansion_data_url(self):
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'data:application/javascript,base46,asfasf',
'filename': 'indexed.min.js',
'lineno': 1,
'colno': 39,
},
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert event.data['errors'] == [{'url': u'<data url>', 'type': 'js_no_source'}]
@responses.activate
def test_failed_sourcemap_expansion_missing_location_entirely(self):
responses.add(
responses.GET,
'http://example.com/indexed.min.js',
body='//# sourceMappingURL=indexed.sourcemap.js',
)
responses.add(
responses.GET,
'http://example.com/indexed.sourcemap.js',
body='{}'
)
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/indexed.min.js',
'filename': 'indexed.min.js',
'lineno': 1,
'colno': 1,
},
{
'abs_path': 'http://example.com/indexed.min.js',
'filename': 'indexed.min.js',
},
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert 'errors' not in event.data
@responses.activate
def test_html_response_for_js(self):
responses.add(
responses.GET,
'http://example.com/file1.js',
body=' <!DOCTYPE html><html><head></head><body></body></html>'
)
responses.add(
responses.GET,
'http://example.com/file2.js',
body='<!doctype html><html><head></head><body></body></html>'
)
responses.add(
responses.GET,
'http://example.com/file.html',
body=(
'<!doctype html><html><head></head><body><script>/*legit case*/</script></body></html>'
)
)
data = {
'message': 'hello',
'platform': 'javascript',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'http://example.com/file1.js',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
},
{
'abs_path': 'http://example.com/file2.js',
'filename': 'file.min.js',
'lineno': 1,
'colno': 39,
},
{
'abs_path': 'http://example.com/file.html',
'filename': 'file.html',
'lineno': 1,
'colno': 1,
},
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
assert event.data['errors'] == [
{
'url': u'http://example.com/file1.js',
'type': 'js_invalid_content'
}, {
'url': u'http://example.com/file2.js',
'type': 'js_invalid_content'
}
]
def test_node_processing(self):
project = self.project
release = Release.objects.create(
organization_id=project.organization_id,
version='nodeabc123',
)
release.add_project(project)
f_minified = File.objects.create(
name='dist.bundle.js',
type='release.file',
headers={'Content-Type': 'application/javascript'},
)
f_minified.putfile(open(get_fixture_path('dist.bundle.js'), 'rb'))
ReleaseFile.objects.create(
name=u'~/{}'.format(f_minified.name),
release=release,
organization_id=project.organization_id,
file=f_minified,
)
f_sourcemap = File.objects.create(
name='dist.bundle.js.map',
type='release.file',
headers={'Content-Type': 'application/javascript'},
)
f_sourcemap.putfile(open(get_fixture_path('dist.bundle.js.map'), 'rb'))
ReleaseFile.objects.create(
name=u'~/{}'.format(f_sourcemap.name),
release=release,
organization_id=project.organization_id,
file=f_sourcemap,
)
data = {
'message': 'hello',
'platform': 'node',
'release': 'nodeabc123',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'filename': 'app:///dist.bundle.js',
'function': 'bar',
'lineno': 9,
'colno': 2321,
},
{
'filename': 'app:///dist.bundle.js',
'function': 'foo',
'lineno': 3,
'colno': 2308,
},
{
'filename': 'app:///dist.bundle.js',
'function': 'App',
'lineno': 3,
'colno': 1011,
},
{
'filename': 'app:///dist.bundle.js',
'function': 'Object.<anonymous>',
'lineno': 1,
'colno': 1014,
},
{
'filename': 'app:///dist.bundle.js',
'function': '__webpack_require__',
'lineno': 20,
'colno': 30,
},
{
'filename': 'app:///dist.bundle.js',
'function': '<unknown>',
'lineno': 18,
'colno': 63,
}
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
assert len(frame_list) == 6
import pprint
pprint.pprint(frame_list[0].__dict__)
pprint.pprint(frame_list[1].__dict__)
pprint.pprint(frame_list[2].__dict__)
pprint.pprint(frame_list[3].__dict__)
pprint.pprint(frame_list[4].__dict__)
pprint.pprint(frame_list[5].__dict__)
assert frame_list[0].abs_path == 'webpack:///webpack/bootstrap d9a5a31d9276b73873d3'
assert frame_list[0].function == 'bar'
assert frame_list[0].lineno == 8
assert frame_list[1].abs_path == 'webpack:///webpack/bootstrap d9a5a31d9276b73873d3'
assert frame_list[1].function == 'foo'
assert frame_list[1].lineno == 2
assert frame_list[2].abs_path == 'webpack:///webpack/bootstrap d9a5a31d9276b73873d3'
assert frame_list[2].function == 'App'
assert frame_list[2].lineno == 2
assert frame_list[3].abs_path == 'app:///dist.bundle.js'
assert frame_list[3].function == 'Object.<anonymous>'
assert frame_list[3].lineno == 1
assert frame_list[4].abs_path == 'webpack:///webpack/bootstrap d9a5a31d9276b73873d3'
assert frame_list[4].function == '__webpack_require__'
assert frame_list[4].lineno == 19
assert frame_list[5].abs_path == 'webpack:///webpack/bootstrap d9a5a31d9276b73873d3'
assert frame_list[5].function == '<unknown>'
assert frame_list[5].lineno == 16
@responses.activate
def test_no_fetch_from_http(self):
responses.add(
responses.GET,
'http://example.com/node_app.min.js',
body=load_fixture('node_app.min.js'),
content_type='application/javascript; charset=utf-8'
)
responses.add(
responses.GET,
'http://example.com/node_app.min.js.map',
body=load_fixture('node_app.min.js.map'),
content_type='application/javascript; charset=utf-8'
)
data = {
'message': 'hello',
'platform': 'node',
'exception': {
'values': [
{
'type': 'Error',
'stacktrace': {
'frames': [
{
'abs_path': 'node_bootstrap.js',
'filename': 'node_bootstrap.js',
'lineno': 1,
'colno': 38,
},
{
'abs_path': 'timers.js',
'filename': 'timers.js',
'lineno': 1,
'colno': 39,
},
{
'abs_path': 'webpack:///internal',
'filename': 'internal',
'lineno': 1,
'colno': 43,
},
{
'abs_path': 'webpack:///~/some_dep/file.js',
'filename': 'file.js',
'lineno': 1,
'colno': 41,
},
{
'abs_path': 'webpack:///./node_modules/file.js',
'filename': 'file.js',
'lineno': 1,
'colno': 42,
},
{
'abs_path': 'http://example.com/node_app.min.js',
'filename': 'node_app.min.js',
'lineno': 1,
'colno': 40,
},
],
},
}
],
}
}
resp = self._postWithHeader(data)
assert resp.status_code == 200
event = Event.objects.get()
exception = event.interfaces['exception']
frame_list = exception.values[0].stacktrace.frames
# This one should not be processed, so raw_stacktrace should be None.
assert exception.values[0].raw_stacktrace is None
# None of the in_app flags should be updated
for x in range(6):
assert not frame_list[x].in_app
| 35.533473 | 190 | 0.435659 | 4,328 | 50,955 | 5.002542 | 0.101201 | 0.03404 | 0.043324 | 0.026604 | 0.809847 | 0.768556 | 0.743892 | 0.718627 | 0.6917 | 0.665558 | 0 | 0.01937 | 0.442763 | 50,955 | 1,433 | 191 | 35.558269 | 0.74315 | 0.037955 | 0 | 0.616952 | 0 | 0.00815 | 0.221991 | 0.034634 | 0 | 0 | 0 | 0 | 0.100245 | 1 | 0.017115 | false | 0 | 0.007335 | 0.000815 | 0.026895 | 0.005705 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0a6f9977b90c7e9a58126426c4ad1df217dd07d4 | 29 | py | Python | fastzy/__init__.py | letcommerce/fastzy | 62b957a7ebd28706b7bb745c88e6b4cd5221fc7b | [
"MIT"
] | 32 | 2020-02-10T15:11:06.000Z | 2021-12-30T10:09:36.000Z | fastzy/__init__.py | wavenator/fastzy | 62b957a7ebd28706b7bb745c88e6b4cd5221fc7b | [
"MIT"
] | null | null | null | fastzy/__init__.py | wavenator/fastzy | 62b957a7ebd28706b7bb745c88e6b4cd5221fc7b | [
"MIT"
] | 3 | 2020-04-05T03:54:36.000Z | 2022-03-12T12:28:05.000Z | from .fastzy import Searcher
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
6a5a4f85601f1503cbc4825af84c35e15b8f5b39 | 39 | py | Python | usr/local/lib/python3.6/dist-packages/html2text/__main__.py | threefoldtech/threebot_prebuilt | 1f0e1c65c14cef079cd80f73927d7c8318755c48 | [
"Apache-2.0"
] | null | null | null | usr/local/lib/python3.6/dist-packages/html2text/__main__.py | threefoldtech/threebot_prebuilt | 1f0e1c65c14cef079cd80f73927d7c8318755c48 | [
"Apache-2.0"
] | null | null | null | usr/local/lib/python3.6/dist-packages/html2text/__main__.py | threefoldtech/threebot_prebuilt | 1f0e1c65c14cef079cd80f73927d7c8318755c48 | [
"Apache-2.0"
] | null | null | null | from html2text.cli import main
main()
| 9.75 | 30 | 0.769231 | 6 | 39 | 5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.153846 | 39 | 3 | 31 | 13 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
6a6fdad40ed478084a90a74c87aabad049961528 | 53,848 | py | Python | morra/morph_parser2.py | steysie/morra | 556371dbc6f96bb2735aaa556dd6fd5653655644 | [
"BSD-3-Clause"
] | null | null | null | morra/morph_parser2.py | steysie/morra | 556371dbc6f96bb2735aaa556dd6fd5653655644 | [
"BSD-3-Clause"
] | null | null | null | morra/morph_parser2.py | steysie/morra | 556371dbc6f96bb2735aaa556dd6fd5653655644 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Morra project: Morphological parser 2
#
# Copyright (C) 2020-present by Sergei Ternovykh
# License: BSD, see LICENSE for details
"""
Take the results of the forward and backward parsers and make a refining
parse based on them.
"""
from collections import OrderedDict
from copy import deepcopy
import pickle
import random
from random import randint, random as rand
import sys
from corpuscula.utils import LOG_FILE, print_progress
from morra.base_parser import _AveragedPerceptron
from morra.features2 import Features2
from morra.morph_parser import MorphParser
class MorphParser2(MorphParser):
def __init__(self, features='RU',
guess_pos=None, guess_lemma=None, guess_feat=None):
super().__init__(
guess_pos=guess_pos, guess_lemma=guess_lemma, guess_feat=guess_feat
)
self.features = Features2(lang=features) \
if isinstance(features, str) else features
self._pos2_model = None
self._feats2_model = None
self._feats2_models = {}
def backup(self):
"""Get current state"""
o = super().backup()
o.update({'pos2_model_weights' : self._pos2_model.weights
if self._pos2_model else
None,
'feats2_model_weights' : self._feats2_model.weights
if self._feats2_model else
None,
'feats2_models_weights': {
x: y.weights for x, y in self._feats2_models.items()
}})
return o
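# Illustrative backup/restore round-trip (a minimal sketch; `mp` is a
# placeholder for a trained MorphParser2 instance):
#
#     state = mp.backup()
#     mp2 = MorphParser2()
#     mp2.restore(state)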
def restore(self, o):
"""Restore current state from backup object"""
super().restore(o)
(pos2_model_weights ,
feats2_model_weights ,
feats2_models_weights) = [o.get(x) for x in ['pos2_model_weights' ,
'feats2_model_weights' ,
'feats2_models_weights']]
if pos2_model_weights:
self._pos2_model = _AveragedPerceptron()
self._pos2_model.weights = pos2_model_weights
else:
self._pos2_model = None
if feats2_model_weights:
self._feats2_model = _AveragedPerceptron()
self._feats2_model.weights = feats2_model_weights
else:
self._feats2_model = None
self._feats2_models = {}
if feats2_models_weights:
for feat, weights in feats2_models_weights.items():
model = self._feats2_models[feat] = _AveragedPerceptron()
model.weights = weights
def _save_pos2_model(self, file_path):
with open(file_path, 'wb') as f:
pickle.dump(self._pos2_model.weights if self._pos2_model else
None, f, 2)
def _load_pos2_model(self, file_path):
with open(file_path, 'rb') as f:
weights = pickle.load(f)
if weights:
self._pos2_model = _AveragedPerceptron()
self._pos2_model.weights = weights
else:
self._pos2_model = None
def _save_feats2_model(self, file_path):
with open(file_path, 'wb') as f:
pickle.dump(self._feats2_model.weights if self._feats2_model else
None, f, 2)
def _load_feats2_model(self, file_path):
with open(file_path, 'rb') as f:
weights = pickle.load(f)
if weights:
self._feats2_model = _AveragedPerceptron()
self._feats2_model.weights = weights
else:
self._feats2_model = None
def _save_feats2_models(self, file_path, feat=None):
with open(file_path, 'wb') as f:
pickle.dump(
(feat, self._feats2_models[feat].weights) if feat else
{x: y.weights for x, y in self._feats2_models.items()},
f, 2
)
def _load_feats2_models(self, file_path):
with open(file_path, 'rb') as f:
o = pickle.load(f)
if isinstance(o, tuple):
feat, weights = o
model = self._feats2_models[feat] = _AveragedPerceptron()
model.weights = weights
else:
models = self._feats2_models = {}
for feat, weights in o.items():
model = models[feat] = _AveragedPerceptron()
model.weights = weights
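# These private helpers persist raw perceptron weights via pickle: a
# single-feat dump is stored as a (feat, weights) tuple, a full dump as
# a {feat: weights} dict, which is why _load_feats2_models() branches
# on isinstance(o, tuple).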
def predict_pos2(self, sentence, with_backoff=True, max_repeats=0,
inplace=True):
"""Tag the *sentence* with the POS-2 tagger.
:param sentence: sentence in Parsed CONLL-U format
:type sentence: list(dict)
:param with_backoff: if the result of the tagger differs from both base
taggers, fall back to one of the base predictions
using simple heuristics
:param max_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 (default) means
one repeat - only for tokens where POS-1 taggers
don't concur
:type max_repeats: int
:param inplace: if True, method changes and returns the given sentence
itself; elsewise, new sentence will be created
:return: tagged *sentence* in Parsed CONLL-U format
"""
cdict = self._cdict
model = self._pos2_model
assert model, 'ERROR: Use train_pos2() prior to preparing the POS-2 tagger'
if not inplace:
sentence = deepcopy(sentence)
sent = sentence[0] if isinstance(sentence, tuple) else sentence
predict = self.predict_pos_ if hasattr(self, 'predict_pos_') else \
self.predict_pos
sent = predict(sent, rev=False, inplace=True)
sent_rev = predict(sent, rev=True, inplace=False)
tokens_straight = [(x['FORM'], x['UPOS'])
for x in sent
if x['FORM'] and x['UPOS']
and '-' not in x['ID']]
tokens_rev = [(x['FORM'], x['UPOS'])
for x in sent_rev
if x['FORM'] and x['UPOS']
and '-' not in x['ID']]
context, pos_context_straight = \
[list(x) for x in zip(*[t for t in tokens_straight])] \
if tokens_straight else \
[[]] * 2
pos_context_rev = [t[1] for t in tokens_rev]
# The reversed model is better for the initial word (capital letter?)
tokens_ = [[t[0], None] for t in tokens_rev][:2] \
+ [[t[0], None] for t in tokens_straight][2:]
pos_context = pos_context_rev[:2] + pos_context_straight[2:]
###
changes = len(sent) + 1
i_ = 1
while True:
changes_prev = changes
changes = 0
pos_context_straight_i = iter(pos_context_straight)
pos_context_rev_i = iter(pos_context_rev )
i = 0
for token in sent:
wform = token['FORM']
if wform and '-' not in token['ID']:
pos_straight = next(pos_context_straight_i)
pos_rev = next(pos_context_rev_i )
if pos_straight != pos_rev or (not with_backoff
and max_repeats > 0):
guess, coef = cdict.predict_tag(wform, isfirst=i == 0)
if self._guess_pos:
guess, coef = self._guess_pos(guess, coef, i,
tokens_, cdict)
if guess is None or coef < 1.:
features = self.features.get_pos2_features(
i, context, pos_context
)
guess = model.predict(
features#, suggest=guess, suggest_coef=coef
)
if with_backoff and guess not in [pos_straight,
pos_rev]:
guess = pos_context[i]
if guess != token['UPOS']:
changes += 1
token['UPOS'] = tokens_[i][1] = pos_context[i] = guess
i += 1
if with_backoff or changes == 0:
break
elif changes > changes_prev:
for token, token_prev in zip(sent, sent_prev):
token['UPOS'] = token_prev['UPOS']
break
if i_ >= max_repeats:
break
sent_prev = deepcopy(sent)
i_ += 1
return sentence
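# Minimal usage sketch (illustrative; `mp` and `sent` are placeholders
# for a trained MorphParser2 and a sentence in Parsed CONLL-U format):
#
#     sent = mp.predict_pos2(sent, with_backoff=False, max_repeats=2)
#
# With with_backoff=False and max_repeats > 0, the refining loop keeps
# re-predicting until changes stop diminishing or the limit is reached.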
def predict_feats2(self, sentence, joint=False, with_backoff=True,
max_repeats=0, feat=None, inplace=True):
"""Tag the *sentence* with the FEATS-2 tagger.
:param sentence: sentence in Parsed CONLL-U format; UPOS and LEMMA
fields must already be filled
:type sentence: list(dict)
:param joint: if True, use joint FEATS-2 model; elsewise, use separate
models (default)
:param with_backoff: if the result of the tagger differs from both base
taggers, fall back to one of the base predictions
using simple heuristics
:param max_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 (default) means
one repeat - only for tokens where FEATS-1 taggers
don't concur
:type max_repeats: int
:param feat: name of the feat to tag; if None, then all possible feats
will be tagged
:type feat: str
:param inplace: if True, method changes and returns the given sentence
itself; elsewise, new sentence will be created
:return: tagged *sentence* in Parsed CONLL-U format
"""
return (
self._predict_feats2_joint if joint else
self._predict_feats2_separate
)(
sentence, with_backoff=with_backoff, max_repeats=max_repeats,
feat=feat, inplace=inplace
)
def _predict_feats2_separate(self, sentence, with_backoff=True,
max_repeats=0, feat=None, inplace=True):
cdict = self._cdict
models = self._feats2_models
assert models, \
'ERROR: Use train_feats2(joint=False) prior to preparing ' \
'the FEATS-2 tagger'
if not inplace:
sentence = deepcopy(sentence)
sent = sentence[0] if isinstance(sentence, tuple) else sentence
if not feat:
for token in sent:
token['FEATS'] = OrderedDict()
for feat in cdict.get_feats():
self._predict_feats2_separate(sent, with_backoff=with_backoff,
max_repeats=max_repeats,
feat=feat, inplace=True)
else:
default_val = '_'
model = models[feat]
val_cnt = len(cdict.get_feats()[feat]) - 1
sent = self._predict_feats_separate(
sent, rev=False, feat=feat, inplace=True
)
sent_rev = self._predict_feats_separate(
sent, rev=True, feat=feat, inplace=False
)
tokens = [(x['FORM'], x['LEMMA'], x['UPOS'], x['FEATS'])
for x in sent
if x['FORM'] and x['LEMMA'] and x['UPOS']
and '-' not in x['ID']]
tokens_rev = [(x['FORM'], x['LEMMA'], x['UPOS'], x['FEATS'])
for x in sent_rev
if x['FORM'] and x['LEMMA'] and x['UPOS']
and '-' not in x['ID']]
context, lemma_context, pos_context, feats_context_straight = \
[list(x) for x in zip(*[t for t in tokens])] if tokens else \
[[]] * 4
feats_context_rev = [t[3] for t in tokens_rev]
# Get straight version as backoff
tokens_ = [[*t[:3], None] for t in tokens]
feats_context = deepcopy(feats_context_straight)
###
changes = len(tokens) + 1
i_ = 1
while True:
changes_prev = changes
changes = 0
feats_context_straight_i = iter(feats_context_straight)
feats_context_rev_i = iter(feats_context_rev )
for i, (wform, lemma, pos, feats) in enumerate(tokens):
feat_val_straight = next(feats_context_straight_i) \
.get(feat, default_val)
feat_val_rev = next(feats_context_rev_i ) \
.get(feat, default_val)
if feat_val_straight != feat_val_rev or (not with_backoff
and max_repeats > 0):
guess, coef = \
cdict.predict_feat(feat, wform, lemma, pos)
if self._guess_feat:
guess, coef = \
self._guess_feat(guess, coef, i, feat,
tokens_, cdict)
if coef is not None and guess is None:
guess = default_val
if coef != 1.:
features = self.features.get_feat2_features(
i, feat, context,
lemma_context, pos_context, feats_context,
False, val_cnt
)
guess = model.predict(
features, suggest=guess, suggest_coef=coef
)
if with_backoff and guess not in [feat_val_rev,
feat_val_straight]:
guess = feats_context[i].get(feat, default_val)
if guess != feats.get(feat, default_val):
changes += 1
tokens_[i][3] = guess
if guess != default_val:
feats[feat] = feats_context[i][feat] = guess
else:
feats.pop(feat, None)
feats_context[i].pop(feat, None)
if with_backoff or changes == 0:
break
elif changes > changes_prev:
# roll back feats to the previous iteration's snapshot
for token, token_prev in zip(tokens, tokens_prev):
token[3].clear()
token[3].update(token_prev[3])
break
if i_ >= max_repeats:
break
tokens_prev = deepcopy(tokens)
i_ += 1
return sentence
def _predict_feats2_joint(self, sentence, with_backoff=True, feat=None,
max_repeats=0, inplace=True):
assert not feat, 'ERROR: feat must be None with joint=True'
cdict = self._cdict
model = self._feats2_model
assert model, \
'ERROR: Use train_feats2(joint=True) prior to preparing ' \
'the FEATS-2 tagger'
if not inplace:
sentence = deepcopy(sentence)
sent = sentence[0] if isinstance(sentence, tuple) else sentence
sent = self._predict_feats_joint(sent, rev=False, inplace=True)
sent_rev = self._predict_feats_joint(sent, rev=True, inplace=False)
tokens = [(x['FORM'], x['LEMMA'], x['UPOS'], x['FEATS'])
for x in sent
if x['FORM'] and x['LEMMA'] and x['UPOS']
and '-' not in x['ID']]
tokens_rev = [(x['FORM'], x['LEMMA'], x['UPOS'], x['FEATS'])
for x in sent_rev
if x['FORM'] and x['LEMMA'] and x['UPOS']
and '-' not in x['ID']]
context, lemma_context, pos_context, feats_context_straight = \
[list(x) for x in zip(*[t for t in tokens])] if tokens else \
[[]] * 4
feats_context_rev = [t[3] for t in tokens_rev]
# The reversed model is better for the initial word (capital letter?)
feats_context = deepcopy(feats_context_rev[:1]
+ feats_context_straight[1:])
###
changes = len(feats_context_straight) + 1
i_ = 1
while True:
changes_prev = changes
changes = 0
feats_context_rev_i = iter(feats_context_rev )
feats_context_straight_i = iter(feats_context_straight)
for i, feats in enumerate(feats_context_straight):
feats_rev = next(feats_context_rev_i)
feat_vals_rev = \
'|'.join('='.join((x, feats_rev[x]))
for x in sorted(feats_rev))
feats_straight = next(feats_context_straight_i)
feat_vals_straight = \
'|'.join('='.join((x, feats_straight[x]))
for x in sorted(feats_straight))
if feat_vals_rev != feat_vals_straight or (not with_backoff
and max_repeats > 0):
features = self.features.get_feat2_features(
i, None,
context, lemma_context, pos_context, feats_context,
True, 0
)
guess = model.predict(features)
feats_ctx = '|'.join('='.join((x, feats_context[i][x]))
for x in sorted(feats_context[i]))
if with_backoff and guess not in [feat_vals_rev,
feat_vals_straight]:
guess = feats_ctx
elif guess != feats_ctx:
changes += 1
feats.clear()
feats_ctx = feats_context[i]
feats_ctx.clear()
if guess:
for feat, val in [t.split('=') for t in guess.split('|')]:
feats[feat] = feats_ctx[feat] = val
if with_backoff or changes == 0:
break
elif changes > changes_prev:
for feats, feats_prev in zip(feats_context_straight,
feats_context_straight_prev):
feats.clear()
feats.update(feats_prev)
break
if i_ >= max_repeats:
break
feats_context_straight_prev = deepcopy(feats_context_straight)
i_ += 1
return sentence
def predict2(self, sentence, pos_backoff=True, pos_repeats=0,
feats_joint=False, feats_backoff=True, feats_repeats=0,
inplace=True):
"""Tag the *sentence* with the all available taggers.
:param sentence: sentence in Parsed CONLL-U format
:type sentence: list(dict)
:param pos_backoff: if the result of the POS-2 tagger differs from both
its base taggers, fall back to one of the base
predictions using simple heuristics
:param pos_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 means one
repeat - only for tokens where POS-1 taggers
don't concur
:type pos_repeats: int
:param feats_joint: if True, use joint model; elsewise, use separate
models (default)
:param feats_backoff: if the result of the FEATS-2 tagger differs from
both its base taggers, fall back to one of the
base predictions using simple heuristics
:param feats_repeats: repeat a prediction step based on the previous
one while changes in prediction are diminishing
and ``max_repeats`` is not reached. 0 (default)
means one repeat - only for tokens where FEATS-1
taggers don't concur
:type feats_repeats: int
:param inplace: if True, method changes and returns the given sentence
itself; elsewise, new sentence will be created
:return: tagged *sentence* in Parsed CONLL-U format
"""
return \
self.predict_feats2(
self.predict_lemma(
self.predict_pos2(
sentence, with_backoff=pos_backoff,
max_repeats=pos_repeats, inplace=inplace
),
inplace=inplace
),
joint=feats_joint, with_backoff=feats_backoff,
max_repeats=feats_repeats, inplace=inplace
)
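# Illustrative end-to-end sketch (names are placeholders): the call
#
#     sent = mp.predict2(sent, pos_repeats=1, feats_repeats=1)
#
# chains predict_pos2() -> predict_lemma() -> predict_feats2() on one
# sentence, as the method body above shows.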
def predict_pos2_sents(self, sentences=None, with_backoff=True,
max_repeats=0, inplace=True, save_to=None):
"""Apply ``self.predict_pos2()`` to each element of *sentences*.
:param sentences: a name of file in CONLL-U format or list/iterator of
sentences in Parsed CONLL-U. If None, then loaded
test corpus is used
:param with_backoff: if the result of the tagger differs from both base
taggers, fall back to one of the base predictions
using simple heuristics
:param max_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 (default) means
one repeat - only for tokens where POS-1 taggers
don't concur
:type max_repeats: int
:param inplace: if True, method changes and returns the given
sentences themselves; elsewise, new list of sentences
will be created
:param save_to: if not None then the result will be saved to the file
with a specified name
:type save_to: str
"""
return self._predict_sents(
sentences,
lambda sentences:
(self.predict_pos2(
s, with_backoff=with_backoff, max_repeats=max_repeats,
inplace=inplace
)
for s in sentences),
save_to=save_to
)
def predict_feats2_sents(self, sentences=None, joint=False,
with_backoff=True, max_repeats=0, feat=None,
inplace=True, save_to=None):
"""Apply ``self.predict_feats2()`` to each element of *sentences*.
:param sentences: a name of file in CONLL-U format or list/iterator of
sentences in Parsed CONLL-U. If None, then loaded
test corpus is used
:param joint: if True, use joint FEATS-2 model; elsewise, use separate
models (default)
:param with_backoff: if the result of the tagger differs from both base
taggers, fall back to one of the base predictions
using simple heuristics
:param max_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 (default) means
one repeat - only for tokens where FEATS-1 taggers
don't concur
:type max_repeats: int
:param feat: name of the feat to tag; if None, then all feats will be
tagged
:type feat: str
:param inplace: if True, method changes and returns the given
sentences themselves; elsewise, the new list of
sentences will be created
:param save_to: if not None then the result will be saved to the file
with a specified name
:type save_to: str
"""
return self._predict_sents(
sentences,
lambda sentences:
(self.predict_feats2(
s, joint=joint, with_backoff=with_backoff,
max_repeats=max_repeats, feat=feat, inplace=inplace
)
for s in sentences),
save_to=save_to
)
def predict2_sents(self, sentences=None, pos_backoff=True, pos_repeats=0,
feats_joint=False, feats_backoff=True, feats_repeats=0,
inplace=True, save_to=None):
"""Apply ``self.predict2()`` to each element of *sentences*.
:param sentences: a name of file in CONLL-U format or list/iterator of
sentences in Parsed CONLL-U. If None, then loaded
test corpus is used
:param pos_backoff: if the result of the POS-2 tagger differs from both
its base taggers, fall back to one of the base
predictions using simple heuristics
:param pos_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 means one
repeat - only for tokens where POS-1 taggers
don't concur
:type pos_repeats: int
:param feats_joint: if True, use joint model; elsewise, use separate
models (default)
:param feats_backoff: if the result of the FEATS-2 tagger differs from
both its base taggers, fall back to one of the
base predictions using simple heuristics
:param feats_repeats: repeat a prediction step based on the previous
one while changes in prediction are diminishing
and ``max_repeats`` is not reached. 0 (default)
means one repeat - only for tokens where FEATS-1
taggers don't concur
:type feats_repeats: int
:param inplace: if True, method changes and returns the given
sentences themselves; elsewise, new list of sentences
will be created
:param save_to: if not None then the result will be saved to the file
with a specified name
:type save_to: str
"""
return self._predict_sents(
sentences,
lambda sentences:
(self.predict2(
s, pos_backoff=pos_backoff, pos_repeats=pos_repeats,
feats_joint=feats_joint, feats_backoff=feats_backoff,
feats_repeats=feats_repeats, inplace=inplace
)
for s in sentences),
save_to=save_to
)
def evaluate_pos2(self, gold=None, test=None, with_backoff=True,
max_repeats=0, pos=None, unknown_only=False,
silent=False):
"""Score the accuracy of the POS tagger against the *gold* standard.
Remove POS tags from the *gold* standard text, retag it using the
tagger, then compute the accuracy score. If *test* is not None, compute
the accuracy of the *test* corpus with respect to the *gold*.
:param gold: a corpus of tagged sentences to score the tagger on.
If *gold* is None then loaded test corpus is used
:param test: a corpus of tagged sentences to compare with *gold*
:param with_backoff: if the result of the tagger differs from both base
taggers, fall back to one of the base predictions
using simple heuristics
:param max_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 (default) means
one repeat - only for tokens where POS-1 taggers
don't concur
:type max_repeats: int
:param pos: name of the tag to evaluate the tagger; if None, then
tagger will be evaluated for all tags
:type pos: str
:param unknown_only: calculate accuracy score only for words that are
not present in train corpus
:param silent: suppress log
:return: accuracy score of the tagger against the gold
:rtype: float
"""
self.predict_pos_ = self.predict_pos
self.predict_pos = \
lambda sentence, rev=None, inplace=True: \
self.predict_pos2(sentence, with_backoff=with_backoff,
max_repeats=max_repeats, inplace=inplace)
res = self.evaluate_pos(gold=gold, test=test, pos=pos,
unknown_only=unknown_only, silent=silent)
self.predict_pos = self.predict_pos_
del self.predict_pos_
return res
def evaluate_feats2(self, gold=None, test=None, joint=False,
with_backoff=True, max_repeats=0,
feat=None, unknown_only=False, silent=False):
"""Score the accuracy of the FEATS-2 tagger against the *gold*
standard. Remove feats (or only one specified feat) from the *gold*
standard text, generate new feats using the tagger, then compute the
accuracy score. If *test* is not None, compute the accuracy of the
*test* corpus with respect to the *gold*.
:param gold: a corpus of tagged sentences to score the tagger on.
If *gold* is None then loaded test corpus is used
:param test: a corpus of tagged sentences to compare with *gold*
:param joint: if True, use joint FEATS-2 model; elsewise, use separate
models (default)
:param with_backoff: if the result of the tagger differs from both base
taggers, fall back to one of the base predictions
using simple heuristics
:param max_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 (default) means
one repeat - only for tokens where FEATS-1 taggers
don't concur
:type max_repeats: int
:param feat: name of the feat to evaluate the tagger; if None, then
tagger will be evaluated for all feats
:type feat: str
:param unknown_only: calculate accuracy score only for words that are
not present in train corpus
:param silent: suppress log
:return: accuracy scores of the tagger against the gold:
1. by tokens: the tagging of the whole token may be either
correct or not;
2. by tags: sum of correctly detected feats to sum of all
feats that are non-empty in either gold or retagged
sentence
:rtype: tuple(float, float)
"""
f = self.predict_feats
self.predict_feats = \
lambda sentence, joint=joint, rev=None, feat=feat, inplace=True: \
self.predict_feats2(sentence, joint=joint,
with_backoff=with_backoff,
max_repeats=max_repeats, feat=feat,
inplace=inplace)
res = self.evaluate_feats(gold=gold, test=test, joint=joint, feat=feat,
unknown_only=unknown_only, silent=silent)
self.predict_feats = f
return res
def evaluate2(self, gold=None, test=None, pos_backoff=True, pos_repeats=0,
feats_joint=False, feats_backoff=True, feats_repeats=0,
feat=None, unknown_only=False, silent=False):
"""Score a joint accuracy of the all available taggers against the
*gold* standard. Extract wforms from the *gold* standard text, retag it
using all the taggers, then compute a joint accuracy score. If *test*
is not None, compute the accuracy of the *test* corpus with respect to
the *gold*.
:param gold: a corpus of tagged sentences to score the tagger on.
If *gold* is None then loaded test corpus is used
:param test: a corpus of tagged sentences to compare with *gold*
:param pos_backoff: if the result of the POS-2 tagger differs from both
its base taggers, fall back to one of the base
predictions using simple heuristics
:param pos_repeats: repeat a prediction step based on the previous one
while changes in prediction are diminishing and
``max_repeats`` is not reached. 0 means one
repeat - only for tokens where POS-1 taggers
don't concur
:type pos_repeats: int
:param feats_joint: if True, use joint model; elsewise, use separate
models (default)
:param feats_backoff: if the result of the FEATS-2 tagger differs from
both its base taggers, fall back to one of the
base predictions using simple heuristics
:param feats_repeats: repeat a prediction step based on the previous
one while changes in prediction are diminishing
and ``max_repeats`` is not reached. 0 (default)
means one repeat - only for tokens where FEATS-1
taggers don't concur
:type feats_repeats: int
:param feat: name of the feat to evaluate the tagger; if None, then
tagger will be evaluated for all feats
:type feat: str
:param unknown_only: calculate accuracy score only for words that are
not present in train corpus
:param silent: suppress log
:return: joint accuracy scores of the taggers against the gold:
1. by tokens: the tagging of the whole token may be either
correct or not
2. by tags: sum of correctly detected tags to sum of all tags
that are non-empty in either gold or retagged sentences
:rtype: tuple(float, float)
"""
f = self.predict
self.predict = \
lambda sentence, pos_rev=None, \
feats_joint=feats_joint, feats_rev=None, inplace=False: \
self.predict2(
sentence, pos_backoff=pos_backoff, pos_repeats=pos_repeats,
feats_joint=feats_joint, feats_backoff=feats_backoff,
feats_repeats=feats_repeats, inplace=inplace
)
res = self.evaluate(gold=gold, test=test, feats_joint=feats_joint,
feat=feat, unknown_only=unknown_only,
silent=silent)
self.predict = f
return res
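# Illustrative evaluation sketch (assumes gold/test corpora are loaded;
# `mp` is a placeholder for a trained MorphParser2 instance):
#
#     acc_tokens, acc_tags = mp.evaluate2(pos_repeats=1, feats_repeats=1)
#
# which returns the joint accuracy by tokens and by tags, per the
# docstring above.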
def train_pos2(self, epochs=5, test_max_repeats=0, no_train_evals=True,
seed=None, dropout=None, context_dropout=None):
"""Train a POS-2 tagger from ``self._train_corpus``.
:param epochs: number of training iterations. If epochs < 0, then the
best model will be searched for based on evaluation of
the test corpus. The search stops when the results of the
next |epochs| iterations are worse than the best one.
It's allowed to specify epochs as a tuple of both
variants (positive and negative)
:type epochs: int|tuple(int, int)
:param test_max_repeats: parameter for ``evaluate_pos2()``
:type test_max_repeats: int
:param no_train_evals: don't make interim and final evaluations on the
training set (save time)
:param seed: init value for the random number generator
:type seed: int
:param dropout: a fraction of weiths to be randomly set to 0 at each
predict to prevent overfitting
:type dropout: float
:param context_dropout: a fraction of POS tags to be randomly replaced
after predict to random POS tags to prevent
overfitting
:type context_dropout: float
"""
cdict, corpus_len, progress_step, progress_check_step, \
epochs, epochs_ = self._train_init(epochs, seed)
        assert self._pos_model, \
            'ERROR: Run train_pos() first to prepare the POS tagger'
        assert self._pos_rev_model, \
            'ERROR: Run train_pos(rev=True) first to prepare the ' \
            'Reversed POS tagger'
model = self._pos2_model = \
_AveragedPerceptron(default_class=cdict.most_common_tag())
header = 'POS-2'
tags = sorted(cdict.get_tags())
last_tag_idx = len(tags) - 1
print(tags, file=LOG_FILE)
best_epoch, best_score, best_weights, eqs, bads, score = \
-1, -1, None, 0, 0, -1
epoch = 0
while True:
n = c = 0
td = fd = td2 = fd2 = tp = fp = 0
random.shuffle(self._train_corpus)
print('{} Epoch {}'.format(header, epoch), file=LOG_FILE)
for sent_no, sentence in enumerate(self._train_corpus):
if not sent_no % progress_check_step:
print_progress(sent_no, end_value=corpus_len,
step=progress_step)
tokens = [(x['FORM'], x['UPOS'])
for x in sentence
if x['FORM'] and '-' not in x['ID']]
context, pos_context = \
[list(x) for x in zip(*[t for t in tokens])] \
if tokens else \
[[]] * 2
tokens_ = [[t[0], None] for t in tokens]
for i, (wform, pos) in enumerate(tokens):
guess, coef = cdict.predict_tag(wform, isfirst=i == 0)
if self._guess_pos:
guess, coef = self._guess_pos(guess, coef, i,
tokens_, cdict)
if guess is not None:
if guess == pos:
td2 += 1
else:
fd2 += 1
if guess is None or coef < 1.:
features = self.features.get_pos2_features(
i, context, pos_context
)
guess = model.predict(
features,# suggest=guess, suggest_coef=coef,
dropout=dropout
)
if guess == pos:
tp += 1
else:
fp += 1
model.update(pos, guess, features)
elif guess == pos:
td += 1
else:
fd += 1
n += 1
c += guess == pos
tokens_[i][1] = pos_context[i] = \
guess if not context_dropout \
or rand() >= context_dropout else \
tags[randint(0, last_tag_idx)]
print_progress(sent_no + 1, end_value=corpus_len,
step=progress_step)
epoch, epochs, best_epoch, best_score, best_weights, \
eqs, bads, score = \
self._train_eval(
model, epoch, epochs, epochs_,
best_epoch, best_score, best_weights,
eqs, bads, score,
td, fd, td2, fd2, tp, fp, c, n, no_train_evals,
self.evaluate_pos2,
{'with_backoff': False, 'max_repeats': test_max_repeats}
)
if eqs == -1:
break
return self._train_done(
header, model, eqs, no_train_evals, self.evaluate_pos2,
{'with_backoff': False, 'max_repeats': test_max_repeats}
)
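    # Training order implied by the asserts above (sketch; ``tagger`` is an
    # instance of this class with a train corpus already loaded; the epochs
    # tuple value is illustrative only):
    #
    #     tagger.train_pos()                 # forward POS-1 model
    #     tagger.train_pos(rev=True)         # reversed POS-1 model
    #     tagger.train_pos2(epochs=(5, -3))  # POS-2 on top of both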
def train_feats2(self, joint=False, feat=None, epochs=5,
test_max_repeats=0, no_train_evals=True, seed=None,
dropout=None, context_dropout=None):
"""Train FEATS-2 taggers from ``self._train_corpus``.
        :param joint: if True, train the joint FEATS-2 model; otherwise, train
                      separate models (default)
        :param feat: name of the feat to train the tagger for; if None,
                     taggers will be trained for all feats
        :type feat: str
        :param epochs: number of training iterations. If epochs < 0, the best
                       model will be searched for based on evaluation of the
                       test corpus. The search stops when the results of the
                       next |epochs| iterations are all worse than the best
                       one. Epochs may also be specified as a tuple of both
                       variants (positive and negative)
        :type epochs: int|tuple(int, int)
        :param test_max_repeats: parameter for ``evaluate_feats2()``
        :type test_max_repeats: int
        :param no_train_evals: don't make interim and final evaluations on the
                               training set (saves time)
        :param seed: init value for the random number generator
        :type seed: int
        :param dropout: a fraction of weights to be randomly set to 0 at each
                        predict to prevent overfitting
        :type dropout: float
        :param context_dropout: a fraction of FEATS tags to be randomly
                                replaced with random FEATS tags after predict
                                to prevent overfitting
        :type context_dropout: float
"""
return (
self._train_feats2_joint if joint else
self._train_feats2_separate
)(
feat=feat, epochs=epochs,
no_train_evals=no_train_evals, test_max_repeats=test_max_repeats,
seed=seed, dropout=dropout, context_dropout=context_dropout
)
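    # Training order implied by the asserts in the helpers below (sketch;
    # parameter values are illustrative only):
    #
    #     tagger.train_feats()              # forward FEATS-1 models
    #     tagger.train_feats(rev=True)      # reversed FEATS-1 models
    #     tagger.train_feats2(joint=False)  # FEATS-2 on top of both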
def _train_feats2_separate(self, feat=None, epochs=5,
no_train_evals=True, test_max_repeats=0,
seed=None, dropout=None, context_dropout=None):
cdict, corpus_len, progress_step, progress_check_step, \
epochs, epochs_ = self._train_init(epochs, seed)
        assert self._feats_models, \
            'ERROR: Run train_feats() first to prepare the FEATS tagger'
        assert self._feats_rev_models, \
            'ERROR: Run train_feats(rev=True) first to prepare the ' \
            'Reversed FEATS tagger'
if feat:
models = self._feats2_models
else:
models = self._feats2_models = {}
default_val = '_'
feat_vals = cdict.get_feats()
if feat:
feat_vals = {feat: feat_vals[feat]}
for feat in sorted(feat_vals):
header = 'FEAT-2<<{}>>'.format(feat)
model = models[feat] = \
_AveragedPerceptron(default_class=default_val)
vals = sorted(feat_vals[feat])
last_val_idx = len(vals) - 1
print([x for x in vals if x != default_val], file=LOG_FILE)
best_epoch, best_score, best_weights, eqs, bads, score = \
-1, -1, None, 0, 0, -1
epoch = 0
while True:
n = c = 0
td = fd = td2 = fd2 = tp = fp = 0
random.shuffle(self._train_corpus)
print('{} Epoch {}'.format(header, epoch), file=LOG_FILE)
for sent_no, sentence in enumerate(self._train_corpus):
if not sent_no % progress_check_step:
print_progress(sent_no, end_value=corpus_len,
step=progress_step)
tokens = [(x['FORM'], x['LEMMA'], x['UPOS'], x['FEATS'])
for x in sentence
if x['FORM'] and x['LEMMA'] and x['UPOS']
and '-' not in x['ID']]
context, lemma_context, pos_context, feats_context = \
[list(x) for x in zip(*[t for t in tokens])] \
if tokens else \
[[]] * 4
tokens_ = [[*t[:3], None] for t in tokens]
for i, (wform, lemma, pos, feats) in enumerate(tokens):
gold_val = feats.get(feat, default_val)
guess, coef = \
self._cdict.predict_feat(feat,
wform, lemma, pos)
if self._guess_feat:
guess, coef = self._guess_feat(guess, coef, i,
feat, tokens_,
cdict)
if coef is not None:
if guess is None:
guess = default_val
if guess == gold_val:
td2 += 1
else:
fd2 += 1
if coef == 1.:
if guess == gold_val:
td += 1
else:
fd += 1
else:
features = self.features.get_feat2_features(
i, feat, context,
lemma_context, pos_context, feats_context,
False, last_val_idx
)
guess = model.predict(
features, suggest=guess, suggest_coef=coef,
dropout=dropout
)
if guess == gold_val:
tp += 1
else:
fp += 1
model.update(gold_val, guess, features)
if guess != default_val or gold_val != default_val:
n += 1
c += guess == gold_val
tokens_[i][3] = \
guess if not context_dropout \
or rand() >= context_dropout else \
vals[randint(0, last_val_idx)]
print_progress(sent_no + 1, end_value=corpus_len,
step=progress_step)
epoch, epochs, best_epoch, best_score, best_weights, \
eqs, bads, score = \
self._train_eval(
model, epoch, epochs, epochs_,
best_epoch, best_score, best_weights,
eqs, bads, score,
td, fd, td2, fd2, tp, fp, c, n, no_train_evals,
lambda **kwargs: self.evaluate_feats2(**kwargs)[1],
{'joint': False, 'with_backoff': False,
'max_repeats': test_max_repeats, 'feat': feat}
)
if eqs == -1:
break
res = self._train_done(
header, model, eqs, no_train_evals,
lambda **kwargs: self.evaluate_feats2(**kwargs)[1],
{'joint': False, 'with_backoff': False,
'max_repeats': test_max_repeats, 'feat': feat}
)
        # When no single feat was requested, evaluate all trained FEATS-2
        # models (assumption: a full separate-model evaluation of all feats
        # is the intended fallback here).
        return res if feat else \
               self.evaluate_feats2(joint=False, silent=True)
def _train_feats2_joint(self, feat=None, epochs=5,
no_train_evals=True, test_max_repeats=0,
seed=None, dropout=None, context_dropout=None):
cdict, corpus_len, progress_step, progress_check_step, \
epochs, epochs_ = self._train_init(epochs, seed)
assert not feat, 'ERROR: feat must be None with joint=True'
assert not context_dropout, \
'ERROR: context_dropout must be None with joint=True'
        assert self._feats_model, \
            'ERROR: Run train_feats(joint=True) first to prepare the ' \
            'joint FEATS tagger'
        assert self._feats_rev_model, \
            'ERROR: Run train_feats(joint=True, rev=True) first to ' \
            'prepare the Reversed joint FEATS tagger'
model = self._feats2_model = _AveragedPerceptron(default_class='')
header = 'FEATS-2'
best_epoch, best_score, best_weights, eqs, bads, score = \
-1, -1, None, 0, 0, -1
epoch = 0
while True:
n = c = 0
td = fd = td2 = fd2 = tp = fp = 0
random.shuffle(self._train_corpus)
print('{} Epoch {}'.format(header, epoch), file=LOG_FILE)
for sent_no, sentence in enumerate(self._train_corpus):
if not sent_no % progress_check_step:
print_progress(sent_no, end_value=corpus_len,
step=progress_step)
tokens = [(x['FORM'], x['LEMMA'], x['UPOS'], x['FEATS'])
for x in sentence
if x['FORM'] and x['LEMMA'] and x['UPOS']
and '-' not in x['ID']]
context, lemma_context, pos_context, feats_context = \
[list(x) for x in zip(*[t for t in tokens])] \
if tokens else \
[[]] * 4
for i, feats in enumerate(feats_context):
gold = '|'.join('='.join((x, feats[x]))
for x in sorted(feats))
features = self.features.get_feat2_features(
i, None,
context, lemma_context, pos_context, feats_context,
True, 0
)
guess = model.predict(features, dropout=dropout)
model.update(gold, guess, features)
n += 1
c += guess == gold
print_progress(sent_no + 1, end_value=corpus_len,
step=progress_step)
epoch, epochs, best_epoch, best_score, best_weights, \
eqs, bads, score = \
self._train_eval(
model, epoch, epochs, epochs_,
best_epoch, best_score, best_weights,
eqs, bads, score,
td, fd, td2, fd2, tp, fp, c, n, no_train_evals,
lambda **kwargs: self.evaluate_feats2(**kwargs)[1],
{'joint': True, 'with_backoff': False,
'max_repeats': test_max_repeats}
)
if eqs == -1:
break
return self._train_done(
header, model, eqs, no_train_evals,
lambda **kwargs: self.evaluate_feats2(**kwargs)[1],
{'joint': True, 'with_backoff': False,
'max_repeats': test_max_repeats}
)
| 49.131387 | 82 | 0.500836 | 5,852 | 53,848 | 4.43985 | 0.058442 | 0.028481 | 0.00485 | 0.006466 | 0.810561 | 0.777192 | 0.733816 | 0.704565 | 0.679124 | 0.64487 | 0 | 0.010544 | 0.429375 | 53,848 | 1,095 | 83 | 49.176256 | 0.835031 | 0.288553 | 0 | 0.528926 | 0 | 0 | 0.038463 | 0.004391 | 0 | 0 | 0 | 0 | 0.016529 | 1 | 0.033058 | false | 0 | 0.013774 | 0 | 0.070248 | 0.016529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6aa583b198ba87b413d62288e3bf3a71075f4929 | 151 | py | Python | keeper/admin.py | avrilmaomao/lunakeeper | 95c55007e703edea0bc39455b4bf0ff60d4914d8 | [
"MIT"
] | 1 | 2021-09-01T01:29:20.000Z | 2021-09-01T01:29:20.000Z | keeper/admin.py | avrilmaomao/lunakeeper | 95c55007e703edea0bc39455b4bf0ff60d4914d8 | [
"MIT"
] | 1 | 2021-09-01T03:27:13.000Z | 2021-09-01T03:27:13.000Z | keeper/admin.py | avrilmaomao/lunakeeper | 95c55007e703edea0bc39455b4bf0ff60d4914d8 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Pony, History
# Register your models here.
admin.site.register(Pony)
admin.site.register(History) | 25.166667 | 33 | 0.807947 | 22 | 151 | 5.545455 | 0.545455 | 0.147541 | 0.278689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10596 | 151 | 6 | 34 | 25.166667 | 0.903704 | 0.172185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
6ac2d8bc98e3b55d56291f5a66a1b848fdef0da8 | 26,821 | py | Python | tests/hwsim/test_erp.py | zhijianli88/hostap | 6d49aeb76247c4145cb4f7c05afb7b35f27150c1 | [
"Unlicense"
] | null | null | null | tests/hwsim/test_erp.py | zhijianli88/hostap | 6d49aeb76247c4145cb4f7c05afb7b35f27150c1 | [
"Unlicense"
] | 1 | 2018-01-09T16:46:00.000Z | 2018-01-09T16:46:00.000Z | tests/hwsim/test_erp.py | zhijianli88/hostap | 6d49aeb76247c4145cb4f7c05afb7b35f27150c1 | [
"Unlicense"
] | null | null | null | # EAP Re-authentication Protocol (ERP) tests
# Copyright (c) 2014-2015, Jouni Malinen <j@w1.fi>
#
# This software may be distributed under the terms of the BSD license.
# See README for more details.
import binascii
import logging
logger = logging.getLogger()
import os
import time
import hostapd
from utils import HwsimSkip, alloc_fail, fail_test, wait_fail_trigger
from test_ap_eap import int_eap_server_params
from test_ap_psk import find_wpas_process, read_process_memory, verify_not_present, get_key_locations
def check_erp_capa(dev):
capab = dev.get_capability("erp")
if not capab or 'ERP' not in capab:
raise HwsimSkip("ERP not supported in the build")
def test_erp_initiate_reauth_start(dev, apdev):
"""Authenticator sending EAP-Initiate/Re-auth-Start, but ERP disabled on peer"""
params = hostapd.wpa2_eap_params(ssid="test-wpa2-eap")
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
hapd = hostapd.add_ap(apdev[0], params)
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP",
eap="PAX", identity="pax.user@example.com",
password_hex="0123456789abcdef0123456789abcdef",
scan_freq="2412")
def test_erp_enabled_on_server(dev, apdev):
"""ERP enabled on internal EAP server, but disabled on peer"""
params = int_eap_server_params()
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['eap_server_erp'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP",
eap="PAX", identity="pax.user@example.com",
password_hex="0123456789abcdef0123456789abcdef",
scan_freq="2412")
def test_erp(dev, apdev):
"""ERP enabled on server and peer"""
check_erp_capa(dev[0])
params = int_eap_server_params()
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['eap_server_erp'] = '1'
params['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP",
eap="PSK", identity="psk.user@example.com",
password_hex="0123456789abcdef0123456789abcdef",
erp="1", scan_freq="2412")
for i in range(3):
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
ev = dev[0].wait_event(["CTRL-EVENT-EAP-SUCCESS"], timeout=15)
if ev is None:
raise Exception("EAP success timed out")
if "EAP re-authentication completed successfully" not in ev:
raise Exception("Did not use ERP")
dev[0].wait_connected(timeout=15, error="Reconnection timed out")
def test_erp_server_no_match(dev, apdev):
"""ERP enabled on server and peer, but server has no key match"""
check_erp_capa(dev[0])
params = int_eap_server_params()
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['eap_server_erp'] = '1'
params['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
dev[0].request("ERP_FLUSH")
id = dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP",
eap="PSK", identity="psk.user@example.com",
password_hex="0123456789abcdef0123456789abcdef",
erp="1", scan_freq="2412")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
hapd.request("ERP_FLUSH")
dev[0].request("RECONNECT")
ev = dev[0].wait_event(["CTRL-EVENT-EAP-SUCCESS",
"CTRL-EVENT-EAP-FAILURE"], timeout=15)
if ev is None:
raise Exception("EAP result timed out")
if "CTRL-EVENT-EAP-SUCCESS" in ev:
raise Exception("Unexpected EAP success")
dev[0].request("DISCONNECT")
dev[0].select_network(id)
ev = dev[0].wait_event(["CTRL-EVENT-EAP-SUCCESS"], timeout=15)
if ev is None:
raise Exception("EAP success timed out")
if "EAP re-authentication completed successfully" in ev:
raise Exception("Unexpected use of ERP")
dev[0].wait_connected(timeout=15, error="Reconnection timed out")
def start_erp_as(apdev, erp_domain="example.com"):
params = { "ssid": "as", "beacon_int": "2000",
"radius_server_clients": "auth_serv/radius_clients.conf",
"radius_server_auth_port": '18128',
"eap_server": "1",
"eap_user_file": "auth_serv/eap_user.conf",
"ca_cert": "auth_serv/ca.pem",
"server_cert": "auth_serv/server.pem",
"private_key": "auth_serv/server.key",
"eap_sim_db": "unix:/tmp/hlr_auc_gw.sock",
"dh_file": "auth_serv/dh.conf",
"pac_opaque_encr_key": "000102030405060708090a0b0c0d0e0f",
"eap_fast_a_id": "101112131415161718191a1b1c1d1e1f",
"eap_fast_a_id_info": "test server",
"eap_server_erp": "1",
"erp_domain": erp_domain }
return hostapd.add_ap(apdev, params)
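# Sketch of how this helper is used by the tests below (assumption: apdev[1]
# hosts the ERP-capable RADIUS authentication server and apdev[0] the AP under
# test, which is pointed at the server via auth_server_port):
#
#     start_erp_as(apdev[1])
#     params['auth_server_port'] = "18128"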
def test_erp_radius(dev, apdev):
"""ERP enabled on RADIUS server and peer"""
check_erp_capa(dev[0])
start_erp_as(apdev[1])
params = hostapd.wpa2_eap_params(ssid="test-wpa2-eap")
params['auth_server_port'] = "18128"
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP",
eap="PSK", identity="psk.user@example.com",
password_hex="0123456789abcdef0123456789abcdef",
erp="1", scan_freq="2412")
for i in range(3):
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
ev = dev[0].wait_event(["CTRL-EVENT-EAP-SUCCESS"], timeout=15)
if ev is None:
raise Exception("EAP success timed out")
if "EAP re-authentication completed successfully" not in ev:
raise Exception("Did not use ERP")
dev[0].wait_connected(timeout=15, error="Reconnection timed out")
def erp_test(dev, hapd, **kwargs):
res = dev.get_capability("eap")
if kwargs['eap'] not in res:
logger.info("Skip ERP test with %s due to missing support" % kwargs['eap'])
return
hapd.dump_monitor()
dev.dump_monitor()
dev.request("ERP_FLUSH")
id = dev.connect("test-wpa2-eap", key_mgmt="WPA-EAP", erp="1",
scan_freq="2412", **kwargs)
dev.request("DISCONNECT")
dev.wait_disconnected(timeout=15)
hapd.dump_monitor()
dev.request("RECONNECT")
ev = dev.wait_event(["CTRL-EVENT-EAP-SUCCESS"], timeout=15)
if ev is None:
raise Exception("EAP success timed out")
if "EAP re-authentication completed successfully" not in ev:
raise Exception("Did not use ERP")
dev.wait_connected(timeout=15, error="Reconnection timed out")
ev = hapd.wait_event([ "AP-STA-CONNECTED" ], timeout=5)
if ev is None:
raise Exception("No connection event received from hostapd")
dev.request("DISCONNECT")
def test_erp_radius_eap_methods(dev, apdev):
"""ERP enabled on RADIUS server and peer"""
check_erp_capa(dev[0])
eap_methods = dev[0].get_capability("eap")
start_erp_as(apdev[1])
params = hostapd.wpa2_eap_params(ssid="test-wpa2-eap")
params['auth_server_port'] = "18128"
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
erp_test(dev[0], hapd, eap="AKA", identity="0232010000000000@example.com",
password="90dca4eda45b53cf0f12d7c9c3bc6a89:cb9cccc4b9258e6dca4760379fb82581:000000000123")
erp_test(dev[0], hapd, eap="AKA'", identity="6555444333222111@example.com",
password="5122250214c33e723a5dd523fc145fc0:981d464c7c52eb6e5036234984ad0bcf:000000000123")
erp_test(dev[0], hapd, eap="EKE", identity="erp-eke@example.com",
password="hello")
if "FAST" in eap_methods:
erp_test(dev[0], hapd, eap="FAST", identity="erp-fast@example.com",
password="password", ca_cert="auth_serv/ca.pem",
phase2="auth=GTC",
phase1="fast_provisioning=2",
pac_file="blob://fast_pac_auth_erp")
erp_test(dev[0], hapd, eap="GPSK", identity="erp-gpsk@example.com",
password="abcdefghijklmnop0123456789abcdef")
erp_test(dev[0], hapd, eap="IKEV2", identity="erp-ikev2@example.com",
password="password")
erp_test(dev[0], hapd, eap="PAX", identity="erp-pax@example.com",
password_hex="0123456789abcdef0123456789abcdef")
# TODO: PEAP (EMSK)
#if "MSCHAPV2" in eap_methods:
# erp_test(dev[0], hapd, eap="PEAP", identity="erp-peap@example.com",
# password="password", ca_cert="auth_serv/ca.pem",
# phase2="auth=MSCHAPV2")
erp_test(dev[0], hapd, eap="PSK", identity="erp-psk@example.com",
password_hex="0123456789abcdef0123456789abcdef")
if "PWD" in eap_methods:
erp_test(dev[0], hapd, eap="PWD", identity="erp-pwd@example.com",
password="secret password")
erp_test(dev[0], hapd, eap="SAKE", identity="erp-sake@example.com",
password_hex="0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef")
erp_test(dev[0], hapd, eap="SIM", identity="1232010000000000@example.com",
password="90dca4eda45b53cf0f12d7c9c3bc6a89:cb9cccc4b9258e6dca4760379fb82581")
erp_test(dev[0], hapd, eap="TLS", identity="erp-tls@example.com",
ca_cert="auth_serv/ca.pem", client_cert="auth_serv/user.pem",
private_key="auth_serv/user.key")
erp_test(dev[0], hapd, eap="TTLS", identity="erp-ttls@example.com",
password="password", ca_cert="auth_serv/ca.pem", phase2="auth=PAP")
def test_erp_key_lifetime_in_memory(dev, apdev, params):
"""ERP and key lifetime in memory"""
check_erp_capa(dev[0])
p = int_eap_server_params()
p['erp_send_reauth_start'] = '1'
p['erp_domain'] = 'example.com'
p['eap_server_erp'] = '1'
p['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], p)
password = "63d2d21ac3c09ed567ee004a34490f1d16e7fa5835edf17ddba70a63f1a90a25"
pid = find_wpas_process(dev[0])
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="pap-secret@example.com", password=password,
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
# The decrypted copy of GTK is freed only after the CTRL-EVENT-CONNECTED
# event has been delivered, so verify that wpa_supplicant has returned to
# eloop before reading process memory.
time.sleep(1)
dev[0].ping()
buf = read_process_memory(pid, password)
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].relog()
msk = None
emsk = None
rRK = None
rIK = None
pmk = None
ptk = None
gtk = None
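    # The hwsim debug log lines being parsed below look roughly like
    # (an assumption about the exact format):
    #     1234567890.123456: WPA: PMK - hexdump(len=32): 0a 1b 2c ...
    # so field 3 after splitting on ':' is the space-separated hex payload.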
with open(os.path.join(params['logdir'], 'log0'), 'r') as f:
for l in f.readlines():
if "EAP-TTLS: Derived key - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
msk = binascii.unhexlify(val)
if "EAP-TTLS: Derived EMSK - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
emsk = binascii.unhexlify(val)
if "EAP: ERP rRK - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
rRK = binascii.unhexlify(val)
if "EAP: ERP rIK - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
rIK = binascii.unhexlify(val)
if "WPA: PMK - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
pmk = binascii.unhexlify(val)
if "WPA: PTK - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
ptk = binascii.unhexlify(val)
if "WPA: Group Key - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
gtk = binascii.unhexlify(val)
if not msk or not emsk or not rIK or not rRK or not pmk or not ptk or not gtk:
raise Exception("Could not find keys from debug log")
if len(gtk) != 16:
raise Exception("Unexpected GTK length")
    kck = ptk[0:16]   # EAPOL-Key confirmation key
    kek = ptk[16:32]  # EAPOL-Key encryption key
    tk = ptk[32:48]   # temporal key (pairwise cipher)
fname = os.path.join(params['logdir'],
'erp_key_lifetime_in_memory.memctx-')
logger.info("Checking keys in memory while associated")
get_key_locations(buf, password, "Password")
get_key_locations(buf, pmk, "PMK")
get_key_locations(buf, msk, "MSK")
get_key_locations(buf, emsk, "EMSK")
get_key_locations(buf, rRK, "rRK")
get_key_locations(buf, rIK, "rIK")
if password not in buf:
raise HwsimSkip("Password not found while associated")
if pmk not in buf:
raise HwsimSkip("PMK not found while associated")
if kck not in buf:
raise Exception("KCK not found while associated")
if kek not in buf:
raise Exception("KEK not found while associated")
if tk in buf:
raise Exception("TK found from memory")
if gtk in buf:
get_key_locations(buf, gtk, "GTK")
raise Exception("GTK found from memory")
logger.info("Checking keys in memory after disassociation")
buf = read_process_memory(pid, password)
# Note: Password is still present in network configuration
# Note: PMK is in EAP fast re-auth data
get_key_locations(buf, password, "Password")
get_key_locations(buf, pmk, "PMK")
get_key_locations(buf, msk, "MSK")
get_key_locations(buf, emsk, "EMSK")
get_key_locations(buf, rRK, "rRK")
get_key_locations(buf, rIK, "rIK")
verify_not_present(buf, kck, fname, "KCK")
verify_not_present(buf, kek, fname, "KEK")
verify_not_present(buf, tk, fname, "TK")
verify_not_present(buf, gtk, fname, "GTK")
dev[0].request("RECONNECT")
ev = dev[0].wait_event(["CTRL-EVENT-EAP-SUCCESS"], timeout=15)
if ev is None:
raise Exception("EAP success timed out")
if "EAP re-authentication completed successfully" not in ev:
raise Exception("Did not use ERP")
dev[0].wait_connected(timeout=15, error="Reconnection timed out")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].relog()
pmk = None
ptk = None
gtk = None
with open(os.path.join(params['logdir'], 'log0'), 'r') as f:
for l in f.readlines():
if "WPA: PMK - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
pmk = binascii.unhexlify(val)
if "WPA: PTK - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
ptk = binascii.unhexlify(val)
if "WPA: GTK in EAPOL-Key - hexdump" in l:
val = l.strip().split(':')[3].replace(' ', '')
gtk = binascii.unhexlify(val)
if not pmk or not ptk or not gtk:
raise Exception("Could not find keys from debug log")
    kck = ptk[0:16]   # EAPOL-Key confirmation key
    kek = ptk[16:32]  # EAPOL-Key encryption key
    tk = ptk[32:48]   # temporal key (pairwise cipher)
logger.info("Checking keys in memory after ERP and disassociation")
buf = read_process_memory(pid, password)
# Note: Password is still present in network configuration
get_key_locations(buf, password, "Password")
get_key_locations(buf, pmk, "PMK")
get_key_locations(buf, msk, "MSK")
get_key_locations(buf, emsk, "EMSK")
get_key_locations(buf, rRK, "rRK")
get_key_locations(buf, rIK, "rIK")
verify_not_present(buf, kck, fname, "KCK")
verify_not_present(buf, kek, fname, "KEK")
verify_not_present(buf, tk, fname, "TK")
verify_not_present(buf, gtk, fname, "GTK")
dev[0].request("REMOVE_NETWORK all")
logger.info("Checking keys in memory after network profile removal")
buf = read_process_memory(pid, password)
# Note: rRK and rIK are still in memory
get_key_locations(buf, password, "Password")
get_key_locations(buf, pmk, "PMK")
get_key_locations(buf, msk, "MSK")
get_key_locations(buf, emsk, "EMSK")
get_key_locations(buf, rRK, "rRK")
get_key_locations(buf, rIK, "rIK")
verify_not_present(buf, password, fname, "password")
verify_not_present(buf, pmk, fname, "PMK")
verify_not_present(buf, kck, fname, "KCK")
verify_not_present(buf, kek, fname, "KEK")
verify_not_present(buf, tk, fname, "TK")
verify_not_present(buf, gtk, fname, "GTK")
verify_not_present(buf, msk, fname, "MSK")
verify_not_present(buf, emsk, fname, "EMSK")
dev[0].request("ERP_FLUSH")
logger.info("Checking keys in memory after ERP_FLUSH")
buf = read_process_memory(pid, password)
get_key_locations(buf, rRK, "rRK")
get_key_locations(buf, rIK, "rIK")
verify_not_present(buf, rRK, fname, "rRK")
verify_not_present(buf, rIK, fname, "rIK")
def test_erp_anonymous_identity(dev, apdev):
"""ERP and anonymous identity"""
check_erp_capa(dev[0])
params = int_eap_server_params()
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['eap_server_erp'] = '1'
params['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
for i in range(3):
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
ev = dev[0].wait_event(["CTRL-EVENT-EAP-SUCCESS"], timeout=15)
if ev is None:
raise Exception("EAP success timed out")
if "EAP re-authentication completed successfully" not in ev:
raise Exception("Did not use ERP")
dev[0].wait_connected(timeout=15, error="Reconnection timed out")
def test_erp_home_realm_oom(dev, apdev):
"""ERP and home realm OOM"""
check_erp_capa(dev[0])
params = int_eap_server_params()
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['eap_server_erp'] = '1'
params['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
for count in range(1, 3):
with alloc_fail(dev[0], count, "eap_get_realm"):
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412", wait_connect=False)
dev[0].wait_connected(timeout=10)
wait_fail_trigger(dev[0], "GET_ALLOC_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
for count in range(1, 3):
with alloc_fail(dev[0], count, "eap_get_realm"):
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412", wait_connect=False)
dev[0].wait_connected(timeout=10)
wait_fail_trigger(dev[0], "GET_ALLOC_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
for count in range(1, 3):
dev[0].request("ERP_FLUSH")
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412", wait_connect=False)
dev[0].wait_connected(timeout=10)
        # Only inject the allocation failure on the first iteration
        # (assumption: comparing the ``range`` builtin here was a typo for
        # the loop variable ``count``).
        if count > 1:
            continue
with alloc_fail(dev[0], count, "eap_get_realm"):
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
wait_fail_trigger(dev[0], "GET_ALLOC_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
def test_erp_local_errors(dev, apdev):
"""ERP and local error cases"""
check_erp_capa(dev[0])
params = int_eap_server_params()
params['erp_send_reauth_start'] = '1'
params['erp_domain'] = 'example.com'
params['eap_server_erp'] = '1'
params['disable_pmksa_caching'] = '1'
hapd = hostapd.add_ap(apdev[0], params)
dev[0].request("ERP_FLUSH")
with alloc_fail(dev[0], 1, "eap_peer_erp_init"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
for count in range(1, 6):
dev[0].request("ERP_FLUSH")
with fail_test(dev[0], count, "hmac_sha256_kdf;eap_peer_erp_init"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
dev[0].request("ERP_FLUSH")
with alloc_fail(dev[0], 1, "eap_msg_alloc;eap_peer_erp_reauth_start"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
wait_fail_trigger(dev[0], "GET_ALLOC_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
dev[0].request("ERP_FLUSH")
with fail_test(dev[0], 1, "hmac_sha256;eap_peer_erp_reauth_start"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
wait_fail_trigger(dev[0], "GET_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
dev[0].request("ERP_FLUSH")
with fail_test(dev[0], 1, "hmac_sha256;eap_peer_finish"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
wait_fail_trigger(dev[0], "GET_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
dev[0].request("ERP_FLUSH")
with alloc_fail(dev[0], 1, "eap_peer_erp_init"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("ERP_FLUSH")
with alloc_fail(dev[0], 1, "eap_peer_finish"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
wait_fail_trigger(dev[0], "GET_ALLOC_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
dev[0].request("ERP_FLUSH")
with fail_test(dev[0], 1, "hmac_sha256_kdf;eap_peer_finish"):
dev[0].connect("test-wpa2-eap", key_mgmt="WPA-EAP", eap="TTLS",
identity="erp-ttls@example.com",
anonymous_identity="anonymous@example.com",
password="password",
ca_cert="auth_serv/ca.pem", phase2="auth=PAP",
erp="1", scan_freq="2412")
dev[0].request("DISCONNECT")
dev[0].wait_disconnected(timeout=15)
dev[0].request("RECONNECT")
wait_fail_trigger(dev[0], "GET_FAIL")
dev[0].request("REMOVE_NETWORK all")
dev[0].wait_disconnected()
| 43.120579 | 103 | 0.603743 | 3,497 | 26,821 | 4.454104 | 0.091793 | 0.040318 | 0.038842 | 0.031202 | 0.786595 | 0.730868 | 0.708012 | 0.693824 | 0.675976 | 0.661723 | 0 | 0.047866 | 0.253011 | 26,821 | 621 | 104 | 43.190016 | 0.729573 | 0.044033 | 0 | 0.702602 | 0 | 0 | 0.268916 | 0.077034 | 0 | 0 | 0 | 0.00161 | 0 | 1 | 0.024164 | false | 0.079926 | 0.01487 | 0 | 0.042751 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
6ae7bb37a4c83569b7c2eb2bcb22a5475eded362 | 161 | py | Python | tests/integration/testdata/package/deep-nested/ChildStackX/ChildStackY/FunctionA/main_a_2.py | torresxb1/aws-sam-cli | d307f2eb6e1a91a476a5e2ca6070f974b0c913f1 | [
"BSD-2-Clause",
"Apache-2.0"
] | 2,959 | 2018-05-08T21:48:56.000Z | 2020-08-24T14:35:39.000Z | tests/integration/testdata/package/deep-nested/ChildStackX/ChildStackY/FunctionA/main_a_2.py | torresxb1/aws-sam-cli | d307f2eb6e1a91a476a5e2ca6070f974b0c913f1 | [
"BSD-2-Clause",
"Apache-2.0"
] | 1,469 | 2018-05-08T22:44:28.000Z | 2020-08-24T20:19:24.000Z | tests/integration/testdata/package/deep-nested/ChildStackX/ChildStackY/FunctionA/main_a_2.py | torresxb1/aws-sam-cli | d307f2eb6e1a91a476a5e2ca6070f974b0c913f1 | [
"BSD-2-Clause",
"Apache-2.0"
] | 642 | 2018-05-08T22:09:19.000Z | 2020-08-17T09:04:37.000Z | import json
def handler(event, context):
"""
FunctionA in leaf template
"""
return {"statusCode": 200, "body": json.dumps({"hello": "world"})}
| 17.888889 | 70 | 0.602484 | 18 | 161 | 5.388889 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.217391 | 161 | 8 | 71 | 20.125 | 0.746032 | 0.161491 | 0 | 0 | 0 | 0 | 0.201681 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
6ae8da45a9b76307ffc29faf0dea3f751be9935a | 69 | py | Python | torch_earlystop/__init__.py | chdre/pytorch-earlystop | 99c916f17c438504b31bc42ff4d449d31cd2661f | [
"MIT"
] | 1 | 2021-05-02T19:08:46.000Z | 2021-05-02T19:08:46.000Z | torch_earlystop/__init__.py | chdre/pytorch-earlystop | 99c916f17c438504b31bc42ff4d449d31cd2661f | [
"MIT"
] | null | null | null | torch_earlystop/__init__.py | chdre/pytorch-earlystop | 99c916f17c438504b31bc42ff4d449d31cd2661f | [
"MIT"
] | null | null | null | from . import torch_earlystop
earlystop = torch_earlystop.EarlyStop
| 17.25 | 37 | 0.84058 | 8 | 69 | 7 | 0.5 | 0.5 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 69 | 3 | 38 | 23 | 0.918033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
6af164476b0cb13bb07996c0959ade48f249a35f | 76 | py | Python | docs/wcwidth.py | tos-kamiya/fepl | e37c76d18cb6d11d90853eb9f27af0373a28478b | [
"Unlicense"
] | null | null | null | docs/wcwidth.py | tos-kamiya/fepl | e37c76d18cb6d11d90853eb9f27af0373a28478b | [
"Unlicense"
] | null | null | null | docs/wcwidth.py | tos-kamiya/fepl | e37c76d18cb6d11d90853eb9f27af0373a28478b | [
"Unlicense"
] | null | null | null | def wcswidth(s):
return sum((1 if 0 <= ord(c) < 256 else 2) for c in s)
| 25.333333 | 58 | 0.578947 | 17 | 76 | 2.588235 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 0.263158 | 76 | 2 | 59 | 38 | 0.678571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
0aa9286708fe986314b1cde6d1d11fed791a76a8 | 22,164 | py | Python | deform_conv_3d.py | KrakenLeaf/deform_conv_pytorch | abd9f734e2087b39bbd0be36ef3b4d5e5d834bbd | [
"Unlicense"
] | 7 | 2020-05-30T18:20:25.000Z | 2021-05-06T05:35:01.000Z | deform_conv_3d.py | KrakenLeaf/deform_conv_pytorch | abd9f734e2087b39bbd0be36ef3b4d5e5d834bbd | [
"Unlicense"
] | null | null | null | deform_conv_3d.py | KrakenLeaf/deform_conv_pytorch | abd9f734e2087b39bbd0be36ef3b4d5e5d834bbd | [
"Unlicense"
] | 7 | 2020-06-10T10:11:09.000Z | 2022-02-20T12:07:39.000Z | from torch.autograd import Variable, Function
import torch
from torch import nn
import numpy as np
# 2D
# ---------------------------------------------------------------------------------------------
# Original code of ChunhuanLin : https://github.com/ChunhuanLin/deform_conv_pytorch
class DeformConv2D(nn.Module):
def __init__(self, inc, outc, kernel_size=3, padding=1, bias=None):
super(DeformConv2D, self).__init__()
self.kernel_size = kernel_size
self.padding = padding
self.zero_padding = nn.ZeroPad2d(padding)
self.conv_kernel = nn.Conv2d(inc, outc, kernel_size=kernel_size, stride=kernel_size, bias=bias)
def forward(self, x, offset):
dtype = offset.data.type()
ks = self.kernel_size
N = offset.size(1) // 2
# Change offset's order from [x1, x2, ..., y1, y2, ...] to [x1, y1, x2, y2, ...]
# Codes below are written to make sure same results of MXNet implementation.
# You can remove them, and it won't influence the module's performance.
offsets_index = Variable(torch.cat([torch.arange(0, 2*N, 2), torch.arange(1, 2*N+1, 2)]), requires_grad=False).type_as(x).long()
offsets_index = offsets_index.unsqueeze(dim=0).unsqueeze(dim=-1).unsqueeze(dim=-1).expand(*offset.size())
offset = torch.gather(offset, dim=1, index=offsets_index)
# ------------------------------------------------------------------------
if self.padding:
x = self.zero_padding(x)
# interpolation points p (Eq. 2)
# ------------------------------
# (b, 2N, h, w)
p = self._get_p(offset, dtype)
# (b, h, w, 2N)
p = p.contiguous().permute(0, 2, 3, 1)
# Regular grid points q (Eq. 3)
# -----------------------------
q_lt = Variable(p.data, requires_grad=False).floor()
q_rb = q_lt + 1
q_lt = torch.cat([torch.clamp(q_lt[..., :N], 0, x.size(2)-1), torch.clamp(q_lt[..., N:], 0, x.size(3)-1)], dim=-1).long()
q_rb = torch.cat([torch.clamp(q_rb[..., :N], 0, x.size(2)-1), torch.clamp(q_rb[..., N:], 0, x.size(3)-1)], dim=-1).long()
q_lb = torch.cat([q_lt[..., :N], q_rb[..., N:]], -1)
q_rt = torch.cat([q_rb[..., :N], q_lt[..., N:]], -1)
# (b, h, w, N)
mask = torch.cat([p[..., :N].lt(self.padding)+p[..., :N].gt(x.size(2)-1-self.padding),
p[..., N:].lt(self.padding)+p[..., N:].gt(x.size(3)-1-self.padding)], dim=-1).type_as(p)
mask = mask.detach() # Detach from computational graph
floor_p = p - (p - torch.floor(p))
p = p*(1-mask) + floor_p*mask
p = torch.cat([torch.clamp(p[..., :N], 0, x.size(2)-1), torch.clamp(p[..., N:], 0, x.size(3)-1)], dim=-1)
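        # Sampling points that fell outside the padded feature map have been
        # snapped to their floor coordinates (via the mask above) and clamped
        # back into valid range.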
# Interpolation kernel (Eq. 4)
# -----------------------------
# bilinear kernel (b, h, w, N)
g_lt = (1 + (q_lt[..., :N].type_as(p) - p[..., :N])) * (1 + (q_lt[..., N:].type_as(p) - p[..., N:])) # g(qx, px) * g(qy, py)
g_rb = (1 - (q_rb[..., :N].type_as(p) - p[..., :N])) * (1 - (q_rb[..., N:].type_as(p) - p[..., N:]))
g_lb = (1 + (q_lb[..., :N].type_as(p) - p[..., :N])) * (1 - (q_lb[..., N:].type_as(p) - p[..., N:]))
g_rt = (1 - (q_rt[..., :N].type_as(p) - p[..., :N])) * (1 + (q_rt[..., N:].type_as(p) - p[..., N:]))
# Interpolation - x(q) (Eq. 3)
# ----------------------------
# (b, c, h, w, N)
x_q_lt = self._get_x_q(x, q_lt, N)
x_q_rb = self._get_x_q(x, q_rb, N)
x_q_lb = self._get_x_q(x, q_lb, N)
x_q_rt = self._get_x_q(x, q_rt, N)
# (b, c, h, w, N)
x_offset = g_lt.unsqueeze(dim=1) * x_q_lt + \
g_rb.unsqueeze(dim=1) * x_q_rb + \
g_lb.unsqueeze(dim=1) * x_q_lb + \
g_rt.unsqueeze(dim=1) * x_q_rt
x_offset = self._reshape_x_offset(x_offset, ks)
out = self.conv_kernel(x_offset)
return out
def _get_p_n(self, N, dtype):
p_n_x, p_n_y = np.meshgrid(range(-(self.kernel_size-1)//2, (self.kernel_size-1)//2+1),
range(-(self.kernel_size-1)//2, (self.kernel_size-1)//2+1), indexing='ij')
# (2N, 1)
p_n = np.concatenate((p_n_x.flatten(), p_n_y.flatten()))
p_n = np.reshape(p_n, (1, 2*N, 1, 1))
p_n = Variable(torch.from_numpy(p_n).type(dtype), requires_grad=False)
return p_n
@staticmethod
def _get_p_0(h, w, N, dtype):
p_0_x, p_0_y = np.meshgrid(range(1, h+1), range(1, w+1), indexing='ij')
p_0_x = p_0_x.flatten().reshape(1, 1, h, w).repeat(N, axis=1)
p_0_y = p_0_y.flatten().reshape(1, 1, h, w).repeat(N, axis=1)
p_0 = np.concatenate((p_0_x, p_0_y), axis=1)
p_0 = Variable(torch.from_numpy(p_0).type(dtype), requires_grad=False)
return p_0
def _get_p(self, offset, dtype):
N, h, w = offset.size(1)//2, offset.size(2), offset.size(3)
# (1, 2N, 1, 1)
p_n = self._get_p_n(N, dtype)
# (1, 2N, h, w)
p_0 = self._get_p_0(h, w, N, dtype)
p = p_0 + p_n + offset
return p
def _get_x_q(self, x, q, N):
b, h, w, _ = q.size()
padded_w = x.size(3)
c = x.size(1)
# (b, c, h*w)
x = x.contiguous().view(b, c, -1)
# (b, h, w, N)
index = q[..., :N]*padded_w + q[..., N:] # offset_x*w + offset_y
# (b, c, h*w*N)
index = index.contiguous().unsqueeze(dim=1).expand(-1, c, -1, -1, -1).contiguous().view(b, c, -1)
x_offset = x.gather(dim=-1, index=index).contiguous().view(b, c, h, w, N)
return x_offset
@staticmethod
def _reshape_x_offset(x_offset, ks):
b, c, h, w, N = x_offset.size()
x_offset = torch.cat([x_offset[..., s:s+ks].contiguous().view(b, c, h, w*ks) for s in range(0, N, ks)], dim=-1)
x_offset = x_offset.contiguous().view(b, c, h*ks, w*ks)
return x_offset
# 3D
# --------------------------------------------------------------------------------------------------------
class DeformConv3D(nn.Module):
def __init__(self, inc, outc=[], kernel_size=3, padding=1, bias=None):
super(DeformConv3D, self).__init__()
self.kernel_size = kernel_size
self.padding = padding
#self.zero_padding = nn.functional.pad(padding)
self.conv_kernel = nn.Conv3d(inc, outc, kernel_size=kernel_size, stride=kernel_size, bias=bias)
def forward(self, x, offset):
dtype = offset.data.type()
ks = self.kernel_size
# Out_channels = 3 * kernel_size_x * kernel_size_y * kernel_size_z
        M = offset.size(1)
        N = M // 3  # number of sampling locations per voxel (kernel volume)
if self.padding != 0:
# For simplicity we pad from both sides in all 3 dimensions
padding_use = (self.padding, self.padding, self.padding, self.padding ,self.padding, self.padding)
x = nn.functional.pad(x, padding_use, "constant", 0)
# Get input dimensions
b, c, h, w, d = x.size()
shape = (h, w, d)
# interpolation points p (Eq. 2)
# ------------------------------
# (b, 3N, h, w, d)
p = self._get_p(offset, dtype)
# (b, h, w, d, 3N)
p = p.contiguous().permute(0, 2, 3, 4, 1) # p = p_0 + p_n + offset
# Use grid_sample to interpolate
# ------------------------------
for ii in range(N):
            # Normalize flow field to take values in the range [-1, 1]
flow = p[..., [t for t in range(ii, M, N)]]
for jj in range(3):
flow[..., jj] = 2 * flow[..., jj] / (shape[jj] - 1) - 1
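            # Note: grid_sample interprets the last grid dimension as (x, y, z)
            # coordinates with x indexing the innermost (last) spatial axis of
            # the input; the offset channels are assumed to be produced in a
            # matching order.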
# Push through the spatial transformer
tmp = nn.functional.grid_sample(input=x, grid=flow, mode='bilinear', padding_mode='border').contiguous()
tmp = tmp.unsqueeze(dim=-1)
# Aggregate
if ii == 0:
xt = tmp
else:
xt = torch.cat((xt, tmp), dim=-1)
# For simplicity, ks is a scalar, implying kernel has same dimensions in all directions
x_offset = self._reshape_x_offset(xt, ks)
out = self.conv_kernel(x_offset)
return out
def _get_p_n(self, N, dtype):
p_n_x, p_n_y, p_n_z = np.meshgrid(range(-(self.kernel_size - 1) // 2, (self.kernel_size - 1) // 2 + 1),
range(-(self.kernel_size - 1) // 2, (self.kernel_size - 1) // 2 + 1),
range(-(self.kernel_size - 1) // 2, (self.kernel_size - 1) // 2 + 1),
indexing='ij')
        # (3N,)
p_n = np.concatenate((p_n_x.flatten(), p_n_y.flatten(), p_n_z.flatten()))
p_n = np.reshape(p_n, (1, 3*N, 1, 1, 1))
p_n = Variable(torch.from_numpy(p_n).type(dtype), requires_grad=False)
return p_n
@staticmethod
def _get_p_0(h, w, d, N, dtype):
#p_0_x, p_0_y, p_0_z = np.meshgrid(range(1, h + 1), range(1, w + 1), range(1, d + 1), indexing='ij') # 1,...,N
p_0_x, p_0_y, p_0_z = np.meshgrid(range(0, h), range(0, w), range(0, d), indexing='ij') # 0,...N-1
p_0_x = p_0_x.flatten().reshape(1, 1, h, w, d).repeat(N, axis=1)
p_0_y = p_0_y.flatten().reshape(1, 1, h, w, d).repeat(N, axis=1)
p_0_z = p_0_z.flatten().reshape(1, 1, h, w, d).repeat(N, axis=1)
p_0 = np.concatenate((p_0_x, p_0_y, p_0_z), axis=1)
p_0 = Variable(torch.from_numpy(p_0).type(dtype), requires_grad=False)
return p_0
def _get_p(self, offset, dtype):
N, h, w, d = offset.size(1)//3, offset.size(2), offset.size(3), offset.size(4)
# (1, 3N, 1, 1, 1)
p_n = self._get_p_n(N, dtype)
# (1, 3N, h, w, d)
p_0 = self._get_p_0(h, w, d, N, dtype)
p = p_0 + p_n + offset
return p
def _get_x_q(self, x, q, N):
b, h, w, d, _ = q.size()
ny = x.size(3) # Padded dimension y
nz = x.size(4) # Padded dimension z
c = x.size(1) # Number of channels in input x
# (b, c, h*w)
x = x.contiguous().view(b, c, -1)
# (b, h, w, d, N)
offset_x = q[..., :N]
offset_y = q[..., N:2*N]
offset_z = q[..., 2*N:]
# Convert subscripts to linear indices (i.e. Matlab's sub2ind)
index = offset_x * ny * nz + offset_y * nz + offset_z
# (b, c, h*w*d*N)
index = index.contiguous().unsqueeze(dim=1).expand(-1, c, -1, -1, -1, -1).contiguous().view(b, c, -1)
x_offset = x.gather(dim=-1, index=index).contiguous().view(b, c, h, w, d, N)
return x_offset
@staticmethod
def _reshape_x_offset(x_offset, ks):
'''
        This function arranges the interpolated x values in consecutive 3D
        blocks of size kernel_size x kernel_size x kernel_size. Since the
        Conv3d stride equals kernel_size, the convolution runs only over the
        offset cubes and writes its results to the proper locations.
        E.g. with ks=3, a (b, c, h, w, d, 27) tensor becomes
        (b, c, 3h, 3w, 3d).
        Note: we assume the kernel size is the same for all dimensions (cube)
'''
b, c, h, w, d, N = x_offset.size()
x_offset = torch.cat([x_offset[..., s:s + ks*ks].contiguous().view(b, c, h, w, d*ks*ks) for s in range(0, N, ks*ks)], dim=-1)
N = x_offset.size(4)
x_offset = torch.cat([x_offset[..., s:s + d*ks*ks].contiguous().view(b, c, h, w*ks, d*ks) for s in range(0, N, d*ks*ks)], dim=-1)
x_offset = x_offset.contiguous().view(b, c, h*ks, w*ks, d*ks)
return x_offset
# Alternative realization
# ----------------------------
# This realization directly extends the 2D version. However, for large inputs
# it is very memory-hungry, although faster than the grid_sample-based
# approach above.
class DeformConv3D_alternative(nn.Module):
def __init__(self, inc, outc=[], kernel_size=3, padding=1, bias=None):
super(DeformConv3D_alternative, self).__init__()
self.kernel_size = kernel_size
self.padding = padding
#self.zero_padding = nn.functional.pad(padding)
self.conv_kernel = nn.Conv3d(inc, outc, kernel_size=kernel_size, stride=kernel_size, bias=bias)
def forward(self, x, offset):
dtype = offset.data.type()
ks = self.kernel_size
# Out_channels = 3 * kernel_size_x * kernel_size_y * kernel_size_z
        N = offset.size(1) // 3  # number of sampling locations per voxel
if self.padding != 0:
# For simplicity we pad from both sides in all 3 dimensions
padding_use = (self.padding, self.padding, self.padding, self.padding ,self.padding, self.padding)
x = nn.functional.pad(x, padding_use, "constant", 0)
# interpolation points p (Eq. 2)
# ------------------------------
# (b, 3N, h, w, d)
p = self._get_p(offset, dtype)
# (b, h, w, d, 3N)
p = p.contiguous().permute(0, 2, 3, 4, 1) # p = p_0 + p_n + offset
# Regular grid points q (Eq. 3)
# -----------------------------
q_lt = Variable(p.data, requires_grad=False).floor()
q_rb = q_lt + 1
# Enumerate all integral locations in the feature map x
# Clamp values between 0 and size of input, per each direction XYZ
        # q_000 generalizes q_lt (the "floor" corner) and q_111 generalizes
        # q_rb (the "ceil" corner); the remaining six corners mix floor/ceil
        # per axis.
q_000 = torch.cat([torch.clamp(q_lt[..., :N], 0, x.size(2) - 1), # x.size() = b X c X h X w X d
torch.clamp(q_lt[..., N:2*N], 0, x.size(3) - 1),
torch.clamp(q_lt[..., 2*N:], 0, x.size(4) - 1)
], dim=-1).long()
q_111 = torch.cat([torch.clamp(q_rb[..., :N], 0, x.size(2) - 1),
torch.clamp(q_rb[..., N:2*N], 0, x.size(3) - 1),
torch.clamp(q_rb[..., 2*N:], 0, x.size(4) - 1)
], dim=-1).long()
q_001 = torch.cat([q_000[..., :N], q_000[..., N:2 * N], q_111[..., 2 * N:]], dim=-1)
q_010 = torch.cat([q_000[..., :N], q_111[..., N:2 * N], q_000[..., 2 * N:]], dim=-1)
q_011 = torch.cat([q_000[..., :N], q_111[..., N:2 * N], q_111[..., 2 * N:]], dim=-1)
q_100 = torch.cat([q_111[..., :N], q_000[..., N:2 * N], q_000[..., 2 * N:]], dim=-1)
q_101 = torch.cat([q_111[..., :N], q_000[..., N:2 * N], q_111[..., 2 * N:]], dim=-1)
q_110 = torch.cat([q_111[..., :N], q_111[..., N:2 * N], q_000[..., 2 * N:]], dim=-1)
# (b, h, w, d, N)
mask = torch.cat([p[..., :N].lt(self.padding) + p[..., :N].gt(x.size(2) - 1 - self.padding),
p[..., N:2*N].lt(self.padding) + p[..., N:2*N].gt(x.size(3) - 1 - self.padding),
p[..., 2*N:].lt(self.padding) + p[..., 2*N:].gt(x.size(4) - 1 - self.padding)
], dim=-1).type_as(p)
mask = mask.detach() # Detach from computational graph
floor_p = p - (p - torch.floor(p))
p = p*(1-mask) + floor_p*mask
p = torch.cat([torch.clamp(p[..., :N], 0, x.size(2) - 1),
torch.clamp(p[..., N:2*N], 0, x.size(3) - 1),
torch.clamp(p[..., 2*N:], 0, x.size(4) - 1)
], dim=-1)
# Interpolation kernel - x(q) (Eq. 4)
# -----------------------------------
        # trilinear kernel (b, h, w, d, N)
g_000 = (1 + (q_000[..., :N].type_as(p) - p[..., :N])) * (1 + (q_000[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 + (q_000[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
g_111 = (1 - (q_111[..., :N].type_as(p) - p[..., :N])) * (1 - (q_111[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 - (q_111[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
g_001 = (1 + (q_000[..., :N].type_as(p) - p[..., :N])) * (1 + (q_000[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 - (q_111[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
g_010 = (1 + (q_000[..., :N].type_as(p) - p[..., :N])) * (1 - (q_111[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 + (q_000[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
g_011 = (1 + (q_000[..., :N].type_as(p) - p[..., :N])) * (1 - (q_111[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 - (q_111[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
g_100 = (1 - (q_111[..., :N].type_as(p) - p[..., :N])) * (1 + (q_000[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 + (q_000[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
g_101 = (1 - (q_111[..., :N].type_as(p) - p[..., :N])) * (1 + (q_000[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 - (q_111[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
g_110 = (1 - (q_111[..., :N].type_as(p) - p[..., :N])) * (1 - (q_111[..., N:2 * N].type_as(p) - p[..., N:2 * N])) * (1 + (q_000[..., 2 * N:].type_as(p) - p[..., 2 * N:]))
# Interpolation - x(q) (Eq. 3)
# ----------------------------
# (b, c, h, w, d, N)
x_q_000 = self._get_x_q(x, q_000, N)
x_q_111 = self._get_x_q(x, q_111, N)
x_q_001 = self._get_x_q(x, q_001, N)
x_q_010 = self._get_x_q(x, q_010, N)
x_q_011 = self._get_x_q(x, q_011, N)
x_q_100 = self._get_x_q(x, q_100, N)
x_q_101 = self._get_x_q(x, q_101, N)
x_q_110 = self._get_x_q(x, q_110, N)
# (b, c, h, w, d, N)
x_offset = g_000.unsqueeze(dim=1) * x_q_000 + \
g_111.unsqueeze(dim=1) * x_q_111 + \
g_001.unsqueeze(dim=1) * x_q_001 + \
g_010.unsqueeze(dim=1) * x_q_010 + \
g_011.unsqueeze(dim=1) * x_q_011 + \
g_100.unsqueeze(dim=1) * x_q_100 + \
g_101.unsqueeze(dim=1) * x_q_101 + \
g_110.unsqueeze(dim=1) * x_q_110
# For simplicity, ks is a scalar, implying kernel has same dimensions in all directions
x_offset = self._reshape_x_offset(x_offset, ks)
out = self.conv_kernel(x_offset)
return out
def _get_p_n(self, N, dtype):
p_n_x, p_n_y, p_n_z = np.meshgrid(range(-(self.kernel_size - 1) // 2, (self.kernel_size - 1) // 2 + 1),
range(-(self.kernel_size - 1) // 2, (self.kernel_size - 1) // 2 + 1),
range(-(self.kernel_size - 1) // 2, (self.kernel_size - 1) // 2 + 1),
indexing='ij')
        # (3N,)
p_n = np.concatenate((p_n_x.flatten(), p_n_y.flatten(), p_n_z.flatten()))
p_n = np.reshape(p_n, (1, 3*N, 1, 1, 1))
p_n = Variable(torch.from_numpy(p_n).type(dtype), requires_grad=False)
return p_n
@staticmethod
def _get_p_0(h, w, d, N, dtype):
p_0_x, p_0_y, p_0_z = np.meshgrid(range(1, h+1), range(1, w+1), range(1, d+1), indexing='ij')
p_0_x = p_0_x.flatten().reshape(1, 1, h, w, d).repeat(N, axis=1)
p_0_y = p_0_y.flatten().reshape(1, 1, h, w, d).repeat(N, axis=1)
p_0_z = p_0_z.flatten().reshape(1, 1, h, w, d).repeat(N, axis=1)
p_0 = np.concatenate((p_0_x, p_0_y, p_0_z), axis=1)
p_0 = Variable(torch.from_numpy(p_0).type(dtype), requires_grad=False)
return p_0
def _get_p(self, offset, dtype):
N, h, w, d = offset.size(1)//3, offset.size(2), offset.size(3), offset.size(4)
# (1, 3N, 1, 1, 1)
p_n = self._get_p_n(N, dtype)
# (1, 3N, h, w, d)
p_0 = self._get_p_0(h, w, d, N, dtype)
p = p_0 + p_n + offset
return p
def _get_x_q(self, x, q, N):
b, h, w, d, _ = q.size()
ny = x.size(3) # Padded dimension y
nz = x.size(4) # Padded dimension z
c = x.size(1) # Number of channels in input x
# (b, c, h*w)
x = x.contiguous().view(b, c, -1)
# (b, h, w, d, N)
offset_x = q[..., :N]
offset_y = q[..., N:2*N]
offset_z = q[..., 2*N:]
# Convert subscripts to linear indices (i.e. Matlab's sub2ind)
index = offset_x * ny * nz + offset_y * nz + offset_z
# (b, c, h*w*d*N)
index = index.contiguous().unsqueeze(dim=1).expand(-1, c, -1, -1, -1, -1).contiguous().view(b, c, -1)
x_offset = x.gather(dim=-1, index=index).contiguous().view(b, c, h, w, d, N)
return x_offset
@staticmethod
def _reshape_x_offset(x_offset, ks):
'''
        This function arranges the interpolated x values in consecutive 3D
        blocks of size kernel_size x kernel_size x kernel_size. Since the
        Conv3d stride equals kernel_size, the convolution runs only over the
        offset cubes and writes its results to the proper locations.
        E.g. with ks=3, a (b, c, h, w, d, 27) tensor becomes
        (b, c, 3h, 3w, 3d).
        Note: we assume the kernel size is the same for all dimensions (cube)
'''
b, c, h, w, d, N = x_offset.size()
x_offset = torch.cat([x_offset[..., s:s + ks*ks].contiguous().view(b, c, h, w, d*ks*ks) for s in range(0, N, ks*ks)], dim=-1)
N = x_offset.size(4)
x_offset = torch.cat([x_offset[..., s:s + d*ks*ks].contiguous().view(b, c, h, w*ks, d*ks) for s in range(0, N, d*ks*ks)], dim=-1)
x_offset = x_offset.contiguous().view(b, c, h*ks, w*ks, d*ks)
return x_offset
# TESTS #
# --------------------------------------------------------------------------------------
if __name__ == '__main__':
# 2D test
# -------------------------
input_2d = torch.rand(1, 1, 6, 6).cuda() # Batch X Channels X Height X Width
offsets_2d = nn.Conv2d(1, 18, kernel_size=3, padding=1).cuda()
conv_2d = DeformConv2D(1, 4, kernel_size=3, padding=1).cuda()
offs_2d = offsets_2d(input_2d)
output_2d = conv_2d(input_2d, offs_2d)
output_2d.backward(output_2d.data)
print(output_2d.size())
# 3D test
# -------------------------
input_3d = torch.rand(10, 4, 6, 7, 5).cuda() # Batch X Channels X Height X Width X Depth
offsets_3d = nn.Conv3d(4, 81, kernel_size=3, padding=1).cuda() # Out_channels = 3 * kernel_size_x * kernel_size_y * kernel_size_z
conv_3d = DeformConv3D(4, 4, kernel_size=3, padding=1).cuda()
offs_3d = offsets_3d(input_3d)
output_3d = conv_3d(input_3d, offs_3d)
output_3d.backward(output_3d.data)
print(output_3d.size())
| 46.271399 | 178 | 0.504331 | 3,626 | 22,164 | 2.89134 | 0.075565 | 0.016024 | 0.010301 | 0.024418 | 0.799981 | 0.772701 | 0.745231 | 0.737219 | 0.714613 | 0.699447 | 0 | 0.053387 | 0.277432 | 22,164 | 478 | 179 | 46.368201 | 0.601249 | 0.195226 | 0 | 0.541379 | 0 | 0 | 0.002836 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072414 | false | 0 | 0.013793 | 0 | 0.158621 | 0.006897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0aacaeb6fbcd5a9bb73ec652340d07d2c6a0bb6e | 474 | py | Python | services/domain/api/crud/crud_base.py | gochronicles/monorepo-fastapi-postgresql | c76cd8b49ee58c1f55e31b4f2d5768f645ae0f5b | [
"MIT"
] | 1 | 2021-11-18T15:17:15.000Z | 2021-11-18T15:17:15.000Z | services/patient/api/crud/crud_base.py | gochronicles/monorepo-fastapi-postgresql | c76cd8b49ee58c1f55e31b4f2d5768f645ae0f5b | [
"MIT"
] | null | null | null | services/patient/api/crud/crud_base.py | gochronicles/monorepo-fastapi-postgresql | c76cd8b49ee58c1f55e31b4f2d5768f645ae0f5b | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
class CRUDBase(ABC):
def __init__(self):
pass
@abstractmethod
def create(self, **kwargs):
pass
@abstractmethod
def update(self, **kwargs):
pass
@abstractmethod
def get(self, **kwargs):
pass
@abstractmethod
def get_all(self):
pass
@abstractmethod
def delete(self, **kwargs):
pass
@abstractmethod
def delete_all(self):
pass
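
# A minimal sketch (hypothetical class, not part of the original module)
# showing how the abstract interface above is meant to be filled in:
class InMemoryCRUD(CRUDBase):
    """Illustrative in-memory implementation keyed by an `id` kwarg."""

    def __init__(self):
        super().__init__()
        self._items = {}

    def create(self, **kwargs):
        self._items[kwargs["id"]] = kwargs

    def update(self, **kwargs):
        self._items[kwargs["id"]].update(kwargs)

    def get(self, **kwargs):
        return self._items.get(kwargs["id"])

    def get_all(self):
        return list(self._items.values())

    def delete(self, **kwargs):
        self._items.pop(kwargs["id"], None)

    def delete_all(self):
        self._items.clear()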
| 15.290323 | 35 | 0.584388 | 48 | 474 | 5.645833 | 0.333333 | 0.398524 | 0.464945 | 0.413284 | 0.479705 | 0.250923 | 0 | 0 | 0 | 0 | 0 | 0 | 0.324895 | 474 | 30 | 36 | 15.8 | 0.846875 | 0 | 0 | 0.590909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.318182 | false | 0.318182 | 0.045455 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
0ab82f216f0e9b501cbc0f7b022a956e128f3b8e | 272 | py | Python | poop/hfdp/command/party/living_room_light_off_command.py | cassiobotaro/poop | fc218fbf638c50da8ea98dab7de26ad2a52e83f5 | [
"MIT"
] | 37 | 2020-12-27T00:13:07.000Z | 2022-01-31T19:30:18.000Z | poop/hfdp/command/party/living_room_light_off_command.py | cassiobotaro/poop | fc218fbf638c50da8ea98dab7de26ad2a52e83f5 | [
"MIT"
] | null | null | null | poop/hfdp/command/party/living_room_light_off_command.py | cassiobotaro/poop | fc218fbf638c50da8ea98dab7de26ad2a52e83f5 | [
"MIT"
] | 7 | 2020-12-26T22:33:47.000Z | 2021-11-07T01:29:59.000Z | from poop.hfdp.command.party.light import Light
class LivingRoomLightOffCommand:
def __init__(self, light: Light) -> None:
self.__light = light
def execute(self) -> None:
self.__light.off()
def undo(self) -> None:
self.__light.on()
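
# Usage sketch (assumes Light exposes on()/off() per the import above, and
# that a no-argument Light() constructor exists -- both are assumptions):
if __name__ == "__main__":
    light = Light()
    command = LivingRoomLightOffCommand(light)
    command.execute()  # turns the living-room light off
    command.undo()     # turns it back on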
| 20.923077 | 47 | 0.643382 | 33 | 272 | 5 | 0.515152 | 0.218182 | 0.236364 | 0.206061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238971 | 272 | 12 | 48 | 22.666667 | 0.797101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
0ac27790f2a634af70692670a2f3714b3fc227a9 | 157 | py | Python | mocket/__init__.py | fvigo/python-mocket | bce7cde62177bb23008ff57c84faaca1294b645d | [
"BSD-3-Clause"
] | null | null | null | mocket/__init__.py | fvigo/python-mocket | bce7cde62177bb23008ff57c84faaca1294b645d | [
"BSD-3-Clause"
] | null | null | null | mocket/__init__.py | fvigo/python-mocket | bce7cde62177bb23008ff57c84faaca1294b645d | [
"BSD-3-Clause"
] | null | null | null | from mocket.mocket import Mocket, MocketEntry, Mocketizer, mocketize
__all__ = ("mocketize", "Mocket", "MocketEntry", "Mocketizer")
__version__ = "3.9.38"
| 26.166667 | 68 | 0.738854 | 17 | 157 | 6.352941 | 0.647059 | 0.314815 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028777 | 0.11465 | 157 | 5 | 69 | 31.4 | 0.748201 | 0 | 0 | 0 | 0 | 0 | 0.267516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
0ac2947a517bfbc8c902146e52043256d0f94aaa | 198 | py | Python | python/main.py | CrumbleZ/dnd-spells | 8bdf2f5a23a297cca1e5d6d03f2afe972f983460 | [
"Fair"
] | null | null | null | python/main.py | CrumbleZ/dnd-spells | 8bdf2f5a23a297cca1e5d6d03f2afe972f983460 | [
"Fair"
] | 15 | 2018-08-24T08:18:48.000Z | 2022-03-02T15:00:19.000Z | python/main.py | CrumbleZ/dnd-spells | 8bdf2f5a23a297cca1e5d6d03f2afe972f983460 | [
"Fair"
] | null | null | null | from spells import Spell
import cards
if __name__ == "__main__":
#spell = Spell.get_spell("Abi-Dalzim’s Horrid Wilting")
spell = Spell.get_spell("Sleep")
cards.create_spell_card(spell)
| 24.75 | 59 | 0.722222 | 28 | 198 | 4.678571 | 0.607143 | 0.152672 | 0.198473 | 0.274809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161616 | 198 | 7 | 60 | 28.285714 | 0.789157 | 0.272727 | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
0ac5b20886f99508961e4d2615541bea07759c7f | 19,596 | py | Python | tests/st/scipy_st/test_ops.py | AK391/mindspore | f5aeaa9172dcd647885774e7f657593c81b79fc6 | [
"Apache-2.0"
] | null | null | null | tests/st/scipy_st/test_ops.py | AK391/mindspore | f5aeaa9172dcd647885774e7f657593c81b79fc6 | [
"Apache-2.0"
] | null | null | null | tests/st/scipy_st/test_ops.py | AK391/mindspore | f5aeaa9172dcd647885774e7f657593c81b79fc6 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""st for scipy.ops."""
from typing import Generic
from functools import reduce
import pytest
import numpy as np
import scipy as scp
from scipy.linalg import solve_triangular, eig, eigvals
from mindspore import Tensor, context
from mindspore.scipy.ops import EighNet, Eig, Cholesky, SolveTriangular
from mindspore.scipy.utils import _nd_transpose
from tests.st.scipy_st.utils import create_sym_pos_matrix, create_random_rank_matrix, compare_eigen_decomposition, \
match_exception_info
np.random.seed(0)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('n', [3, 5, 7])
@pytest.mark.parametrize('dtype', [np.float64])
def test_cholesky(n: int, dtype: Generic):
"""
Feature: ALL TO ALL
Description: test cases for cholesky [N,N]
    Expectation: the result matches scipy cholesky
"""
context.set_context(mode=context.GRAPH_MODE)
a = create_sym_pos_matrix((n, n), dtype)
tensor_a = Tensor(a)
expect = scp.linalg.cholesky(a, lower=True)
cholesky_net = Cholesky(clean=True)
output = cholesky_net(tensor_a)
assert np.allclose(expect, output.asnumpy())
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('shape', [(3, 4, 4), (3, 5, 5), (2, 3, 5, 5)])
@pytest.mark.parametrize('lower', [True, False])
@pytest.mark.parametrize('data_type', [np.float32, np.float64])
def test_batch_cholesky(shape, lower: bool, data_type):
"""
Feature: ALL To ALL
    Description: test cases for batched Cholesky decomposition of A[..., N, N]
    Expectation: the result matches scipy
"""
b_s_l = list()
b_s_a = list()
tmp = np.zeros(shape[:-2])
inner_row = shape[-2]
inner_col = shape[-1]
for _, _ in np.ndenumerate(tmp):
a = create_sym_pos_matrix((inner_row, inner_col), data_type)
s_l = scp.linalg.cholesky(a, lower)
b_s_l.append(s_l)
b_s_a.append(a)
tensor_b_a = Tensor(np.array(b_s_a))
b_m_l = Cholesky(clean=True)(tensor_b_a)
if not lower:
b_m_l = _nd_transpose(b_m_l)
b_s_l = np.asarray(b_s_l).reshape(b_m_l.shape)
rtol = 1.e-3
atol = 1.e-3
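    # float64 results warrant tighter tolerances than float32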
if data_type == np.float64:
rtol = 1.e-5
atol = 1.e-8
assert np.allclose(b_m_l.asnumpy(), b_s_l, rtol=rtol, atol=atol)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('shape', [(6, 6), (10, 10)])
@pytest.mark.parametrize('data_type, rtol, atol', [(np.float32, 1e-3, 1e-4), (np.float64, 1e-5, 1e-8),
(np.complex64, 1e-3, 1e-4), (np.complex128, 1e-5, 1e-8)])
def test_eig(shape, data_type, rtol, atol):
"""
Feature: ALL To ALL
Description: test cases for Eig operator
    Expectation: the result matches the eigenvalue definition and scipy eig
"""
context.set_context(mode=context.GRAPH_MODE)
a = create_random_rank_matrix(shape, data_type)
tensor_a = Tensor(a)
# Check Eig with eigenvalue definition
msp_w, msp_v = Eig(True)(tensor_a)
w, v = msp_w.asnumpy(), msp_v.asnumpy()
assert np.allclose(a @ v - v @ np.diag(w), np.zeros_like(a), rtol, atol)
# Check Eig with scipy eig
mw, mv = w, v
sw, sv = eig(a)
compare_eigen_decomposition((mw, mv), (sw, sv), True, rtol, atol)
# Eig only calculate eigenvalues when compute_v is False
mw = Eig(False)(tensor_a)
mw = mw.asnumpy()
sw = eigvals(a)
compare_eigen_decomposition((mw,), (sw,), False, rtol, atol)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('shape', [(2, 4, 4)])
@pytest.mark.parametrize('data_type, rtol, atol', [(np.float32, 1e-3, 1e-4), (np.float64, 1e-5, 1e-8),
(np.complex64, 1e-3, 1e-4), (np.complex128, 1e-5, 1e-8)])
def test_batch_eig(shape, data_type, rtol, atol):
"""
Feature: ALL To ALL
Description: test batch cases for Eig operator
    Expectation: the result matches the eigenvalue definition
"""
context.set_context(mode=context.GRAPH_MODE)
a = create_random_rank_matrix(shape, data_type)
tensor_a = Tensor(a)
# Check Eig with eigenvalue definition
msp_w, msp_v = Eig(True)(tensor_a)
w, v = msp_w.asnumpy(), msp_v.asnumpy()
batch_enum = np.empty(shape=shape[:-2])
for batch_index, _ in np.ndenumerate(batch_enum):
batch_a = a[batch_index]
batch_w = w[batch_index]
batch_v = v[batch_index]
assert np.allclose(batch_a @ batch_v - batch_v @ np.diag(batch_w), np.zeros_like(batch_a), rtol, atol)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('n', [4, 6, 9, 10])
def test_eigh_net(n: int):
"""
Feature: ALL To ALL
    Description: test cases for eigen decomposition, i.e. Ax = lambda * x, equivalently (A - lambda * E)x = 0
    Expectation: the result matches numpy
"""
context.set_context(mode=context.GRAPH_MODE)
rtol = 1e-3
atol = 1e-4
a = create_sym_pos_matrix((n, n), np.float32)
msp_eigh = EighNet(True, True)
msp_wl, msp_vl = msp_eigh(Tensor(np.array(a).astype(np.float32)))
msp_eigh = EighNet(True, False)
msp_wu, msp_vu = msp_eigh(Tensor(np.array(a).astype(np.float32)))
sym_al = (np.tril((np.tril(a) - np.tril(a).T)) + np.tril(a).T)
sym_au = (np.triu((np.triu(a) - np.triu(a).T)) + np.triu(a).T)
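    # sym_al / sym_au rebuild a full symmetric matrix from only the lower /
    # upper triangle of a, matching what eigh reads for each `lower` flag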
assert np.allclose(sym_al @ msp_vl.asnumpy() - msp_vl.asnumpy() @ np.diag(msp_wl.asnumpy()), np.zeros((n, n)), rtol,
atol)
assert np.allclose(sym_au @ msp_vu.asnumpy() - msp_vu.asnumpy() @ np.diag(msp_wu.asnumpy()), np.zeros((n, n)), rtol,
atol)
# test case for real scalar double 64
a = np.random.rand(n, n)
rtol = 1e-5
atol = 1e-8
msp_eigh = EighNet(True, True)
msp_wl, msp_vl = msp_eigh(Tensor(np.array(a).astype(np.float64)))
msp_eigh = EighNet(True, False)
msp_wu, msp_vu = msp_eigh(Tensor(np.array(a).astype(np.float64)))
sym_al = (np.tril((np.tril(a) - np.tril(a).T)) + np.tril(a).T)
sym_au = (np.triu((np.triu(a) - np.triu(a).T)) + np.triu(a).T)
assert np.allclose(sym_al @ msp_vl.asnumpy() - msp_vl.asnumpy() @ np.diag(msp_wl.asnumpy()), np.zeros((n, n)), rtol,
atol)
assert np.allclose(sym_au @ msp_vu.asnumpy() - msp_vu.asnumpy() @ np.diag(msp_wu.asnumpy()), np.zeros((n, n)), rtol,
atol)
# test for real scalar float64 no vector
msp_eigh = EighNet(False, True)
msp_wl0 = msp_eigh(Tensor(np.array(a).astype(np.float64)))
msp_eigh = EighNet(False, False)
msp_wu0 = msp_eigh(Tensor(np.array(a).astype(np.float64)))
assert np.allclose(msp_wl.asnumpy() - msp_wl0.asnumpy(), np.zeros((n, n)), rtol, atol)
assert np.allclose(msp_wu.asnumpy() - msp_wu0.asnumpy(), np.zeros((n, n)), rtol, atol)
# test case for complex64
rtol = 1e-3
atol = 1e-4
a = np.array(np.random.rand(n, n), dtype=np.complex64)
for i in range(0, n):
for j in range(0, n):
if i == j:
a[i][j] = complex(np.random.rand(1, 1), 0)
else:
a[i][j] = complex(np.random.rand(1, 1), np.random.rand(1, 1))
sym_al = (np.tril((np.tril(a) - np.tril(a).T)) + np.tril(a).conj().T)
sym_au = (np.triu((np.triu(a) - np.triu(a).T)) + np.triu(a).conj().T)
msp_eigh = EighNet(True, True)
msp_wl, msp_vl = msp_eigh(Tensor(np.array(a).astype(np.complex64)))
msp_eigh = EighNet(True, False)
msp_wu, msp_vu = msp_eigh(Tensor(np.array(a).astype(np.complex64)))
assert np.allclose(sym_al @ msp_vl.asnumpy() - msp_vl.asnumpy() @ np.diag(msp_wl.asnumpy()), np.zeros((n, n)), rtol,
atol)
assert np.allclose(sym_au @ msp_vu.asnumpy() - msp_vu.asnumpy() @ np.diag(msp_wu.asnumpy()), np.zeros((n, n)), rtol,
atol)
# test for complex128
rtol = 1e-5
atol = 1e-8
a = np.array(np.random.rand(n, n), dtype=np.complex128)
for i in range(0, n):
for j in range(0, n):
if i == j:
a[i][j] = complex(np.random.rand(1, 1), 0)
else:
a[i][j] = complex(np.random.rand(1, 1), np.random.rand(1, 1))
sym_al = (np.tril((np.tril(a) - np.tril(a).T)) + np.tril(a).conj().T)
sym_au = (np.triu((np.triu(a) - np.triu(a).T)) + np.triu(a).conj().T)
msp_eigh = EighNet(True, True)
msp_wl, msp_vl = msp_eigh(Tensor(np.array(a).astype(np.complex128)))
msp_eigh = EighNet(True, False)
msp_wu, msp_vu = msp_eigh(Tensor(np.array(a).astype(np.complex128)))
assert np.allclose(sym_al @ msp_vl.asnumpy() - msp_vl.asnumpy() @ np.diag(msp_wl.asnumpy()), np.zeros((n, n)), rtol,
atol)
assert np.allclose(sym_au @ msp_vu.asnumpy() - msp_vu.asnumpy() @ np.diag(msp_wu.asnumpy()), np.zeros((n, n)), rtol,
atol)
# test for real scalar complex128 no vector
msp_eigh = EighNet(False, True)
msp_wl0 = msp_eigh(Tensor(np.array(a).astype(np.complex128)))
msp_eigh = EighNet(False, False)
msp_wu0 = msp_eigh(Tensor(np.array(a).astype(np.complex128)))
assert np.allclose(msp_wl.asnumpy() - msp_wl0.asnumpy(), np.zeros((n, n)), rtol, atol)
assert np.allclose(msp_wu.asnumpy() - msp_wu0.asnumpy(), np.zeros((n, n)), rtol, atol)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('n', [10, 20])
@pytest.mark.parametrize('trans', ["N", "T", "C"])
@pytest.mark.parametrize('dtype', [np.float32, np.float64])
@pytest.mark.parametrize('lower', [False, True])
@pytest.mark.parametrize('unit_diagonal', [False])
def test_solve_triangular_2d(n: int, dtype, lower: bool, unit_diagonal: bool, trans: str):
"""
Feature: ALL TO ALL
Description: test cases for [N x N] X [N X 1]
    Expectation: the result matches scipy
"""
context.set_context(mode=context.GRAPH_MODE)
a = (np.random.random((n, n)) + np.eye(n)).astype(dtype)
b = np.random.random((n, 1)).astype(dtype)
expect = solve_triangular(a, b, lower=lower, unit_diagonal=unit_diagonal, trans=trans)
solve = SolveTriangular(lower, unit_diagonal, trans)
output = solve(Tensor(a), Tensor(b)).asnumpy()
np.testing.assert_almost_equal(expect, output, decimal=5)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('n', [10, 20])
@pytest.mark.parametrize('trans', ["N", "T", "C"])
@pytest.mark.parametrize('dtype', [np.float32, np.float64])
@pytest.mark.parametrize('lower', [False, True])
@pytest.mark.parametrize('unit_diagonal', [False, True])
def test_solve_triangular_1d(n: int, dtype, lower: bool, unit_diagonal: bool, trans: str):
"""
Feature: ALL TO ALL
Description: test cases for [N x N] X [N]
    Expectation: the result matches scipy
"""
context.set_context(mode=context.GRAPH_MODE)
a = (np.random.random((n, n)) + np.eye(n)).astype(dtype)
b = np.random.random(n).astype(dtype)
expect = solve_triangular(a, b, lower=lower, unit_diagonal=unit_diagonal, trans=trans)
solve = SolveTriangular(lower, unit_diagonal, trans)
output = solve(Tensor(a), Tensor(b)).asnumpy()
np.testing.assert_almost_equal(expect, output, decimal=5)
@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.platform_x86_gpu_training
@pytest.mark.env_onecard
@pytest.mark.parametrize('shape', [(4, 5), (10, 20)])
@pytest.mark.parametrize('trans', ["N", "T", "C"])
@pytest.mark.parametrize('dtype', [np.float32, np.float64])
@pytest.mark.parametrize('lower', [False, True])
@pytest.mark.parametrize('unit_diagonal', [False, True])
def test_solve_triangular_matrix(shape, dtype, lower: bool, unit_diagonal: bool, trans: str):
"""
Feature: ALL TO ALL
    Description: test cases for [M x M] X [M x N]
    Expectation: the result matches scipy
"""
if trans == 'T':
n, m = shape
else:
m, n = shape
context.set_context(mode=context.GRAPH_MODE)
a = (np.random.random((m, m)) + np.eye(m)).astype(dtype)
b = np.random.random((m, n)).astype(dtype)
expect = solve_triangular(a, b, lower=lower, unit_diagonal=unit_diagonal, trans=trans)
output = SolveTriangular(lower, unit_diagonal, trans)(Tensor(a), Tensor(b)).asnumpy()
np.testing.assert_almost_equal(expect, output, decimal=5)
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('n', [10, 20, 15])
@pytest.mark.parametrize('batch', [(3,), (4, 5)])
@pytest.mark.parametrize('trans', ["N", "T", "C"])
@pytest.mark.parametrize('dtype', [np.float32, np.float64])
@pytest.mark.parametrize('lower', [False, True])
@pytest.mark.parametrize('unit_diagonal', [False, True])
def test_solve_triangular_batched(n: int, batch, dtype, lower: bool, unit_diagonal: bool, trans: str):
"""
Feature: ALL TO ALL
    Description: test cases for the batched triangular matrix solver with shape [..., N, N]
    Expectation: the result matches the scipy solve_triangular result
"""
rtol, atol = 1.e-5, 1.e-8
if dtype == np.float32:
rtol, atol = 1.e-3, 1.e-3
np.random.seed(0)
a = create_random_rank_matrix(batch + (n, n), dtype)
b = create_random_rank_matrix(batch + (n,), dtype)
# mindspore
output = SolveTriangular(lower, unit_diagonal, trans)(Tensor(a), Tensor(b)).asnumpy()
# scipy
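    # scipy's solve_triangular has no batch support: flatten the batch dims,
    # solve each system individually, then stack and reshape for comparison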
batch_num = reduce(lambda x, y: x * y, batch)
a_array = a.reshape((batch_num, n, n))
b_array = b.reshape((batch_num, n))
expect = np.stack([solve_triangular(a_array[i, :], b_array[i, :], lower=lower,
unit_diagonal=unit_diagonal, trans=trans)
for i in range(batch_num)])
expect = expect.reshape(output.shape)
assert np.allclose(expect, output, rtol=rtol, atol=atol)
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
def test_solve_triangular_error_dims():
"""
Feature: ALL TO ALL
    Description: test cases for solve_triangular error handling on invalid `a` dimensions
    Expectation: solve_triangular raises the expected Exception
"""
# matrix a is 1D
a = create_random_rank_matrix((10,), dtype=np.float32)
b = create_random_rank_matrix((10,), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the dimension of `a` should be at least 2, but got 1 dimensions."
match_exception_info(err, msg)
# matrix a is not square matrix
a = create_random_rank_matrix((4, 5), dtype=np.float32)
b = create_random_rank_matrix((10,), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the last two dimensions of `a` should be the same, " \
"but got shape of [4, 5]. Please make sure that the shape of `a` be like [..., N, N]"
match_exception_info(err, msg)
a = create_random_rank_matrix((3, 5, 4, 5), dtype=np.float32)
b = create_random_rank_matrix((3, 5, 10,), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the last two dimensions of `a` should be the same," \
" but got shape of [3, 5, 4, 5]. Please make sure that the shape of `a` be like [..., N, N]"
match_exception_info(err, msg)
@pytest.mark.level0
@pytest.mark.platform_x86_gpu_training
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
def test_solve_triangular_error_dims_mismatched():
"""
Feature: ALL TO ALL
    Description: test cases for solve_triangular error handling on mismatched `a` and `b` dimensions
    Expectation: solve_triangular raises the expected Exception
"""
# dimension of a and b is not matched
a = create_random_rank_matrix((3, 4, 5, 5), dtype=np.float32)
b = create_random_rank_matrix((5, 10,), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the dimension of `b` should be 'a.dim' or 'a.dim' - 1, " \
"which is 4 or 3, but got 2 dimensions."
match_exception_info(err, msg)
# last two dimensions not matched
a = create_random_rank_matrix((3, 4, 5, 5), dtype=np.float32)
b = create_random_rank_matrix((5, 10, 4), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the last two dimensions of `a` and `b` should be matched, " \
"but got shape of [3, 4, 5, 5] and [5, 10, 4]. Please make sure that the shape of `a` " \
"and `b` be like [..., N, N] X [..., N, M] or [..., N, N] X [..., N]."
match_exception_info(err, msg)
a = create_random_rank_matrix((3, 4, 5, 5), dtype=np.float32)
b = create_random_rank_matrix((5, 10, 4, 1), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the last two dimensions of `a` and `b` should be matched, " \
"but got shape of [3, 4, 5, 5] and [5, 10, 4, 1]. Please make sure that the shape of `a` " \
"and `b` be like [..., N, N] X [..., N, M] or [..., N, N] X [..., N]."
print(err.value)
match_exception_info(err, msg)
# batch dimensions not matched
a = create_random_rank_matrix((3, 4, 5, 5), dtype=np.float32)
b = create_random_rank_matrix((5, 10, 5), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the batch dimensions of `a` and `b` should all be the same, " \
"but got shape of [3, 4, 5, 5] and [5, 10, 5]. Please make sure that " \
"the shape of `a` and `b` be like [a, b, c, ..., N, N] X [a, b, c, ..., N, M] " \
"or [a, b, c, ..., N, N] X [a, b, c, ..., N]."
match_exception_info(err, msg)
a = create_random_rank_matrix((3, 4, 5, 5), dtype=np.float32)
b = create_random_rank_matrix((5, 10, 5, 1), dtype=np.float32)
with pytest.raises(ValueError) as err:
SolveTriangular()(Tensor(a), Tensor(b))
msg = "For 'SolveTriangular', the batch dimensions of `a` and `b` should all be the same, " \
"but got shape of [3, 4, 5, 5] and [5, 10, 5, 1]. Please make sure that " \
"the shape of `a` and `b` be like [a, b, c, ..., N, N] X [a, b, c, ..., N, M] " \
"or [a, b, c, ..., N, N] X [a, b, c, ..., N]."
match_exception_info(err, msg)
| 42.786026 | 120 | 0.648142 | 3,051 | 19,596 | 4.022943 | 0.090462 | 0.059475 | 0.053039 | 0.037641 | 0.793221 | 0.76438 | 0.748982 | 0.736679 | 0.727554 | 0.712726 | 0 | 0.027977 | 0.195601 | 19,596 | 457 | 121 | 42.87965 | 0.750682 | 0.132935 | 0 | 0.606707 | 0 | 0.04878 | 0.105191 | 0 | 0 | 0 | 0 | 0 | 0.060976 | 1 | 0.033537 | false | 0 | 0.030488 | 0 | 0.064024 | 0.003049 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0ac7c535e7536469079c0d90f08690102a8c6f1c | 286 | py | Python | tests/test_robotson.py | opensanca/socialbot | e860d13dcdaf90b63c8a29c6af18b54d9e3d5ad2 | [
"MIT",
"Unlicense"
] | 5 | 2015-08-12T11:54:06.000Z | 2021-04-02T02:43:03.000Z | tests/test_robotson.py | opensanca/socialbot | e860d13dcdaf90b63c8a29c6af18b54d9e3d5ad2 | [
"MIT",
"Unlicense"
] | 3 | 2015-07-30T15:44:03.000Z | 2016-03-21T15:44:53.000Z | tests/test_robotson.py | opensanca/socialbot | e860d13dcdaf90b63c8a29c6af18b54d9e3d5ad2 | [
"MIT",
"Unlicense"
] | 2 | 2016-01-14T00:45:55.000Z | 2020-06-08T18:29:34.000Z | import unittest
class RobotsonTest(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_listen_slack(self):
pass
def test_talk_to_cleverbot(self):
pass
def test_post_on_social_networks(self):
pass
| 15.052632 | 43 | 0.636364 | 35 | 286 | 4.942857 | 0.571429 | 0.231214 | 0.254335 | 0.260116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.297203 | 286 | 18 | 44 | 15.888889 | 0.860697 | 0 | 0 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | false | 0.416667 | 0.083333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
0add3455c4d93761981e9f9c210f0d8ea7e7d1d7 | 176 | py | Python | mmdet/models/backbones/__init__.py | 711e/mmdetection | 89da8dbe4dbcfd7c92a184d54c7c87675e49c70c | [
"Apache-2.0"
] | null | null | null | mmdet/models/backbones/__init__.py | 711e/mmdetection | 89da8dbe4dbcfd7c92a184d54c7c87675e49c70c | [
"Apache-2.0"
] | null | null | null | mmdet/models/backbones/__init__.py | 711e/mmdetection | 89da8dbe4dbcfd7c92a184d54c7c87675e49c70c | [
"Apache-2.0"
] | null | null | null | from .resnet import ResNet
from .resnext import ResNeXt
from .ssd_vgg import SSDVGG
from .resnet_v1d import ResNet_v1d
__all__ = ['ResNet', 'ResNeXt', 'SSDVGG', 'ResNet_v1d']
| 25.142857 | 55 | 0.767045 | 25 | 176 | 5.08 | 0.36 | 0.212598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.130682 | 176 | 6 | 56 | 29.333333 | 0.810458 | 0 | 0 | 0 | 0 | 0 | 0.164773 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
e404773c218f571f16918c78928f0c5e2b7fb62e | 1,001 | py | Python | lib/rdfvalues/__init__.py | nahidupa/grr | 100a9d85ef2abb234e12e3ac2623caffb4116be7 | [
"Apache-2.0"
] | 1 | 2015-06-24T09:07:20.000Z | 2015-06-24T09:07:20.000Z | lib/rdfvalues/__init__.py | nahidupa/grr | 100a9d85ef2abb234e12e3ac2623caffb4116be7 | [
"Apache-2.0"
] | 3 | 2020-02-11T22:29:15.000Z | 2021-06-10T17:44:31.000Z | lib/rdfvalues/__init__.py | nahidupa/grr | 100a9d85ef2abb234e12e3ac2623caffb4116be7 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""AFF4 RDFValue implementations.
This module contains the various RDFValue implementations.
"""
# These imports need to register plugins, so: pylint: disable=unused-import
from grr.lib.rdfvalues import aff4_rdfvalues
from grr.lib.rdfvalues import anomaly
from grr.lib.rdfvalues import artifacts
from grr.lib.rdfvalues import checks
from grr.lib.rdfvalues import client
from grr.lib.rdfvalues import config_file
from grr.lib.rdfvalues import crypto
from grr.lib.rdfvalues import data_server
from grr.lib.rdfvalues import data_store
from grr.lib.rdfvalues import flows
from grr.lib.rdfvalues import foreman
from grr.lib.rdfvalues import grr_rdf
from grr.lib.rdfvalues import hunts
from grr.lib.rdfvalues import nsrl
from grr.lib.rdfvalues import paths
from grr.lib.rdfvalues import plist
from grr.lib.rdfvalues import protodict
from grr.lib.rdfvalues import rekall_types
from grr.lib.rdfvalues import stats
from grr.lib.rdfvalues import structs
from grr.lib.rdfvalues import webhistory
| 33.366667 | 66 | 0.831169 | 156 | 1,001 | 5.294872 | 0.314103 | 0.177966 | 0.254237 | 0.483051 | 0.645278 | 0.070218 | 0 | 0 | 0 | 0 | 0 | 0.002242 | 0.108891 | 1,001 | 29 | 67 | 34.517241 | 0.923767 | 0.175824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7c26ef0c5ea1d6ade566a053cb447b212dd78755 | 134 | py | Python | ex11.py | Cloudlie/pythonlearning | 347a2ea3b85450139e0718aec37ddf6998bd5678 | [
"MIT"
] | null | null | null | ex11.py | Cloudlie/pythonlearning | 347a2ea3b85450139e0718aec37ddf6998bd5678 | [
"MIT"
] | null | null | null | ex11.py | Cloudlie/pythonlearning | 347a2ea3b85450139e0718aec37ddf6998bd5678 | [
"MIT"
] | null | null | null | print 'first num ?',
age = raw_input()
print 'second num ?',
height = raw_input()
print "So you're %r old , %r tall ."%(age,
height)
| 16.75 | 42 | 0.626866 | 22 | 134 | 3.727273 | 0.636364 | 0.195122 | 0.317073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186567 | 134 | 7 | 43 | 19.142857 | 0.752294 | 0 | 0 | 0 | 0 | 0 | 0.380597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
7c37c45940357d1625b037b5ddfcaf8927b4dc85 | 265 | py | Python | syntax.py | Tasty-Kiwi/Pewlang | 1fb9fc72a6e46ee90f4ab1f2dfb289c61b38b6b5 | [
"WTFPL"
] | 1 | 2021-02-14T06:20:20.000Z | 2021-02-14T06:20:20.000Z | syntax.py | Tasty-Kiwi/Pewlang | 1fb9fc72a6e46ee90f4ab1f2dfb289c61b38b6b5 | [
"WTFPL"
] | null | null | null | syntax.py | Tasty-Kiwi/Pewlang | 1fb9fc72a6e46ee90f4ab1f2dfb289c61b38b6b5 | [
"WTFPL"
] | 1 | 2021-02-12T17:22:48.000Z | 2021-02-12T17:22:48.000Z | BRAINF = ( # You shouldn't touch this tuple
'>',
'<',
'+',
'-',
',',
'.',
'[',
']'
)
CUSTOM_LANG = (
"pew", # >
"Pew", # <
"pEw", # +
"peW", # -
"PEw", # ,
"pEW", # .
"PeW", # [
"PEW" # ]
)
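
# The two tuples above are parallel: CUSTOM_LANG[i] stands for BRAINF[i], so a
# translation table can be built by zipping them (illustrative sketch; the
# name TOKEN_MAP is not part of the original module):
TOKEN_MAP = dict(zip(CUSTOM_LANG, BRAINF))  # e.g. "pew" -> ">", "PEW" -> "]"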
| 12.045455 | 44 | 0.249057 | 17 | 265 | 3.823529 | 0.588235 | 0.646154 | 0.830769 | 0.923077 | 0.369231 | 0.369231 | 0.369231 | 0 | 0 | 0 | 0 | 0 | 0.456604 | 265 | 21 | 45 | 12.619048 | 0.451389 | 0.173585 | 0 | 0 | 0 | 0 | 0.15311 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7c6377cd1e23d2fb158d7f30d36a7b706b6c64a0 | 174 | py | Python | tests/context.py | duckduckmuse/domain-scan | c607217c58f630e750a96e45c293a6506276a50a | [
"CC0-1.0"
] | 367 | 2015-04-21T13:23:35.000Z | 2022-03-02T21:47:47.000Z | tests/context.py | duckduckmuse/domain-scan | c607217c58f630e750a96e45c293a6506276a50a | [
"CC0-1.0"
] | 219 | 2015-04-25T02:34:53.000Z | 2021-10-01T17:34:18.000Z | tests/context.py | duckduckmuse/domain-scan | c607217c58f630e750a96e45c293a6506276a50a | [
"CC0-1.0"
] | 148 | 2015-04-23T03:12:44.000Z | 2022-01-16T14:05:33.000Z | import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import gatherers # noqa
import scanners # noqa
import utils # noqa
| 21.75 | 82 | 0.729885 | 27 | 174 | 4.555556 | 0.518519 | 0.146341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006579 | 0.126437 | 174 | 7 | 83 | 24.857143 | 0.802632 | 0.08046 | 0 | 0 | 0 | 0 | 0.012821 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
7c8b66a0731dfed82416e69fccb9ce8e4792bc18 | 94 | py | Python | tests/__init__.py | lixuemin13/yz-core | 82774f807ac1002b77d0cc90f6695b1cc6ba0820 | [
"MIT"
] | 6 | 2021-01-26T10:27:04.000Z | 2022-03-19T16:13:12.000Z | tests/__init__.py | lixuemin13/yz-core | 82774f807ac1002b77d0cc90f6695b1cc6ba0820 | [
"MIT"
] | null | null | null | tests/__init__.py | lixuemin13/yz-core | 82774f807ac1002b77d0cc90f6695b1cc6ba0820 | [
"MIT"
] | 1 | 2021-01-27T02:11:55.000Z | 2021-01-27T02:11:55.000Z | #!/usr/bin/python3.6.8+
# -*- coding:utf-8 -*-
"""
@auth: cml
@date: 2021-01-23
@desc: ...
""" | 13.428571 | 23 | 0.510638 | 15 | 94 | 3.2 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 0.12766 | 94 | 7 | 24 | 13.428571 | 0.439024 | 0.882979 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
7cddf3b1f461ee24171a00b732f96c96286860a7 | 140 | py | Python | gaphor/misc/tests/test_init.py | albanobattistella/gaphor | 5fc6b0ff39ba6dbbb73cb9b111f32d1eda790e14 | [
"Apache-2.0"
] | 1 | 2020-11-27T12:39:15.000Z | 2020-11-27T12:39:15.000Z | gaphor/misc/tests/test_init.py | albanobattistella/gaphor | 5fc6b0ff39ba6dbbb73cb9b111f32d1eda790e14 | [
"Apache-2.0"
] | null | null | null | gaphor/misc/tests/test_init.py | albanobattistella/gaphor | 5fc6b0ff39ba6dbbb73cb9b111f32d1eda790e14 | [
"Apache-2.0"
] | 3 | 2020-01-23T14:13:59.000Z | 2020-02-18T18:21:47.000Z | from gaphor.misc import get_config_dir
def test_config_dir():
config_dir = get_config_dir()
assert config_dir.endswith("gaphor")
| 17.5 | 40 | 0.757143 | 21 | 140 | 4.666667 | 0.52381 | 0.459184 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157143 | 140 | 7 | 41 | 20 | 0.830508 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
6b31f58ed4774d18413b335058dc78bc95424219 | 20,847 | py | Python | older/fn_ldap_search/fn_ldap_search/util/customize.py | nickpartner-goahead/resilient-community-apps | 097c0dbefddbd221b31149d82af9809420498134 | [
"MIT"
] | 65 | 2017-12-04T13:58:32.000Z | 2022-03-24T18:33:17.000Z | older/fn_ldap_search/fn_ldap_search/util/customize.py | nickpartner-goahead/resilient-community-apps | 097c0dbefddbd221b31149d82af9809420498134 | [
"MIT"
] | 48 | 2018-03-02T19:17:14.000Z | 2022-03-09T22:00:38.000Z | older/fn_ldap_search/fn_ldap_search/util/customize.py | nickpartner-goahead/resilient-community-apps | 097c0dbefddbd221b31149d82af9809420498134 | [
"MIT"
] | 95 | 2018-01-11T16:23:39.000Z | 2022-03-21T11:34:29.000Z | # -*- coding: utf-8 -*-
"""Generate the Resilient customizations required for fn_ldap_search"""
from __future__ import print_function
from resilient_circuits.util import *
def customization_data(client=None):
"""Produce any customization definitions (types, fields, message destinations, etc)
that should be installed by `resilient-circuits customize`
"""
# This import data contains:
# Function inputs:
# ldap_param
# ldap_search_attributes
# ldap_search_base
# ldap_search_filter
# DataTables:
# ldap_query_results
# Message Destinations:
# ldap
# Functions:
# ldap_search
# Workflows:
# wf_ldap_search
# Rules:
# Example: LDAP Search - Person
yield ImportDefinition(u"""
eyJ0YXNrX29yZGVyIjogW10sICJ3b3JrZmxvd3MiOiBbeyJwcm9ncmFtbWF0aWNfbmFtZSI6ICJ3
Zl9sZGFwX3NlYXJjaCIsICJvYmplY3RfdHlwZSI6ICJhcnRpZmFjdCIsICJleHBvcnRfa2V5Ijog
IndmX2xkYXBfc2VhcmNoIiwgInV1aWQiOiAiOGExMzdkZTQtYjZkNi00YjJkLTkxMWMtYjMxNGQ2
ZjRkMzJhIiwgImxhc3RfbW9kaWZpZWRfYnkiOiAiYUBhLmNvbSIsICJuYW1lIjogIkV4YW1wbGU6
IExEQVAgU2VhcmNoIC0gUGVyc29uIiwgImNvbnRlbnQiOiB7InhtbCI6ICI8P3htbCB2ZXJzaW9u
PVwiMS4wXCIgZW5jb2Rpbmc9XCJVVEYtOFwiPz48ZGVmaW5pdGlvbnMgeG1sbnM9XCJodHRwOi8v
d3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMXCIgeG1sbnM6YnBtbmRpPVwiaHR0
cDovL3d3dy5vbWcub3JnL3NwZWMvQlBNTi8yMDEwMDUyNC9ESVwiIHhtbG5zOm9tZ2RjPVwiaHR0
cDovL3d3dy5vbWcub3JnL3NwZWMvREQvMjAxMDA1MjQvRENcIiB4bWxuczpvbWdkaT1cImh0dHA6
Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJXCIgeG1sbnM6cmVzaWxpZW50PVwiaHR0
cDovL3Jlc2lsaWVudC5pYm0uY29tL2JwbW5cIiB4bWxuczp4c2Q9XCJodHRwOi8vd3d3LnczLm9y
Zy8yMDAxL1hNTFNjaGVtYVwiIHhtbG5zOnhzaT1cImh0dHA6Ly93d3cudzMub3JnLzIwMDEvWE1M
U2NoZW1hLWluc3RhbmNlXCIgdGFyZ2V0TmFtZXNwYWNlPVwiaHR0cDovL3d3dy5jYW11bmRhLm9y
Zy90ZXN0XCI+PHByb2Nlc3MgaWQ9XCJ3Zl9sZGFwX3NlYXJjaFwiIGlzRXhlY3V0YWJsZT1cInRy
dWVcIiBuYW1lPVwiRXhhbXBsZTogTERBUCBTZWFyY2ggLSBQZXJzb25cIj48ZG9jdW1lbnRhdGlv
bj5FeGFtcGxlIHdvcmtmbG93IHdoaWNoIHJ1bnMgYSBwZXJzb24gcXVlcnkgYWdhaW5zdCBhbiBM
REFQIHNlcnZlci48L2RvY3VtZW50YXRpb24+PHN0YXJ0RXZlbnQgaWQ9XCJTdGFydEV2ZW50XzE1
NWFzeG1cIj48b3V0Z29pbmc+U2VxdWVuY2VGbG93XzA3d3h0Zmw8L291dGdvaW5nPjwvc3RhcnRF
dmVudD48c2VydmljZVRhc2sgaWQ9XCJTZXJ2aWNlVGFza18xaDQ0N2YxXCIgbmFtZT1cImxkYXBf
c2VhcmNoXCIgcmVzaWxpZW50OnR5cGU9XCJmdW5jdGlvblwiPjxleHRlbnNpb25FbGVtZW50cz48
cmVzaWxpZW50OmZ1bmN0aW9uIHV1aWQ9XCI1YjI0MDg4Ni02Y2MxLTQ0ZDQtOTk3Yi02NTI0MDFm
ZGFmZjZcIj57XCJpbnB1dHNcIjp7fSxcInByZV9wcm9jZXNzaW5nX3NjcmlwdFwiOlwiIyMgIExE
QVAgU2VhcmNoIFdvcmtmbG93IC0gcHJlLXByb2Nlc3Npbmcgc2NyaXB0ICMjXFxuaW5wdXRzLmxk
YXBfc2VhcmNoX2Jhc2UgPSBcXFwiZGM9ZXhhbXBsZSxkYz1jb21cXFwiXFxuaW5wdXRzLmxkYXBf
c2VhcmNoX2ZpbHRlciA9IFxcXCIoJmFtcDsob2JqZWN0Q2xhc3M9cGVyc29uKSh1aWQ9JWxkYXBf
cGFyYW0lKSlcXFwiXFxuaW5wdXRzLmxkYXBfc2VhcmNoX2F0dHJpYnV0ZXMgPSBcXFwiY24sc24s
bWFpbCx0ZWxlcGhvbmVOdW1iZXJcXFwiXFxuaW5wdXRzLmxkYXBfcGFyYW0gPSAgYXJ0aWZhY3Qu
dmFsdWVcIixcInJlc3VsdF9uYW1lXCI6XCJcIixcInBvc3RfcHJvY2Vzc2luZ19zY3JpcHRcIjpc
IiMjICBMREFQIHNlYXJjaCB3b3JrZmxvdyAtIHBvc3QgcHJvY2Vzc2luZyBzY3JpcHQgIyNcXG4j
IEV4YW1wbGUgb2YgZXhwZWN0ZWQgcmVzdWx0cyAtIE9wZW5MZGFwXFxuXFxcIlxcXCJcXFwiXFxu
J2VudHJpZXMnOiBbe1xcXCJkblxcXCI6IFxcXCJ1aWQ9bmV3dG9uLGRjPWV4YW1wbGUsZGM9Y29t
XFxcIiwgXFxcInRlbGVwaG9uZU51bWJlclxcXCI6IFtdLCBcXFwidWlkXFxcIjogW1xcXCJuZXd0
b25cXFwiXSxcXG4gICAgXFxcIm1haWxcXFwiOiBbXFxcIm5ld3RvbkBsZGFwLmZvcnVtc3lzLmNv
bVxcXCJdLCBcXFwic25cXFwiOiBbXFxcIk5ld3RvblxcXCJdLCBcXFwiY25cXFwiOiBbXFxcIklz
YWFjIE5ld3RvblxcXCJdfSxcXG4gICAge1xcXCJkblxcXCI6IFxcXCJ1aWQ9ZWluc3RlaW4sZGM9
ZXhhbXBsZSxkYz1jb21cXFwiLCBcXFwidGVsZXBob25lTnVtYmVyXFxcIjogW1xcXCIzMTQtMTU5
LTI2NTNcXFwiXSwgXFxcInVpZFxcXCI6IFtcXFwiZWluc3RlaW5cXFwiXSxcXG4gICAgXFxcIm1h
aWxcXFwiOiBbXFxcImVpbnN0ZWluQGxkYXAuZm9ydW1zeXMuY29tXFxcIl0sIFxcXCJzblxcXCI6
IFtcXFwiRWluc3RlaW5cXFwiXSwgXFxcImNuXFxcIjogW1xcXCJBbGJlcnQgRWluc3RlaW5cXFwi
XX1dXFxuXFxcIlxcXCJcXFwiXFxuIyBFeGFtcGxlIG9mIGV4cGVjdGVkIHJlc3VsdHMgLSBBRFxc
blxcXCJcXFwiXFxcIlxcbiAnZW50cmllcyc6IFt7dSdkbic6IHUnQ049SXNhYWMgTmV3dG9uLE9V
PUlCTVJlc2lsaWVudCxEQz1pYm0sREM9cmVzaWxpZW50LERDPWNvbScsIFxcbiAgICAgICAgICAg
ICAgdSd0ZWxlcGhvbmVOdW1iZXInOiB1JzMxNC0xNTktMjY1MycsIHUnY24nOiB1J0lzYWFjIE5l
d3RvbicsIFxcbiAgICAgICAgICAgICAgdSdtYWlsJzogdSdlaW5zdGVpbkByZXNpbGllbnQuaWJt
LmNvbScsIHUnc24nOiB1J05ld3Rvbid9XVxcblxcXCJcXFwiXFxcIlxcbiMgIEdsb2JhbHNcXG5F
TlRSWV9UT19EQVRBVEFCTEVfTUFQID0ge1xcbiAgIFxcXCJ1aWRcXFwiOiBcXFwidWlkXFxcIixc
XG4gICBcXFwiY25cXFwiOiBcXFwiZnVsbG5hbWVcXFwiLFxcbiAgIFxcXCJzblxcXCI6IFxcXCJz
dXJuYW1lXFxcIixcXG4gICBcXFwibWFpbFxcXCI6IFxcXCJlbWFpbF9hZGRyZXNzXFxcIixcXG4g
ICBcXFwidGVsZXBob25lTnVtYmVyXFxcIjogXFxcInRlbGVwaG9uZV9udW1iZXJcXFwiXFxufVxc
biMgUHJvY2Vzc2luZ1xcbmZvciBlbnRyeSBpbiByZXN1bHRzLmVudHJpZXM6XFxuIGlmIGVudHJ5
IGlzIE5vbmU6XFxuICAgICBicmVha1xcbiBlbHNlOlxcbiAgICAgcm93ID0gaW5jaWRlbnQuYWRk
Um93KFxcXCJsZGFwX3F1ZXJ5X3Jlc3VsdHNcXFwiKVxcbiBmb3IgayBpbiBFTlRSWV9UT19EQVRB
VEFCTEVfTUFQOlxcbiAgICBpZiBlbnRyeVtrXSBpcyBOb25lOlxcbiAgICAgICAgcm93W0VOVFJZ
X1RPX0RBVEFUQUJMRV9NQVBba11dID0gXFxcIk4vQVxcXCJcXG4gICAgZWxzZTpcXG4gICAgICAg
IHRyeTpcXG4gICAgICAgICAgICBpZiBpc2luc3RhbmNlKGVudHJ5W2tdLCB1bmljb2RlKSBvciBs
ZW4oZW50cnlba10pID09IDA6XFxuICAgICAgICAgICAgICAgIHJvd1tFTlRSWV9UT19EQVRBVEFC
TEVfTUFQW2tdXSA9IHN0cihlbnRyeVtrXSlcXG4gICAgICAgICAgICBlbHNlOlxcbiAgICAgICAg
ICAgICAgICByb3dbRU5UUllfVE9fREFUQVRBQkxFX01BUFtrXV0gPSBzdHIoZW50cnlba11bMF0p
XFxuICAgICAgICBleGNlcHQgSW5kZXhFcnJvcjpcXG4gICAgICAgICAgICByb3dbRU5UUllfVE9f
REFUQVRBQkxFX01BUFtrXV0gPSBcXFwiTi9BXFxcIlxcblxcblxcblwifTwvcmVzaWxpZW50OmZ1
bmN0aW9uPjwvZXh0ZW5zaW9uRWxlbWVudHM+PGluY29taW5nPlNlcXVlbmNlRmxvd18wN3d4dGZs
PC9pbmNvbWluZz48b3V0Z29pbmc+U2VxdWVuY2VGbG93XzBraWc2MXc8L291dGdvaW5nPjwvc2Vy
dmljZVRhc2s+PHNlcXVlbmNlRmxvdyBpZD1cIlNlcXVlbmNlRmxvd18wN3d4dGZsXCIgc291cmNl
UmVmPVwiU3RhcnRFdmVudF8xNTVhc3htXCIgdGFyZ2V0UmVmPVwiU2VydmljZVRhc2tfMWg0NDdm
MVwiLz48ZW5kRXZlbnQgaWQ9XCJFbmRFdmVudF8xa2gwajZuXCI+PGluY29taW5nPlNlcXVlbmNl
Rmxvd18wa2lnNjF3PC9pbmNvbWluZz48L2VuZEV2ZW50PjxzZXF1ZW5jZUZsb3cgaWQ9XCJTZXF1
ZW5jZUZsb3dfMGtpZzYxd1wiIHNvdXJjZVJlZj1cIlNlcnZpY2VUYXNrXzFoNDQ3ZjFcIiB0YXJn
ZXRSZWY9XCJFbmRFdmVudF8xa2gwajZuXCIvPjx0ZXh0QW5ub3RhdGlvbiBpZD1cIlRleHRBbm5v
dGF0aW9uXzFreHhpeXRcIj48dGV4dD5TdGFydCB5b3VyIHdvcmtmbG93IGhlcmU8L3RleHQ+PC90
ZXh0QW5ub3RhdGlvbj48YXNzb2NpYXRpb24gaWQ9XCJBc3NvY2lhdGlvbl8xc2V1ajQ4XCIgc291
cmNlUmVmPVwiU3RhcnRFdmVudF8xNTVhc3htXCIgdGFyZ2V0UmVmPVwiVGV4dEFubm90YXRpb25f
MWt4eGl5dFwiLz48L3Byb2Nlc3M+PGJwbW5kaTpCUE1ORGlhZ3JhbSBpZD1cIkJQTU5EaWFncmFt
XzFcIj48YnBtbmRpOkJQTU5QbGFuZSBicG1uRWxlbWVudD1cInVuZGVmaW5lZFwiIGlkPVwiQlBN
TlBsYW5lXzFcIj48YnBtbmRpOkJQTU5TaGFwZSBicG1uRWxlbWVudD1cIlN0YXJ0RXZlbnRfMTU1
YXN4bVwiIGlkPVwiU3RhcnRFdmVudF8xNTVhc3htX2RpXCI+PG9tZ2RjOkJvdW5kcyBoZWlnaHQ9
XCIzNlwiIHdpZHRoPVwiMzZcIiB4PVwiMTYyXCIgeT1cIjE4OFwiLz48YnBtbmRpOkJQTU5MYWJl
bD48b21nZGM6Qm91bmRzIGhlaWdodD1cIjBcIiB3aWR0aD1cIjkwXCIgeD1cIjE1N1wiIHk9XCIy
MjNcIi8+PC9icG1uZGk6QlBNTkxhYmVsPjwvYnBtbmRpOkJQTU5TaGFwZT48YnBtbmRpOkJQTU5T
aGFwZSBicG1uRWxlbWVudD1cIlRleHRBbm5vdGF0aW9uXzFreHhpeXRcIiBpZD1cIlRleHRBbm5v
dGF0aW9uXzFreHhpeXRfZGlcIj48b21nZGM6Qm91bmRzIGhlaWdodD1cIjMwXCIgd2lkdGg9XCIx
MDBcIiB4PVwiOTlcIiB5PVwiMjU0XCIvPjwvYnBtbmRpOkJQTU5TaGFwZT48YnBtbmRpOkJQTU5F
ZGdlIGJwbW5FbGVtZW50PVwiQXNzb2NpYXRpb25fMXNldWo0OFwiIGlkPVwiQXNzb2NpYXRpb25f
MXNldWo0OF9kaVwiPjxvbWdkaTp3YXlwb2ludCB4PVwiMTY5XCIgeHNpOnR5cGU9XCJvbWdkYzpQ
b2ludFwiIHk9XCIyMjBcIi8+PG9tZ2RpOndheXBvaW50IHg9XCIxNTNcIiB4c2k6dHlwZT1cIm9t
Z2RjOlBvaW50XCIgeT1cIjI1NFwiLz48L2JwbW5kaTpCUE1ORWRnZT48YnBtbmRpOkJQTU5TaGFw
ZSBicG1uRWxlbWVudD1cIlNlcnZpY2VUYXNrXzFoNDQ3ZjFcIiBpZD1cIlNlcnZpY2VUYXNrXzFo
NDQ3ZjFfZGlcIj48b21nZGM6Qm91bmRzIGhlaWdodD1cIjgwXCIgd2lkdGg9XCIxMDBcIiB4PVwi
MjY2XCIgeT1cIjE2NlwiLz48L2JwbW5kaTpCUE1OU2hhcGU+PGJwbW5kaTpCUE1ORWRnZSBicG1u
RWxlbWVudD1cIlNlcXVlbmNlRmxvd18wN3d4dGZsXCIgaWQ9XCJTZXF1ZW5jZUZsb3dfMDd3eHRm
bF9kaVwiPjxvbWdkaTp3YXlwb2ludCB4PVwiMTk4XCIgeHNpOnR5cGU9XCJvbWdkYzpQb2ludFwi
IHk9XCIyMDZcIi8+PG9tZ2RpOndheXBvaW50IHg9XCIyNjZcIiB4c2k6dHlwZT1cIm9tZ2RjOlBv
aW50XCIgeT1cIjIwNlwiLz48YnBtbmRpOkJQTU5MYWJlbD48b21nZGM6Qm91bmRzIGhlaWdodD1c
IjEzXCIgd2lkdGg9XCIwXCIgeD1cIjIzMlwiIHk9XCIxODRcIi8+PC9icG1uZGk6QlBNTkxhYmVs
PjwvYnBtbmRpOkJQTU5FZGdlPjxicG1uZGk6QlBNTlNoYXBlIGJwbW5FbGVtZW50PVwiRW5kRXZl
bnRfMWtoMGo2blwiIGlkPVwiRW5kRXZlbnRfMWtoMGo2bl9kaVwiPjxvbWdkYzpCb3VuZHMgaGVp
Z2h0PVwiMzZcIiB3aWR0aD1cIjM2XCIgeD1cIjQyOFwiIHk9XCIxODhcIi8+PGJwbW5kaTpCUE1O
TGFiZWw+PG9tZ2RjOkJvdW5kcyBoZWlnaHQ9XCIxM1wiIHdpZHRoPVwiMFwiIHg9XCI0NDZcIiB5
PVwiMjI3XCIvPjwvYnBtbmRpOkJQTU5MYWJlbD48L2JwbW5kaTpCUE1OU2hhcGU+PGJwbW5kaTpC
UE1ORWRnZSBicG1uRWxlbWVudD1cIlNlcXVlbmNlRmxvd18wa2lnNjF3XCIgaWQ9XCJTZXF1ZW5j
ZUZsb3dfMGtpZzYxd19kaVwiPjxvbWdkaTp3YXlwb2ludCB4PVwiMzY2XCIgeHNpOnR5cGU9XCJv
bWdkYzpQb2ludFwiIHk9XCIyMDZcIi8+PG9tZ2RpOndheXBvaW50IHg9XCI0MjhcIiB4c2k6dHlw
ZT1cIm9tZ2RjOlBvaW50XCIgeT1cIjIwNlwiLz48YnBtbmRpOkJQTU5MYWJlbD48b21nZGM6Qm91
bmRzIGhlaWdodD1cIjEzXCIgd2lkdGg9XCIwXCIgeD1cIjM5N1wiIHk9XCIxODRcIi8+PC9icG1u
ZGk6QlBNTkxhYmVsPjwvYnBtbmRpOkJQTU5FZGdlPjwvYnBtbmRpOkJQTU5QbGFuZT48L2JwbW5k
aTpCUE1ORGlhZ3JhbT48L2RlZmluaXRpb25zPiIsICJ3b3JrZmxvd19pZCI6ICJ3Zl9sZGFwX3Nl
YXJjaCIsICJ2ZXJzaW9uIjogMTZ9LCAid29ya2Zsb3dfaWQiOiAxLCAiYWN0aW9ucyI6IFtdLCAi
bGFzdF9tb2RpZmllZF90aW1lIjogMTUyOTk0NzMwMDczMiwgImNyZWF0b3JfaWQiOiAiYUBhLmNv
bSIsICJkZXNjcmlwdGlvbiI6ICJFeGFtcGxlIHdvcmtmbG93IHdoaWNoIHJ1bnMgYSBwZXJzb24g
cXVlcnkgYWdhaW5zdCBhbiBMREFQIHNlcnZlci4ifV0sICJhY3Rpb25zIjogW3sibG9naWNfdHlw
ZSI6ICJhbGwiLCAibmFtZSI6ICJFeGFtcGxlOiBMREFQIFNlYXJjaCAtIFBlcnNvbiIsICJ2aWV3
X2l0ZW1zIjogW10sICJ0eXBlIjogMSwgIndvcmtmbG93cyI6IFsid2ZfbGRhcF9zZWFyY2giXSwg
Im9iamVjdF90eXBlIjogImFydGlmYWN0IiwgInRpbWVvdXRfc2Vjb25kcyI6IDg2NDAwLCAidXVp
ZCI6ICJhNGJmZTFhMC1mNjk0LTQ2NzUtYjRiYy04ZmEwODE5MWExNDAiLCAiYXV0b21hdGlvbnMi
OiBbXSwgImV4cG9ydF9rZXkiOiAiRXhhbXBsZTogTERBUCBTZWFyY2ggLSBQZXJzb24iLCAiY29u
ZGl0aW9ucyI6IFtdLCAiaWQiOiAyNywgIm1lc3NhZ2VfZGVzdGluYXRpb25zIjogW119XSwgImxh
eW91dHMiOiBbXSwgImV4cG9ydF9mb3JtYXRfdmVyc2lvbiI6IDIsICJpZCI6IDMsICJpbmR1c3Ry
aWVzIjogbnVsbCwgInBoYXNlcyI6IFtdLCAiYWN0aW9uX29yZGVyIjogW10sICJnZW9zIjogbnVs
bCwgInNlcnZlcl92ZXJzaW9uIjogeyJtYWpvciI6IDMwLCAidmVyc2lvbiI6ICIzMC4wLjAiLCAi
YnVpbGRfbnVtYmVyIjogMCwgIm1pbm9yIjogMH0sICJ0aW1lZnJhbWVzIjogbnVsbCwgIndvcmtz
cGFjZXMiOiBbXSwgImF1dG9tYXRpY190YXNrcyI6IFtdLCAiZnVuY3Rpb25zIjogW3siZGlzcGxh
eV9uYW1lIjogImxkYXBfc2VhcmNoIiwgInV1aWQiOiAiNWIyNDA4ODYtNmNjMS00NGQ0LTk5N2It
NjUyNDAxZmRhZmY2IiwgImNyZWF0b3IiOiB7ImRpc3BsYXlfbmFtZSI6ICJSZXNpbGllbnQgU3lz
YWRtaW4iLCAidHlwZSI6ICJ1c2VyIiwgImlkIjogNCwgIm5hbWUiOiAiYUBhLmNvbSJ9LCAidmll
d19pdGVtcyI6IFt7InNob3dfaWYiOiBudWxsLCAiZmllbGRfdHlwZSI6ICJfX2Z1bmN0aW9uIiwg
InNob3dfbGlua19oZWFkZXIiOiBmYWxzZSwgImVsZW1lbnQiOiAiZmllbGRfdXVpZCIsICJjb250
ZW50IjogImU3MmU0NjllLTM0MmUtNGVmOS04ODYzLWQ4NGRlZTI3NzU4YSIsICJzdGVwX2xhYmVs
IjogbnVsbH0sIHsic2hvd19pZiI6IG51bGwsICJmaWVsZF90eXBlIjogIl9fZnVuY3Rpb24iLCAi
c2hvd19saW5rX2hlYWRlciI6IGZhbHNlLCAiZWxlbWVudCI6ICJmaWVsZF91dWlkIiwgImNvbnRl
bnQiOiAiYTU1MGUwMTEtMzE4ZS00YjdjLWE0ZWUtN2VjMDExZWUwNDQ3IiwgInN0ZXBfbGFiZWwi
OiBudWxsfSwgeyJzaG93X2lmIjogbnVsbCwgImZpZWxkX3R5cGUiOiAiX19mdW5jdGlvbiIsICJz
aG93X2xpbmtfaGVhZGVyIjogZmFsc2UsICJlbGVtZW50IjogImZpZWxkX3V1aWQiLCAiY29udGVu
dCI6ICJkNDQ5YWE1My0yMGU4LTRhZjAtYmI2MS01MjVmNDIwOGIwN2IiLCAic3RlcF9sYWJlbCI6
IG51bGx9LCB7InNob3dfaWYiOiBudWxsLCAiZmllbGRfdHlwZSI6ICJfX2Z1bmN0aW9uIiwgInNo
b3dfbGlua19oZWFkZXIiOiBmYWxzZSwgImVsZW1lbnQiOiAiZmllbGRfdXVpZCIsICJjb250ZW50
IjogIjg0NmRmZjRmLTcxYjYtNGFiMi1iZmQzLWY5ZmYwMmUyNDY0MSIsICJzdGVwX2xhYmVsIjog
bnVsbH1dLCAiZXhwb3J0X2tleSI6ICJsZGFwX3NlYXJjaCIsICJsYXN0X21vZGlmaWVkX2J5Ijog
eyJkaXNwbGF5X25hbWUiOiAiUmVzaWxpZW50IFN5c2FkbWluIiwgInR5cGUiOiAidXNlciIsICJp
ZCI6IDQsICJuYW1lIjogImFAYS5jb20ifSwgIm5hbWUiOiAibGRhcF9zZWFyY2giLCAidmVyc2lv
biI6IDEsICJ3b3JrZmxvd3MiOiBbeyJwcm9ncmFtbWF0aWNfbmFtZSI6ICJ3Zl9sZGFwX3NlYXJj
aCIsICJvYmplY3RfdHlwZSI6ICJhcnRpZmFjdCIsICJ1dWlkIjogbnVsbCwgImFjdGlvbnMiOiBb
XSwgIm5hbWUiOiAiRXhhbXBsZTogTERBUCBTZWFyY2ggLSBQZXJzb24iLCAid29ya2Zsb3dfaWQi
OiAxLCAiZGVzY3JpcHRpb24iOiBudWxsfV0sICJsYXN0X21vZGlmaWVkX3RpbWUiOiAxNTI5OTM2
ODcyMTgyLCAiZGVzdGluYXRpb25faGFuZGxlIjogImxkYXAiLCAiaWQiOiAxLCAiZGVzY3JpcHRp
b24iOiB7ImNvbnRlbnQiOiAiUmVzaWxpZW50IEZ1bmN0aW9uIHRvIGRvIGEgc2VhcmNoIG9yIHF1
ZXJ5IGFnYWluc3QgYW4gTERBUCBzZXJ2ZXIuIiwgImZvcm1hdCI6ICJ0ZXh0In19XSwgIm5vdGlm
aWNhdGlvbnMiOiBudWxsLCAicmVndWxhdG9ycyI6IG51bGwsICJpbmNpZGVudF90eXBlcyI6IFt7
ImNyZWF0ZV9kYXRlIjogMTUzMDAwNjg2Mzc0MiwgImRlc2NyaXB0aW9uIjogIkN1c3RvbWl6YXRp
b24gUGFja2FnZXMgKGludGVybmFsKSIsICJleHBvcnRfa2V5IjogIkN1c3RvbWl6YXRpb24gUGFj
a2FnZXMgKGludGVybmFsKSIsICJpZCI6IDAsICJuYW1lIjogIkN1c3RvbWl6YXRpb24gUGFja2Fn
ZXMgKGludGVybmFsKSIsICJ1cGRhdGVfZGF0ZSI6IDE1MzAwMDY4NjM3NDIsICJ1dWlkIjogImJm
ZWVjMmQ0LTM3NzAtMTFlOC1hZDM5LTRhMDAwNDA0NGFhMCIsICJlbmFibGVkIjogZmFsc2UsICJz
eXN0ZW0iOiBmYWxzZSwgInBhcmVudF9pZCI6IG51bGwsICJoaWRkZW4iOiBmYWxzZX1dLCAic2Ny
aXB0cyI6IFtdLCAidHlwZXMiOiBbeyJkaXNwbGF5X25hbWUiOiAiTERBUCBRdWVyeSByZXN1bHRz
IiwgInV1aWQiOiAiMjc5NTM3MmEtOWU1ZS00YmI0LWEwN2EtMGNkMzg5ZjBjYjI5IiwgInR5cGVf
aWQiOiA4LCAiZmllbGRzIjogeyJ0ZWxlcGhvbmVfbnVtYmVyIjogeyJvcGVyYXRpb25zIjogW10s
ICJ0eXBlX2lkIjogMTAwMCwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidGV4dCI6ICJUZWxlcGhv
bmUgTnVtYmVyIiwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAicHJlZml4IjogbnVsbCwgImNoYW5n
ZWFibGUiOiB0cnVlLCAiaWQiOiAxNTcsICJyZWFkX29ubHkiOiBmYWxzZSwgInV1aWQiOiAiMjZl
NDYwMjUtMTY2Yy00NTA1LWEwNTEtZGVlNDZlMTZkZjk4IiwgImNob3NlbiI6IGZhbHNlLCAiaW5w
dXRfdHlwZSI6ICJ0ZXh0IiwgInRvb2x0aXAiOiAiIiwgIndpZHRoIjogMTcyLCAiaW50ZXJuYWwi
OiBmYWxzZSwgInJpY2hfdGV4dCI6IGZhbHNlLCAidGVtcGxhdGVzIjogW10sICJleHBvcnRfa2V5
IjogImxkYXBfcXVlcnlfcmVzdWx0cy90ZWxlcGhvbmVfbnVtYmVyIiwgImhpZGVfbm90aWZpY2F0
aW9uIjogZmFsc2UsICJwbGFjZWhvbGRlciI6ICIiLCAibmFtZSI6ICJ0ZWxlcGhvbmVfbnVtYmVy
IiwgImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6IGZhbHNlLCAidmFsdWVzIjogW10sICJvcmRl
ciI6IDR9LCAic3VybmFtZSI6IHsib3BlcmF0aW9ucyI6IFtdLCAidHlwZV9pZCI6IDEwMDAsICJv
cGVyYXRpb25fcGVybXMiOiB7fSwgInRleHQiOiAiU3VybmFtZSIsICJibGFua19vcHRpb24iOiBm
YWxzZSwgInByZWZpeCI6IG51bGwsICJjaGFuZ2VhYmxlIjogdHJ1ZSwgImlkIjogMTU1LCAicmVh
ZF9vbmx5IjogZmFsc2UsICJ1dWlkIjogIjU5NWY5NjQxLTQ2YzYtNDg1ZC04MDA1LThhZjk4ZWIy
YTk2YyIsICJjaG9zZW4iOiBmYWxzZSwgImlucHV0X3R5cGUiOiAidGV4dCIsICJ0b29sdGlwIjog
IiIsICJ3aWR0aCI6IDExMywgImludGVybmFsIjogZmFsc2UsICJyaWNoX3RleHQiOiBmYWxzZSwg
InRlbXBsYXRlcyI6IFtdLCAiZXhwb3J0X2tleSI6ICJsZGFwX3F1ZXJ5X3Jlc3VsdHMvc3VybmFt
ZSIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAicGxhY2Vob2xkZXIiOiAiIiwgIm5hbWUi
OiAic3VybmFtZSIsICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgInZhbHVlcyI6
IFtdLCAib3JkZXIiOiAyfSwgImZ1bGxuYW1lIjogeyJvcGVyYXRpb25zIjogW10sICJ0eXBlX2lk
IjogMTAwMCwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidGV4dCI6ICJGdWxsbmFtZSIsICJibGFu
a19vcHRpb24iOiBmYWxzZSwgInByZWZpeCI6IG51bGwsICJjaGFuZ2VhYmxlIjogdHJ1ZSwgImlk
IjogMTU2LCAicmVhZF9vbmx5IjogZmFsc2UsICJ1dWlkIjogImI0NjlkMGZlLWFiMzItNDEwMC1i
YmE1LTlkNDAyMTJmZDEyNyIsICJjaG9zZW4iOiBmYWxzZSwgImlucHV0X3R5cGUiOiAidGV4dCIs
ICJ0b29sdGlwIjogIiIsICJ3aWR0aCI6IDExMywgImludGVybmFsIjogZmFsc2UsICJyaWNoX3Rl
eHQiOiBmYWxzZSwgInRlbXBsYXRlcyI6IFtdLCAiZXhwb3J0X2tleSI6ICJsZGFwX3F1ZXJ5X3Jl
c3VsdHMvZnVsbG5hbWUiLCAiaGlkZV9ub3RpZmljYXRpb24iOiBmYWxzZSwgInBsYWNlaG9sZGVy
IjogIiIsICJuYW1lIjogImZ1bGxuYW1lIiwgImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6IGZh
bHNlLCAidmFsdWVzIjogW10sICJvcmRlciI6IDF9LCAiZW1haWxfYWRkcmVzcyI6IHsib3BlcmF0
aW9ucyI6IFtdLCAidHlwZV9pZCI6IDEwMDAsICJvcGVyYXRpb25fcGVybXMiOiB7fSwgInRleHQi
OiAiRW1haWwgYWRkcmVzcyIsICJibGFua19vcHRpb24iOiBmYWxzZSwgInByZWZpeCI6IG51bGws
ICJjaGFuZ2VhYmxlIjogdHJ1ZSwgImlkIjogMTU5LCAicmVhZF9vbmx5IjogZmFsc2UsICJ1dWlk
IjogIjM5ODU5MmNiLTRkNTItNDczZS05MjZmLTMyYzdkMTJjYTExYyIsICJjaG9zZW4iOiBmYWxz
ZSwgImlucHV0X3R5cGUiOiAidGV4dCIsICJ0b29sdGlwIjogIiIsICJ3aWR0aCI6IDEzMiwgImlu
dGVybmFsIjogZmFsc2UsICJyaWNoX3RleHQiOiBmYWxzZSwgInRlbXBsYXRlcyI6IFtdLCAiZXhw
b3J0X2tleSI6ICJsZGFwX3F1ZXJ5X3Jlc3VsdHMvZW1haWxfYWRkcmVzcyIsICJoaWRlX25vdGlm
aWNhdGlvbiI6IGZhbHNlLCAicGxhY2Vob2xkZXIiOiAiIiwgIm5hbWUiOiAiZW1haWxfYWRkcmVz
cyIsICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgInZhbHVlcyI6IFtdLCAib3Jk
ZXIiOiAzfSwgInVpZCI6IHsib3BlcmF0aW9ucyI6IFtdLCAidHlwZV9pZCI6IDEwMDAsICJvcGVy
YXRpb25fcGVybXMiOiB7fSwgInRleHQiOiAiVUlEIiwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAi
cHJlZml4IjogbnVsbCwgImNoYW5nZWFibGUiOiB0cnVlLCAiaWQiOiAxNTgsICJyZWFkX29ubHki
OiBmYWxzZSwgInV1aWQiOiAiOTQ1OTVlMjAtNDFjOC00ZTM3LWFkZmYtNzBlOTQ5YWE4ZGRjIiwg
ImNob3NlbiI6IGZhbHNlLCAiaW5wdXRfdHlwZSI6ICJ0ZXh0IiwgInRvb2x0aXAiOiAiIiwgIndp
ZHRoIjogMTEzLCAiaW50ZXJuYWwiOiBmYWxzZSwgInJpY2hfdGV4dCI6IGZhbHNlLCAidGVtcGxh
dGVzIjogW10sICJleHBvcnRfa2V5IjogImxkYXBfcXVlcnlfcmVzdWx0cy91aWQiLCAiaGlkZV9u
b3RpZmljYXRpb24iOiBmYWxzZSwgInBsYWNlaG9sZGVyIjogIiIsICJuYW1lIjogInVpZCIsICJk
ZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgInZhbHVlcyI6IFtdLCAib3JkZXIiOiAw
fX0sICJwYXJlbnRfdHlwZXMiOiBbImluY2lkZW50Il0sICJ0eXBlX25hbWUiOiAibGRhcF9xdWVy
eV9yZXN1bHRzIiwgImV4cG9ydF9rZXkiOiAibGRhcF9xdWVyeV9yZXN1bHRzIiwgImZvcl9jdXN0
b21fZmllbGRzIjogZmFsc2UsICJhY3Rpb25zIjogW10sICJwcm9wZXJ0aWVzIjogeyJmb3Jfd2hv
IjogW10sICJjYW5fZGVzdHJveSI6IGZhbHNlLCAiY2FuX2NyZWF0ZSI6IGZhbHNlfSwgImZvcl9h
Y3Rpb25zIjogZmFsc2UsICJmb3Jfbm90aWZpY2F0aW9ucyI6IGZhbHNlLCAic2NyaXB0cyI6IFtd
LCAiaWQiOiBudWxsfV0sICJtZXNzYWdlX2Rlc3RpbmF0aW9ucyI6IFt7InV1aWQiOiAiMjdlNzNi
MmYtNjkxMC00YmQ4LThiNTItNDgxZmVjNDNiNWM2IiwgImV4cG9ydF9rZXkiOiAibGRhcCIsICJu
YW1lIjogImxkYXAiLCAiZGVzdGluYXRpb25fdHlwZSI6IDAsICJwcm9ncmFtbWF0aWNfbmFtZSI6
ICJsZGFwIiwgImV4cGVjdF9hY2siOiB0cnVlLCAidXNlcnMiOiBbImFAYS5jb20iXX1dLCAiaW5j
aWRlbnRfYXJ0aWZhY3RfdHlwZXMiOiBbXSwgInJvbGVzIjogW10sICJmaWVsZHMiOiBbeyJvcGVy
YXRpb25zIjogW10sICJyZWFkX29ubHkiOiB0cnVlLCAidXVpZCI6ICJjM2YwZTNlZC0yMWUxLTRk
NTMtYWZmYi1mZTVjYTMzMDhjY2EiLCAidGVtcGxhdGVzIjogW10sICJ0eXBlX2lkIjogMCwgImNo
b3NlbiI6IGZhbHNlLCAidGV4dCI6ICJTaW11bGF0aW9uIiwgImRlZmF1bHRfY2hvc2VuX2J5X3Nl
cnZlciI6IGZhbHNlLCAiZXhwb3J0X2tleSI6ICJpbmNpZGVudC9pbmNfdHJhaW5pbmciLCAidG9v
bHRpcCI6ICJXaGV0aGVyIHRoZSBpbmNpZGVudCBpcyBhIHNpbXVsYXRpb24gb3IgYSByZWd1bGFy
IGluY2lkZW50LiAgVGhpcyBmaWVsZCBpcyByZWFkLW9ubHkuIiwgInJpY2hfdGV4dCI6IGZhbHNl
LCAib3BlcmF0aW9uX3Blcm1zIjoge30sICJwcmVmaXgiOiBudWxsLCAiaW50ZXJuYWwiOiBmYWxz
ZSwgInZhbHVlcyI6IFtdLCAiYmxhbmtfb3B0aW9uIjogZmFsc2UsICJpbnB1dF90eXBlIjogImJv
b2xlYW4iLCAiY2hhbmdlYWJsZSI6IHRydWUsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAi
aWQiOiAxMTEsICJuYW1lIjogImluY190cmFpbmluZyJ9LCB7Im9wZXJhdGlvbnMiOiBbXSwgInR5
cGVfaWQiOiAxMSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidGV4dCI6ICJsZGFwX3NlYXJjaF9m
aWx0ZXIiLCAiYmxhbmtfb3B0aW9uIjogZmFsc2UsICJwcmVmaXgiOiBudWxsLCAiY2hhbmdlYWJs
ZSI6IHRydWUsICJpZCI6IDE2MSwgInJlYWRfb25seSI6IGZhbHNlLCAidXVpZCI6ICJhNTUwZTAx
MS0zMThlLTRiN2MtYTRlZS03ZWMwMTFlZTA0NDciLCAiY2hvc2VuIjogZmFsc2UsICJpbnB1dF90
eXBlIjogInRleHRhcmVhIiwgInRvb2x0aXAiOiAiVGhlIGZpbHRlciBvZiB0aGUgTERBUCBzZWFy
Y2ggcmVxdWVzdCIsICJpbnRlcm5hbCI6IGZhbHNlLCAicmljaF90ZXh0IjogZmFsc2UsICJ0ZW1w
bGF0ZXMiOiBbXSwgImV4cG9ydF9rZXkiOiAiX19mdW5jdGlvbi9sZGFwX3NlYXJjaF9maWx0ZXIi
LCAiaGlkZV9ub3RpZmljYXRpb24iOiBmYWxzZSwgInBsYWNlaG9sZGVyIjogIiIsICJuYW1lIjog
ImxkYXBfc2VhcmNoX2ZpbHRlciIsICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwg
InZhbHVlcyI6IFtdfSwgeyJvcGVyYXRpb25zIjogW10sICJ0eXBlX2lkIjogMTEsICJvcGVyYXRp
b25fcGVybXMiOiB7fSwgInRleHQiOiAibGRhcF9zZWFyY2hfYmFzZSIsICJibGFua19vcHRpb24i
OiBmYWxzZSwgInByZWZpeCI6IG51bGwsICJjaGFuZ2VhYmxlIjogdHJ1ZSwgImlkIjogMTYwLCAi
cmVhZF9vbmx5IjogZmFsc2UsICJ1dWlkIjogImU3MmU0NjllLTM0MmUtNGVmOS04ODYzLWQ4NGRl
ZTI3NzU4YSIsICJjaG9zZW4iOiBmYWxzZSwgImlucHV0X3R5cGUiOiAidGV4dCIsICJ0b29sdGlw
IjogIlRoZSBiYXNlIG9mIHRoZSBMREFQIHNlYXJjaCByZXF1ZXN0LiIsICJpbnRlcm5hbCI6IGZh
bHNlLCAicmljaF90ZXh0IjogZmFsc2UsICJ0ZW1wbGF0ZXMiOiBbXSwgImV4cG9ydF9rZXkiOiAi
X19mdW5jdGlvbi9sZGFwX3NlYXJjaF9iYXNlIiwgImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2Us
ICJwbGFjZWhvbGRlciI6ICIiLCAibmFtZSI6ICJsZGFwX3NlYXJjaF9iYXNlIiwgImRlZmF1bHRf
Y2hvc2VuX2J5X3NlcnZlciI6IGZhbHNlLCAidmFsdWVzIjogW119LCB7Im9wZXJhdGlvbnMiOiBb
XSwgInR5cGVfaWQiOiAxMSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidGV4dCI6ICJsZGFwX3Bh
cmFtIiwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAicHJlZml4IjogbnVsbCwgImNoYW5nZWFibGUi
OiB0cnVlLCAiaWQiOiAxNjIsICJyZWFkX29ubHkiOiBmYWxzZSwgInV1aWQiOiAiODQ2ZGZmNGYt
NzFiNi00YWIyLWJmZDMtZjlmZjAyZTI0NjQxIiwgImNob3NlbiI6IGZhbHNlLCAiaW5wdXRfdHlw
ZSI6ICJ0ZXh0IiwgInRvb2x0aXAiOiAiUGFyYW1ldGVyIHVzZWQgaW4gTERBUCBzZWFyY2giLCAi
aW50ZXJuYWwiOiBmYWxzZSwgInJpY2hfdGV4dCI6IGZhbHNlLCAidGVtcGxhdGVzIjogW10sICJl
eHBvcnRfa2V5IjogIl9fZnVuY3Rpb24vbGRhcF9wYXJhbSIsICJoaWRlX25vdGlmaWNhdGlvbiI6
IGZhbHNlLCAicGxhY2Vob2xkZXIiOiAiIiwgIm5hbWUiOiAibGRhcF9wYXJhbSIsICJkZWZhdWx0
X2Nob3Nlbl9ieV9zZXJ2ZXIiOiBmYWxzZSwgInZhbHVlcyI6IFtdfSwgeyJvcGVyYXRpb25zIjog
W10sICJ0eXBlX2lkIjogMTEsICJvcGVyYXRpb25fcGVybXMiOiB7fSwgInRleHQiOiAibGRhcF9z
ZWFyY2hfYXR0cmlidXRlcyIsICJibGFua19vcHRpb24iOiBmYWxzZSwgInByZWZpeCI6IG51bGws
ICJjaGFuZ2VhYmxlIjogdHJ1ZSwgImlkIjogMTYzLCAicmVhZF9vbmx5IjogZmFsc2UsICJ1dWlk
IjogImQ0NDlhYTUzLTIwZTgtNGFmMC1iYjYxLTUyNWY0MjA4YjA3YiIsICJjaG9zZW4iOiBmYWxz
ZSwgImlucHV0X3R5cGUiOiAidGV4dCIsICJ0b29sdGlwIjogIkEgc2luZ2xlIGF0dHJpYnV0ZSBv
ciBhIGxpc3Qgb2YgYXR0cmlidXRlcyB0byBiZSByZXR1cm5lZCBieSB0aGUgTERBUCBzZWFyY2gg
IiwgImludGVybmFsIjogZmFsc2UsICJyaWNoX3RleHQiOiBmYWxzZSwgInRlbXBsYXRlcyI6IFtd
LCAiZXhwb3J0X2tleSI6ICJfX2Z1bmN0aW9uL2xkYXBfc2VhcmNoX2F0dHJpYnV0ZXMiLCAiaGlk
ZV9ub3RpZmljYXRpb24iOiBmYWxzZSwgInBsYWNlaG9sZGVyIjogIiIsICJuYW1lIjogImxkYXBf
c2VhcmNoX2F0dHJpYnV0ZXMiLCAiZGVmYXVsdF9jaG9zZW5fYnlfc2VydmVyIjogZmFsc2UsICJ2
YWx1ZXMiOiBbXX1dLCAib3ZlcnJpZGVzIjogW10sICJleHBvcnRfZGF0ZSI6IDE1Mjk5NDg5NDk4
NzB9
"""
)
| 70.429054 | 87 | 0.972418 | 364 | 20,847 | 55.634615 | 0.928571 | 0.003457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110609 | 0.023361 | 20,847 | 295 | 88 | 70.667797 | 0.884037 | 0.025615 | 0 | 0 | 1 | 0 | 0.987768 | 0.974647 | 0 | 1 | 0 | 0 | 0 | 1 | 0.003745 | false | 0 | 0.011236 | 0 | 0.014981 | 0.003745 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
864b8b2cfa373961e305b43c0827a3de7f01b3db | 363 | py | Python | the_news_today/store/models.py | lukewhyte/inTheNews | 1f4ce738459910db711f504a49f93e59c2da6076 | [
"MIT"
] | null | null | null | the_news_today/store/models.py | lukewhyte/inTheNews | 1f4ce738459910db711f504a49f93e59c2da6076 | [
"MIT"
] | null | null | null | the_news_today/store/models.py | lukewhyte/inTheNews | 1f4ce738459910db711f504a49f93e59c2da6076 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.db import models
from django.utils import timezone
class Headlines(models.Model):
BBC = models.CharField(max_length=200)
CNN = models.CharField(max_length=200)
aljazeera = models.CharField(max_length=200)
fox = models.CharField(max_length=200)
date = models.DateField(default=timezone.now)
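
# Usage sketch (hypothetical headline values, not part of the original module):
#
#   Headlines.objects.create(
#       BBC="...", CNN="...", aljazeera="...", fox="...",
#   )  # `date` defaults to today via timezone.now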
| 33 | 49 | 0.774105 | 49 | 363 | 5.55102 | 0.510204 | 0.220588 | 0.264706 | 0.352941 | 0.397059 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038339 | 0.137741 | 363 | 10 | 50 | 36.3 | 0.830671 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
86abba02936c9fe230e1022cd0193edc8c5e2b1b | 104 | py | Python | django_airavata/apps/auth/admin.py | sairohithA007/airavata-django-portal | fe18d65802f02c9faf805c8edfdee3341c66e93a | [
"Apache-2.0"
] | 19 | 2017-09-04T00:36:52.000Z | 2022-01-24T08:44:22.000Z | django_airavata/apps/auth/admin.py | sairohithA007/airavata-django-portal | fe18d65802f02c9faf805c8edfdee3341c66e93a | [
"Apache-2.0"
] | 35 | 2017-10-17T02:36:01.000Z | 2022-03-09T04:46:57.000Z | django_airavata/apps/auth/admin.py | sairohithA007/airavata-django-portal | fe18d65802f02c9faf805c8edfdee3341c66e93a | [
"Apache-2.0"
] | 38 | 2017-09-15T14:17:42.000Z | 2021-12-15T17:11:31.000Z | from django.contrib import admin
from .models import EmailTemplate
admin.site.register(EmailTemplate)
| 17.333333 | 34 | 0.836538 | 13 | 104 | 6.692308 | 0.692308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105769 | 104 | 5 | 35 | 20.8 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
86bc2398c7d2224e39ece6dee47397c8d6b0144c | 117 | py | Python | kafka/__init__.py | olokshyn/CAR | 8dd41ec58216a5c4528ad50c6ffa275fb9f7ca3b | [
"MIT"
] | null | null | null | kafka/__init__.py | olokshyn/CAR | 8dd41ec58216a5c4528ad50c6ffa275fb9f7ca3b | [
"MIT"
] | null | null | null | kafka/__init__.py | olokshyn/CAR | 8dd41ec58216a5c4528ad50c6ffa275fb9f7ca3b | [
"MIT"
] | null | null | null | from .producer import KafkaProducer
from .consumer import KafkaConsumer, OFFSET_BEGINNING, OFFSET_END, OFFSET_LATEST
| 39 | 80 | 0.863248 | 14 | 117 | 7 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094017 | 117 | 2 | 81 | 58.5 | 0.924528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
86bfa822465c5b4460081bb815306ae3ddb39b55 | 67 | py | Python | elastalert/queries/__init__.py | JasperJuergensen/elastalert | 8033361083b5edad1845ad9b307b8280ef278da7 | [
"Apache-2.0"
] | 2 | 2020-06-19T13:02:19.000Z | 2021-02-11T19:35:46.000Z | elastalert/queries/__init__.py | JasperJuergensen/elastalert | 8033361083b5edad1845ad9b307b8280ef278da7 | [
"Apache-2.0"
] | 9 | 2020-04-09T15:40:37.000Z | 2022-01-19T17:49:22.000Z | elastalert/queries/__init__.py | JasperJuergensen/elastalert | 8033361083b5edad1845ad9b307b8280ef278da7 | [
"Apache-2.0"
] | null | null | null | # flake8: noqa
from elastalert.queries.base_query import BaseQuery
| 22.333333 | 51 | 0.835821 | 9 | 67 | 6.111111 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.104478 | 67 | 2 | 52 | 33.5 | 0.9 | 0.179104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
86faf67a786cdd39e91912fdf90e140bd9f89176 | 1,994 | py | Python | octave_conv_block.py | boxiXia/keras-octconv | 09b57e10c096fd71f23743a5800e9720e4338c94 | [
"MIT"
] | null | null | null | octave_conv_block.py | boxiXia/keras-octconv | 09b57e10c096fd71f23743a5800e9720e4338c94 | [
"MIT"
] | null | null | null | octave_conv_block.py | boxiXia/keras-octconv | 09b57e10c096fd71f23743a5800e9720e4338c94 | [
"MIT"
] | null | null | null | from tensorflow.keras.layers import ReLU, BatchNormalization
from tensorflow.keras import backend as K
from octave_conv import initial_octconv, final_octconv, octconv_block
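# Convenience wrappers pairing each octave convolution with batch normalization
# and an optional ReLU, mirroring the standard Conv-BN-ReLU block pattern.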
def initial_oct_conv_bn_relu(ip, filters, kernel_size=(3, 3), strides=(1, 1),
alpha=0.5, padding='same', dilation=None, bias=False,
activation=True):
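    # BatchNorm normalizes over channels; the axis depends on the backend's
    # image data format (channels_first vs. channels_last).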
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
x_high, x_low = initial_octconv(ip, filters, kernel_size, strides, alpha,
padding, dilation, bias)
relu = ReLU()
x_high = BatchNormalization(axis=channel_axis)(x_high)
if activation:
x_high = relu(x_high)
x_low = BatchNormalization(axis=channel_axis)(x_low)
if activation:
x_low = relu(x_low)
return x_high, x_low
def final_oct_conv_bn_relu(ip_high, ip_low, filters, kernel_size=(3, 3), strides=(1, 1),
padding='same', dilation=None, bias=False, activation=True):
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
x = final_octconv(ip_high, ip_low, filters, kernel_size, strides,
padding, dilation, bias)
x = BatchNormalization(axis=channel_axis)(x)
if activation:
x = ReLU()(x)
return x
def oct_conv_bn_relu(ip_high, ip_low, filters, kernel_size=(3, 3), strides=(1, 1),
alpha=0.5, padding='same', dilation=None, bias=False, activation=True):
channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
x_high, x_low = octconv_block(ip_high, ip_low, filters, kernel_size, strides, alpha,
padding, dilation, bias)
relu = ReLU()
x_high = BatchNormalization(axis=channel_axis)(x_high)
if activation:
x_high = relu(x_high)
x_low = BatchNormalization(axis=channel_axis)(x_low)
if activation:
x_low = relu(x_low)
return x_high, x_low
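# Illustrative usage (assumes a channels-last Keras input tensor `ip`):
#   x_h, x_l = initial_oct_conv_bn_relu(ip, filters=32)
#   x_h, x_l = oct_conv_bn_relu(x_h, x_l, filters=64)
#   out = final_oct_conv_bn_relu(x_h, x_l, filters=128)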
| 32.688525 | 92 | 0.640421 | 274 | 1,994 | 4.394161 | 0.182482 | 0.049834 | 0.084718 | 0.044851 | 0.790698 | 0.75 | 0.75 | 0.75 | 0.711794 | 0.711794 | 0 | 0.014835 | 0.256269 | 1,994 | 60 | 93 | 33.233333 | 0.797033 | 0 | 0 | 0.589744 | 0 | 0 | 0.027081 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
812f0366408ce239a236138da036fe1f4f8555df | 7,418 | py | Python | mkalgo/data.py | saifuddin778/mkalgo | 3271c0507680cb62ded3c17c76aee1fbd8050e0d | [
"MIT"
] | 21 | 2017-05-06T06:38:46.000Z | 2021-12-14T10:04:06.000Z | mkalgo/data.py | saifuddin778/mkalgo | 3271c0507680cb62ded3c17c76aee1fbd8050e0d | [
"MIT"
] | 2 | 2018-05-24T04:27:49.000Z | 2021-03-01T17:26:34.000Z | mkalgo/data.py | saifuddin778/mkalgo | 3271c0507680cb62ded3c17c76aee1fbd8050e0d | [
"MIT"
] | 12 | 2017-07-10T05:37:32.000Z | 2022-01-11T06:26:17.000Z |
def hospital(n=2000):
""" Time series of hospital wait times from www.microprediction.com """
    assert isinstance(n, int)
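    # [:n] caps the length; when n exceeds the stored series, the full series is returned.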
return [146.0, 126.0, 126.0, 124.0, 124.0, 157.0, 157.0, 155.0, 155.0, 230.0, 230.0, 265.0, 265.0, 221.0, 221.0,
194.0, 194.0, 181.0, 248.0, 248.0, 248.0, 214.0, 214.0, 172.0, 172.0, 201.0, 201.0, 216.0, 216.0, 179.0,
179.0, 232.0, 232.0, 202.0, 202.0, 195.0, 195.0, 190.0, 190.0, 185.0, 185.0, 201.0, 201.0, 221.0, 221.0,
214.0, 214.0, 213.0, 213.0, 215.0, 215.0, 199.0, 199.0, 183.0, 183.0, 201.0, 201.0, 197.0, 197.0, 181.0,
181.0, 177.0, 177.0, 193.0, 193.0, 179.0, 179.0, 192.0, 192.0, 177.0, 177.0, 187.0, 187.0, 177.0, 177.0,
201.0, 201.0, 209.0, 209.0, 202.0, 202.0, 188.0, 188.0, 145.0, 145.0, 139.0, 139.0, 150.0, 150.0, 136.0,
136.0, 114.0, 114.0, 96.0, 96.0, 78.0, 78.0, 56.0, 56.0, 53.0, 53.0, 50.0, 50.0, 43.0, 43.0, 37.0, 37.0,
43.0, 43.0, 53.0, 53.0, 58.0, 58.0, 60.0, 60.0, 50.0, 50.0, 42.0, 42.0, 40.0, 40.0, 36.0, 36.0, 42.0, 42.0,
39.0, 39.0, 30.0, 30.0, 14.0, 14.0, 23.0, 23.0, 23.0, 23.0, 10.0, 10.0, 16.0, 16.0, 15.0, 15.0, 5.0, 5.0,
49.0, 49.0, 72.0, 72.0, 97.0, 97.0, 107.0, 107.0, 81.0, 81.0, 82.0, 82.0, 109.0, 109.0, 123.0, 123.0, 99.0,
99.0, 102.0, 102.0, 132.0, 132.0, 154.0, 154.0, 172.0, 172.0, 153.0, 153.0, 149.0, 149.0, 150.0, 150.0,
136.0, 136.0, 130.0, 130.0, 118.0, 118.0, 92.0, 92.0, 92.0, 92.0, 105.0, 105.0, 82.0, 82.0, 78.0, 78.0,
72.0, 72.0, 58.0, 58.0, 53.0, 53.0, 60.0, 60.0, 50.0, 50.0, 43.0, 43.0, 38.0, 38.0, 32.0, 32.0, 30.0, 30.0,
32.0, 32.0, 26.0, 26.0, 36.0, 36.0, 37.0, 37.0, 42.0, 42.0, 51.0, 51.0, 51.0, 51.0, 54.0, 54.0, 32.0, 32.0,
33.0, 33.0, 35.0, 35.0, 32.0, 32.0, 51.0, 51.0, 67.0, 67.0, 56.0, 56.0, 51.0, 51.0, 41.0, 41.0, 28.0, 28.0,
31.0, 31.0, 38.0, 38.0, 45.0, 45.0, 53.0, 53.0, 51.0, 51.0, 65.0, 65.0, 64.0, 64.0, 52.0, 52.0, 39.0, 39.0,
52.0, 52.0, 54.0, 54.0, 82.0, 82.0, 107.0, 107.0, 97.0, 97.0, 90.0, 90.0, 79.0, 79.0, 80.0, 80.0, 75.0,
75.0, 85.0, 85.0, 62.0, 62.0, 58.0, 58.0, 55.0, 55.0, 50.0, 50.0, 56.0, 56.0, 62.0, 62.0, 74.0, 74.0, 77.0,
77.0, 66.0, 66.0, 80.0, 80.0, 88.0, 88.0, 81.0, 81.0, 92.0, 92.0, 151.0, 151.0, 172.0, 172.0, 173.0, 173.0,
233.0, 233.0, 262.0, 262.0, 262.0, 262.0, 212.0, 212.0, 224.0, 224.0, 277.0, 277.0, 248.0, 248.0, 234.0,
234.0, 236.0, 236.0, 218.0, 218.0, 169.0, 169.0, 174.0, 174.0, 248.0, 248.0, 231.0, 231.0, 231.0, 231.0,
233.0, 233.0, 187.0, 187.0, 174.0, 174.0, 242.0, 242.0, 176.0, 176.0, 158.0, 158.0, 159.0, 159.0, 168.0,
168.0, 183.0, 183.0, 172.0, 172.0, 167.0, 167.0, 184.0, 184.0, 173.0, 173.0, 166.0, 166.0, 166.0, 166.0,
167.0, 167.0, 164.0, 164.0, 157.0, 157.0, 147.0, 147.0, 146.0, 146.0, 143.0, 143.0, 113.0, 113.0, 94.0,
94.0, 80.0, 80.0, 77.0, 77.0, 65.0, 65.0, 73.0, 73.0, 74.0, 74.0, 60.0, 60.0, 45.0, 45.0, 36.0, 36.0, 91.0,
91.0, 91.0, 91.0, 163.0, 163.0, 163.0, 163.0, 222.0, 222.0, 192.0, 192.0, 194.0, 194.0, 218.0, 218.0, 250.0,
250.0, 236.0, 236.0, 241.0, 241.0, 216.0, 216.0, 191.0, 191.0, 194.0, 194.0, 189.0, 189.0, 164.0, 164.0,
157.0, 157.0, 174.0, 174.0, 219.0, 219.0, 192.0, 192.0, 152.0, 152.0, 153.0, 153.0, 141.0, 141.0, 133.0,
133.0, 145.0, 145.0, 170.0, 170.0, 153.0, 153.0, 150.0, 150.0, 173.0, 173.0, 171.0, 171.0, 162.0, 162.0,
166.0, 166.0, 148.0, 148.0, 141.0, 141.0, 151.0, 151.0, 131.0, 115.0, 115.0, 103.0, 103.0, 103.0, 75.0,
75.0, 64.0, 64.0, 50.0, 50.0, 48.0, 48.0, 36.0, 36.0, 23.0, 23.0, 30.0, 30.0, 38.0, 38.0, 24.0, 24.0, 49.0,
49.0, 44.0, 44.0, 44.0, 44.0, 50.0, 50.0, 48.0, 48.0, 61.0, 61.0, 65.0, 65.0, 77.0, 77.0, 66.0, 66.0, 51.0,
51.0, 52.0, 52.0, 54.0, 54.0, 62.0, 62.0, 68.0, 68.0, 66.0, 66.0, 58.0, 58.0, 82.0, 82.0, 98.0, 98.0, 95.0,
95.0, 102.0, 102.0, 94.0, 94.0, 78.0, 78.0, 68.0, 68.0, 61.0, 61.0, 56.0, 56.0, 63.0, 63.0, 83.0, 83.0,
87.0, 87.0, 80.0, 80.0, 77.0, 77.0, 74.0, 74.0, 68.0, 68.0, 63.0, 63.0, 52.0, 52.0, 61.0, 61.0, 70.0, 70.0,
72.0, 72.0, 65.0, 65.0, 63.0, 63.0, 61.0, 61.0, 44.0, 44.0, 37.0, 37.0, 46.0, 46.0, 43.0, 43.0, 55.0, 55.0,
68.0, 68.0, 67.0, 67.0, 59.0, 59.0, 55.0, 55.0, 66.0, 66.0, 69.0, 69.0, 96.0, 96.0, 156.0, 156.0, 190.0,
190.0, 162.0, 162.0, 187.0, 187.0, 193.0, 193.0, 191.0, 191.0, 238.0, 238.0, 256.0, 256.0, 295.0, 295.0,
264.0, 264.0, 261.0, 261.0, 265.0, 265.0, 244.0, 244.0, 262.0, 262.0, 271.0, 271.0, 233.0, 233.0, 217.0,
217.0, 213.0, 213.0, 205.0, 205.0, 162.0, 162.0, 160.0, 160.0, 188.0, 188.0, 170.0, 170.0, 143.0, 143.0,
140.0, 140.0, 146.0, 146.0, 121.0, 121.0, 94.0, 94.0, 78.0, 78.0, 75.0, 75.0, 64.0, 64.0, 72.0, 72.0, 80.0,
80.0, 71.0, 71.0, 62.0, 62.0, 56.0, 56.0, 48.0, 48.0, 49.0, 49.0, 54.0, 54.0, 58.0, 58.0, 59.0, 59.0, 72.0,
72.0, 81.0, 81.0, 117.0, 117.0, 113.0, 113.0, 153.0, 153.0, 173.0, 173.0, 128.0, 128.0, 118.0, 118.0, 125.0,
125.0, 102.0, 102.0, 120.0, 120.0, 139.0, 139.0, 123.0, 123.0, 196.0, 196.0, 241.0, 241.0, 198.0, 198.0,
169.0, 169.0, 182.0, 182.0, 161.0, 161.0, 119.0, 119.0, 198.0, 198.0, 192.0, 192.0, 198.0, 198.0, 228.0,
228.0, 196.0, 196.0, 187.0, 187.0, 201.0, 201.0, 187.0, 187.0, 169.0, 169.0, 136.0, 136.0, 145.0, 145.0,
146.0, 146.0, 136.0, 136.0, 137.0, 137.0, 124.0, 124.0, 130.0, 130.0, 139.0, 139.0, 118.0, 118.0, 102.0,
102.0, 87.0, 87.0, 79.0, 79.0, 67.0, 67.0, 60.0, 60.0, 56.0, 56.0, 52.0, 52.0, 52.0, 52.0, 54.0, 54.0, 77.0,
77.0, 85.0, 85.0, 85.0, 85.0, 134.0, 134.0, 128.0, 128.0, 129.0, 129.0, 198.0, 198.0, 252.0, 252.0, 122.0,
122.0, 365.0, 365.0, 365.0, 365.0, 234.0, 234.0, 214.0, 214.0, 229.0, 229.0, 152.0, 152.0, 256.0, 256.0,
256.0, 256.0, 311.0, 311.0, 325.0, 325.0, 305.0, 305.0, 258.0, 258.0, 196.0, 196.0, 243.0, 243.0, 292.0,
292.0, 203.0, 203.0, 185.0, 185.0, 175.0, 175.0, 213.0, 213.0, 220.0, 220.0, 220.0, 220.0, 213.0, 213.0,
193.0, 193.0, 192.0, 192.0, 156.0, 156.0, 173.0, 173.0, 154.0, 154.0, 147.0, 147.0, 151.0, 151.0, 112.0,
112.0, 90.0, 90.0, 90.0, 90.0, 89.0, 89.0, 82.0, 82.0, 70.0, 70.0, 64.0, 64.0, 56.0, 56.0, 46.0, 46.0, 34.0,
34.0, 89.0, 89.0, 86.0, 86.0, 90.0, 90.0, 130.0, 130.0, 116.0, 116.0, 128.0, 128.0, 318.0, 318.0, 346.0,
346.0, 289.0, 289.0, 254.0, 254.0, 269.0, 269.0, 231.0, 231.0, 198.0, 198.0, 199.0, 199.0, 199.0, 199.0,
144.0, 144.0, 117.0, 117.0, 89.0, 89.0, 75.0, 75.0, 73.0, 73.0, 92.0, 92.0, 96.0, 96.0, 98.0, 98.0, 110.0,
110.0, 102.0, 102.0, 92.0, 92.0, 76.0, 76.0, 83.0, 83.0, 98.0, 98.0, 125.0, 125.0, 131.0, 131.0, 134.0,
134.0, 142.0, 112.0, 112.0, 112.0, 91.0, 94.0, 94.0, 94.0, 91.0, 91.0, 71.0, 71.0, 58.0, 58.0, 81.0, 81.0,
102.0, 102.0, 112.0, 112.0, 117.0, 117.0, 114.0, 114.0, 112.0, 112.0, 89.0, 89.0, 76.0, 76.0, 52.0, 52.0,
39.0, 39.0, 33.0, 33.0, 32.0, 32.0, 30.0, 30.0, 23.0, 23.0, 64.0, 64.0, 56.0, 56.0, 66.0, 66.0, 91.0, 91.0,
30.0, 30.0, 28.0, 28.0, 86.0, 86.0, 99.0, 99.0, 97.0, 97.0][:n]
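
if __name__ == "__main__":
    # Quick sanity check (illustrative): request only the first five observations.
    print(hospital(n=5))  # -> [146.0, 126.0, 126.0, 124.0, 124.0]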
| 110.716418 | 120 | 0.489485 | 2,023 | 7,418 | 1.794859 | 0.115175 | 0.01322 | 0.017626 | 0.01322 | 0.809694 | 0.161939 | 0.161939 | 0.09832 | 0.09832 | 0 | 0 | 0.625044 | 0.2378 | 7,418 | 66 | 121 | 112.393939 | 0.017156 | 0.008493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 1 | 0.015625 | false | 0 | 0 | 0 | 0.03125 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
813e8b2b09d684118c64f3df53b47dab204b8c7f | 1,984 | py | Python | tensorflow/contrib/summary/summary.py | master-hzz/tensorflow | 4b4b51cdd9e8c3c748b76dd8649bcd5556e84d76 | [
"Apache-2.0"
] | 2 | 2021-07-07T13:55:09.000Z | 2021-12-04T22:51:46.000Z | tensorflow/contrib/summary/summary.py | Yeesn/tensorflow | 31b79e42b9e1643b3bcdc9df992eb3ce216804c5 | [
"Apache-2.0"
] | null | null | null | tensorflow/contrib/summary/summary.py | Yeesn/tensorflow | 31b79e42b9e1643b3bcdc9df992eb3ce216804c5 | [
"Apache-2.0"
] | 1 | 2019-01-10T08:34:08.000Z | 2019-01-10T08:34:08.000Z | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contrib summary package.
The operations in this package are safe to use with eager execution turned on or
off.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=unused-import
from tensorflow.contrib.summary.summary_ops import all_summary_ops
from tensorflow.contrib.summary.summary_ops import always_record_summaries
from tensorflow.contrib.summary.summary_ops import audio
from tensorflow.contrib.summary.summary_ops import create_summary_db_writer
from tensorflow.contrib.summary.summary_ops import create_summary_file_writer
from tensorflow.contrib.summary.summary_ops import eval_dir
from tensorflow.contrib.summary.summary_ops import generic
from tensorflow.contrib.summary.summary_ops import histogram
from tensorflow.contrib.summary.summary_ops import image
from tensorflow.contrib.summary.summary_ops import import_event
from tensorflow.contrib.summary.summary_ops import never_record_summaries
from tensorflow.contrib.summary.summary_ops import record_summaries_every_n_global_steps
from tensorflow.contrib.summary.summary_ops import scalar
from tensorflow.contrib.summary.summary_ops import should_record_summaries
from tensorflow.contrib.summary.summary_ops import summary_writer_initializer_op
| 46.139535 | 88 | 0.808468 | 275 | 1,984 | 5.64 | 0.396364 | 0.144423 | 0.203095 | 0.270793 | 0.479046 | 0.479046 | 0.479046 | 0.223727 | 0.187621 | 0 | 0 | 0.004497 | 0.103327 | 1,984 | 42 | 89 | 47.238095 | 0.867341 | 0.404738 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.055556 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d4ca1732778cafc57d4f067fa9558d4eafe97933 | 44 | py | Python | face_verification/db/__init__.py | uGokalp/FaceVerification | bfc40ea24c7ec4f0a3c9e003e9760014fbd36349 | [
"MIT"
] | null | null | null | face_verification/db/__init__.py | uGokalp/FaceVerification | bfc40ea24c7ec4f0a3c9e003e9760014fbd36349 | [
"MIT"
] | 23 | 2021-05-01T16:56:02.000Z | 2022-03-08T05:39:41.000Z | face_verification/db/__init__.py | uGokalp/FaceVerification | bfc40ea24c7ec4f0a3c9e003e9760014fbd36349 | [
"MIT"
] | null | null | null | from .db import Database, compare_embedding
| 22 | 43 | 0.840909 | 6 | 44 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 44 | 1 | 44 | 44 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
d4cab7a6949697b9e13cdf00098ef16c6f1fe0e4 | 104 | py | Python | python/ql/test/query-tests/Variables/undefined/ud_helper.py | vadi2/codeql | a806a4f08696d241ab295a286999251b56a6860c | [
"MIT"
] | 4,036 | 2020-04-29T00:09:57.000Z | 2022-03-31T14:16:38.000Z | python/ql/test/query-tests/Variables/undefined/ud_helper.py | vadi2/codeql | a806a4f08696d241ab295a286999251b56a6860c | [
"MIT"
] | 2,970 | 2020-04-28T17:24:18.000Z | 2022-03-31T22:40:46.000Z | python/ql/test/query-tests/Variables/undefined/ud_helper.py | ScriptBox99/github-codeql | 2ecf0d3264db8fb4904b2056964da469372a235c | [
"MIT"
] | 794 | 2020-04-29T00:28:25.000Z | 2022-03-30T08:21:46.000Z |
def a(): pass
def b(): pass
def c(): pass
def d(): pass
def e(): pass
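# Star imports pick up only a, b and c; d and e are left out of __all__.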
__all__ = [ 'a', 'b', 'c' ]
| 8 | 27 | 0.480769 | 19 | 104 | 2.421053 | 0.421053 | 0.608696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.269231 | 104 | 12 | 28 | 8.666667 | 0.605263 | 0 | 0 | 0 | 0 | 0 | 0.029126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.833333 | false | 0.833333 | 0 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
d4f8dd9b9a4f530bf987474e1bfd972f5d45277d | 505 | py | Python | user_profile/views.py | dnetochaves/e-commerce | 97c2266934b6db883d520381520130b0472e9db4 | [
"MIT"
] | null | null | null | user_profile/views.py | dnetochaves/e-commerce | 97c2266934b6db883d520381520130b0472e9db4 | [
"MIT"
] | null | null | null | user_profile/views.py | dnetochaves/e-commerce | 97c2266934b6db883d520381520130b0472e9db4 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.views.generic.list import ListView
from django.views import View
from django.http import HttpResponse
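# Placeholder account views: each GET currently just echoes the view's name.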
class Create(View):
    def get(self, *args, **kwargs):
        return HttpResponse('Create')
class Update(View):
    def get(self, *args, **kwargs):
        return HttpResponse('Update')
class Login(View):
    def get(self, *args, **kwargs):
        return HttpResponse('Login')
class Logout(View):
    def get(self, *args, **kwargs):
        return HttpResponse('Logout')
| 24.047619 | 46 | 0.685149 | 62 | 505 | 5.580645 | 0.354839 | 0.115607 | 0.115607 | 0.16185 | 0.439306 | 0.439306 | 0.439306 | 0 | 0 | 0 | 0 | 0 | 0.192079 | 505 | 20 | 47 | 25.25 | 0.848039 | 0 | 0 | 0.25 | 0 | 0 | 0.045545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
07720b7e2f51d1db4e2843ec6aee338fabd7b738 | 157 | py | Python | texasholdem/lobby/admin.py | stricoff92/games-hub | 23bbd308fc12e214abd8813607ce92fd0a20fa8c | [
"MIT"
] | null | null | null | texasholdem/lobby/admin.py | stricoff92/games-hub | 23bbd308fc12e214abd8813607ce92fd0a20fa8c | [
"MIT"
] | 5 | 2021-03-19T04:38:06.000Z | 2021-09-22T19:10:42.000Z | texasholdem/lobby/admin.py | stricoff92/games-hub | 23bbd308fc12e214abd8813607ce92fd0a20fa8c | [
"MIT"
] | null | null | null |
from lobby.models import Game, Player
from django.contrib import admin
# Register your models here.
admin.site.register(Player)
admin.site.register(Game)
| 17.444444 | 37 | 0.796178 | 23 | 157 | 5.434783 | 0.565217 | 0.144 | 0.272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121019 | 157 | 8 | 38 | 19.625 | 0.905797 | 0.165605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
07ede115847aff8e66893a918b81c45be72f5bb7 | 132 | py | Python | src/CodeGeneratorTest.py | demin-dmitriy/almost-haskell | 2b252cf102291696aacda7bc32fdcfb537a8821e | [
"MIT"
] | 1 | 2019-01-10T01:51:27.000Z | 2019-01-10T01:51:27.000Z | src/CodeGeneratorTest.py | demin-dmitriy/almost-haskell | 2b252cf102291696aacda7bc32fdcfb537a8821e | [
"MIT"
] | null | null | null | src/CodeGeneratorTest.py | demin-dmitriy/almost-haskell | 2b252cf102291696aacda7bc32fdcfb537a8821e | [
"MIT"
] | null | null | null | from unittest import TestCase
from CodeGenerator import *
class CodeGeneratorTest(TestCase):
def testEmpty(self):
pass
| 18.857143 | 34 | 0.75 | 14 | 132 | 7.071429 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19697 | 132 | 6 | 35 | 22 | 0.933962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0.4 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
6afa28a95800d444f685d30a75ec397f74238daa | 257 | py | Python | apps/core/apiviews.py | nfeslim/dashboard_nfe | 12f4943257e8029f9d47612fb2458290d3730e4a | [
"MIT"
] | null | null | null | apps/core/apiviews.py | nfeslim/dashboard_nfe | 12f4943257e8029f9d47612fb2458290d3730e4a | [
"MIT"
] | 20 | 2019-01-28T15:58:10.000Z | 2022-02-10T08:34:48.000Z | apps/core/apiviews.py | nfeslim/dashboard_nfe | 12f4943257e8029f9d47612fb2458290d3730e4a | [
"MIT"
] | 2 | 2019-01-28T13:34:54.000Z | 2019-05-26T17:39:43.000Z | from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
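# Liveness probe: always answers 200 OK with a fixed message.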
class ItsAliveView(APIView):
def get(self, request):
return Response({"message": "It's Alive"}, status=status.HTTP_200_OK)
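# Illustrative wiring (assumed urls.py entry): path('alive/', ItsAliveView.as_view())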
| 28.555556 | 77 | 0.774319 | 35 | 257 | 5.542857 | 0.628571 | 0.123711 | 0.262887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013575 | 0.140078 | 257 | 8 | 78 | 32.125 | 0.864253 | 0 | 0 | 0 | 0 | 0 | 0.066148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.5 | 0.166667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
ed1b0c47cf7fc420680e875bc1a2a18d4d0a6fe8 | 14,464 | py | Python | sdk/python/pulumi_aws/eks/cluster.py | texdc/pulumi-aws | 93a7a28ab7db6b1cd7e6686c0b68aa4c89490d4f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/eks/cluster.py | texdc/pulumi-aws | 93a7a28ab7db6b1cd7e6686c0b68aa4c89490d4f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/eks/cluster.py | texdc/pulumi-aws | 93a7a28ab7db6b1cd7e6686c0b68aa4c89490d4f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class Cluster(pulumi.CustomResource):
arn: pulumi.Output[str]
"""
The Amazon Resource Name (ARN) of the cluster.
"""
certificate_authority: pulumi.Output[dict]
"""
Nested attribute containing `certificate-authority-data` for your cluster.
* `data` (`str`) - The base64 encoded certificate data required to communicate with your cluster. Add this to the `certificate-authority-data` section of the `kubeconfig` file for your cluster.
"""
created_at: pulumi.Output[str]
enabled_cluster_log_types: pulumi.Output[list]
"""
A list of the desired control plane logging to enable. For more information, see [Amazon EKS Control Plane Logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
"""
endpoint: pulumi.Output[str]
"""
The endpoint for your Kubernetes API server.
"""
identities: pulumi.Output[list]
"""
Nested attribute containing identity provider information for your cluster. Only available on Kubernetes version 1.13 and 1.14 clusters created or upgraded on or after September 3, 2019.
* `oidcs` (`list`) - Nested attribute containing [OpenID Connect](https://openid.net/connect/) identity provider information for the cluster.
* `issuer` (`str`) - Issuer URL for the OpenID Connect identity provider.
"""
name: pulumi.Output[str]
"""
Name of the cluster.
"""
platform_version: pulumi.Output[str]
"""
The platform version for the cluster.
"""
role_arn: pulumi.Output[str]
"""
The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
"""
status: pulumi.Output[str]
"""
The status of the EKS cluster. One of `CREATING`, `ACTIVE`, `DELETING`, `FAILED`.
"""
tags: pulumi.Output[dict]
"""
Key-value mapping of resource tags.
"""
version: pulumi.Output[str]
"""
Desired Kubernetes master version. If you do not specify a value, the latest available version at resource creation is used and no upgrades will occur except those automatically triggered by EKS. The value must be configured and increased to upgrade the version when desired. Downgrades are not supported by EKS.
"""
vpc_config: pulumi.Output[dict]
"""
Nested argument for the VPC associated with your cluster. Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the Amazon EKS User Guide. Configuration detailed below.
* `endpointPrivateAccess` (`bool`) - Indicates whether or not the Amazon EKS private API server endpoint is enabled. Default is `false`.
* `endpointPublicAccess` (`bool`) - Indicates whether or not the Amazon EKS public API server endpoint is enabled. Default is `true`.
* `security_group_ids` (`list`) - List of security group IDs for the cross-account elastic network interfaces that Amazon EKS creates to use to allow communication between your worker nodes and the Kubernetes control plane.
* `subnet_ids` (`list`) - List of subnet IDs. Must be in at least two different availability zones. Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane.
* `vpc_id` (`str`) - The VPC associated with your cluster.
"""
def __init__(__self__, resource_name, opts=None, enabled_cluster_log_types=None, name=None, role_arn=None, tags=None, version=None, vpc_config=None, __props__=None, __name__=None, __opts__=None):
"""
Manages an EKS Cluster.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[list] enabled_cluster_log_types: A list of the desired control plane logging to enable. For more information, see [Amazon EKS Control Plane Logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
:param pulumi.Input[str] name: Name of the cluster.
:param pulumi.Input[str] role_arn: The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
:param pulumi.Input[dict] tags: Key-value mapping of resource tags.
:param pulumi.Input[str] version: Desired Kubernetes master version. If you do not specify a value, the latest available version at resource creation is used and no upgrades will occur except those automatically triggered by EKS. The value must be configured and increased to upgrade the version when desired. Downgrades are not supported by EKS.
:param pulumi.Input[dict] vpc_config: Nested argument for the VPC associated with your cluster. Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the Amazon EKS User Guide. Configuration detailed below.
The **vpc_config** object supports the following:
* `endpointPrivateAccess` (`pulumi.Input[bool]`) - Indicates whether or not the Amazon EKS private API server endpoint is enabled. Default is `false`.
* `endpointPublicAccess` (`pulumi.Input[bool]`) - Indicates whether or not the Amazon EKS public API server endpoint is enabled. Default is `true`.
* `security_group_ids` (`pulumi.Input[list]`) - List of security group IDs for the cross-account elastic network interfaces that Amazon EKS creates to use to allow communication between your worker nodes and the Kubernetes control plane.
* `subnet_ids` (`pulumi.Input[list]`) - List of subnet IDs. Must be in at least two different availability zones. Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane.
* `vpc_id` (`pulumi.Input[str]`) - The VPC associated with your cluster.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/eks_cluster.html.markdown.
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
__props__['enabled_cluster_log_types'] = enabled_cluster_log_types
__props__['name'] = name
if role_arn is None:
raise TypeError("Missing required property 'role_arn'")
__props__['role_arn'] = role_arn
__props__['tags'] = tags
__props__['version'] = version
if vpc_config is None:
raise TypeError("Missing required property 'vpc_config'")
__props__['vpc_config'] = vpc_config
__props__['arn'] = None
__props__['certificate_authority'] = None
__props__['created_at'] = None
__props__['endpoint'] = None
__props__['identities'] = None
__props__['platform_version'] = None
__props__['status'] = None
super(Cluster, __self__).__init__(
'aws:eks/cluster:Cluster',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name, id, opts=None, arn=None, certificate_authority=None, created_at=None, enabled_cluster_log_types=None, endpoint=None, identities=None, name=None, platform_version=None, role_arn=None, status=None, tags=None, version=None, vpc_config=None):
"""
Get an existing Cluster resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: The Amazon Resource Name (ARN) of the cluster.
:param pulumi.Input[dict] certificate_authority: Nested attribute containing `certificate-authority-data` for your cluster.
:param pulumi.Input[list] enabled_cluster_log_types: A list of the desired control plane logging to enable. For more information, see [Amazon EKS Control Plane Logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
:param pulumi.Input[str] endpoint: The endpoint for your Kubernetes API server.
:param pulumi.Input[list] identities: Nested attribute containing identity provider information for your cluster. Only available on Kubernetes version 1.13 and 1.14 clusters created or upgraded on or after September 3, 2019.
:param pulumi.Input[str] name: Name of the cluster.
:param pulumi.Input[str] platform_version: The platform version for the cluster.
:param pulumi.Input[str] role_arn: The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
:param pulumi.Input[str] status: The status of the EKS cluster. One of `CREATING`, `ACTIVE`, `DELETING`, `FAILED`.
:param pulumi.Input[dict] tags: Key-value mapping of resource tags.
:param pulumi.Input[str] version: Desired Kubernetes master version. If you do not specify a value, the latest available version at resource creation is used and no upgrades will occur except those automatically triggered by EKS. The value must be configured and increased to upgrade the version when desired. Downgrades are not supported by EKS.
:param pulumi.Input[dict] vpc_config: Nested argument for the VPC associated with your cluster. Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the Amazon EKS User Guide. Configuration detailed below.
The **certificate_authority** object supports the following:
* `data` (`pulumi.Input[str]`) - The base64 encoded certificate data required to communicate with your cluster. Add this to the `certificate-authority-data` section of the `kubeconfig` file for your cluster.
The **identities** object supports the following:
* `oidcs` (`pulumi.Input[list]`) - Nested attribute containing [OpenID Connect](https://openid.net/connect/) identity provider information for the cluster.
* `issuer` (`pulumi.Input[str]`) - Issuer URL for the OpenID Connect identity provider.
The **vpc_config** object supports the following:
* `endpointPrivateAccess` (`pulumi.Input[bool]`) - Indicates whether or not the Amazon EKS private API server endpoint is enabled. Default is `false`.
* `endpointPublicAccess` (`pulumi.Input[bool]`) - Indicates whether or not the Amazon EKS public API server endpoint is enabled. Default is `true`.
* `security_group_ids` (`pulumi.Input[list]`) - List of security group IDs for the cross-account elastic network interfaces that Amazon EKS creates to use to allow communication between your worker nodes and the Kubernetes control plane.
* `subnet_ids` (`pulumi.Input[list]`) - List of subnet IDs. Must be in at least two different availability zones. Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane.
* `vpc_id` (`pulumi.Input[str]`) - The VPC associated with your cluster.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/eks_cluster.html.markdown.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["arn"] = arn
__props__["certificate_authority"] = certificate_authority
__props__["created_at"] = created_at
__props__["enabled_cluster_log_types"] = enabled_cluster_log_types
__props__["endpoint"] = endpoint
__props__["identities"] = identities
__props__["name"] = name
__props__["platform_version"] = platform_version
__props__["role_arn"] = role_arn
__props__["status"] = status
__props__["tags"] = tags
__props__["version"] = version
__props__["vpc_config"] = vpc_config
return Cluster(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
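
# Illustrative provisioning sketch (hypothetical names; assumes an existing IAM
# role `role` and a list of subnet IDs `subnet_ids`):
#   cluster = Cluster("example",
#                     role_arn=role.arn,
#                     vpc_config={"subnet_ids": subnet_ids})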
| 71.60396 | 486 | 0.712873 | 1,907 | 14,464 | 5.253802 | 0.142108 | 0.034035 | 0.028745 | 0.018964 | 0.773031 | 0.758259 | 0.727518 | 0.708155 | 0.68959 | 0.667632 | 0 | 0.002348 | 0.205061 | 14,464 | 201 | 487 | 71.960199 | 0.869021 | 0.474903 | 0 | 0.024691 | 1 | 0 | 0.153308 | 0.028714 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049383 | false | 0.012346 | 0.074074 | 0.024691 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ed2ae176b8503efa26064198cd964e2e41a7aeea | 47 | py | Python | tests/conftest.py | chib0/asd-winter2019 | c7d95305b1e8b99013fd40da1e7ebe01c2d0102a | [
"Apache-2.0"
] | null | null | null | tests/conftest.py | chib0/asd-winter2019 | c7d95305b1e8b99013fd40da1e7ebe01c2d0102a | [
"Apache-2.0"
] | 4 | 2021-02-02T22:38:53.000Z | 2022-01-13T02:32:33.000Z | tests/conftest.py | chib0/asd-winter2019 | c7d95305b1e8b99013fd40da1e7ebe01c2d0102a | [
"Apache-2.0"
] | null | null | null | import pytest
from cortex import configuration
| 15.666667 | 32 | 0.87234 | 6 | 47 | 6.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 47 | 2 | 33 | 23.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ed347dc88ace93a14ecac50dd3bbd62592ff3711 | 210 | py | Python | Introduction-to-data-visualization-with-matplotlib/2. Plotting time-series/script_1.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-05-08T11:09:27.000Z | 2021-05-08T11:09:27.000Z | Introduction-to-data-visualization-with-matplotlib/2. Plotting time-series/script_1.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2022-03-12T15:42:14.000Z | 2022-03-12T15:42:14.000Z | Introduction-to-data-visualization-with-matplotlib/2. Plotting time-series/script_1.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-04-30T18:24:19.000Z | 2021-04-30T18:24:19.000Z | # Import pandas as pd
import pandas as pd
# Read the data from file using read_csv
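# parse_dates turns the 'date' column into datetimes; index_col promotes it to
# the index, giving downstream plots a proper time axis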
climate_change = pd.read_csv('climate_change.csv', parse_dates=['date'], index_col='date')
print(climate_change.head()) | 30 | 91 | 0.742857 | 34 | 210 | 4.382353 | 0.588235 | 0.261745 | 0.187919 | 0.214765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 210 | 7 | 92 | 30 | 0.827778 | 0.27619 | 0 | 0 | 0 | 0 | 0.180556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
ed674b07d1a47099ea77d7eec63b55777a48e234 | 32 | py | Python | login.py | Serendipity-fan/hello_world | f7d5361b7d262e95fc11be4cddc21f5affb4da98 | [
"MIT"
] | null | null | null | login.py | Serendipity-fan/hello_world | f7d5361b7d262e95fc11be4cddc21f5affb4da98 | [
"MIT"
] | null | null | null | login.py | Serendipity-fan/hello_world | f7d5361b7d262e95fc11be4cddc21f5affb4da98 | [
"MIT"
] | null | null | null | num1 = 1
num2 = 100
num3 = 1000
| 8 | 11 | 0.625 | 6 | 32 | 3.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.478261 | 0.28125 | 32 | 3 | 12 | 10.666667 | 0.391304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ed6bf270a742341735a14b5fd838fbd62261aa42 | 481 | py | Python | day10in.py | unidavemeyer/aoc2020 | eabe6cb4143ac76a5d4047143665ee3bc0335275 | [
"MIT"
] | null | null | null | day10in.py | unidavemeyer/aoc2020 | eabe6cb4143ac76a5d4047143665ee3bc0335275 | [
"MIT"
] | null | null | null | day10in.py | unidavemeyer/aoc2020 | eabe6cb4143ac76a5d4047143665ee3bc0335275 | [
"MIT"
] | null | null | null | strIn = '''16
10
15
5
1
11
7
19
6
12
4'''
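# Each reassignment below replaces the sample input; only the final block is used.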
strIn = '''28
33
18
42
31
14
46
20
48
47
24
23
49
45
19
38
39
11
1
32
25
35
8
17
7
9
4
2
34
10
3'''
strIn = '''151
94
14
118
25
143
33
23
80
95
87
44
150
39
148
51
138
121
70
69
90
155
144
40
77
8
97
45
152
58
65
63
128
101
31
112
140
86
30
55
104
135
115
16
26
60
96
85
84
48
4
131
54
52
139
76
91
46
15
17
37
156
134
98
83
111
72
34
7
108
149
116
32
110
47
157
75
13
10
145
1
127
41
53
2
3
117
71
109
105
64
27
38
59
24
20
124
9
66'''
| 3.340278 | 14 | 0.64657 | 144 | 481 | 2.159722 | 0.763889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.891566 | 0.309771 | 481 | 143 | 15 | 3.363636 | 0.045181 | 0 | 0 | 0.29078 | 0 | 0 | 0.902287 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ed8307df6f023b1d9f6ae2edaf4689ae050849d0 | 69 | py | Python | docs/guide/snippets/json-validation/todos/asgi.py | teaglebuilt/bocadillo | b2138e77747d3ab9f87e4b352f6b7c1e72520fe1 | [
"MIT"
] | 434 | 2018-11-19T15:16:05.000Z | 2022-02-19T03:18:52.000Z | docs/guide/snippets/json-validation/todos/asgi.py | teaglebuilt/bocadillo | b2138e77747d3ab9f87e4b352f6b7c1e72520fe1 | [
"MIT"
] | 295 | 2018-11-20T15:11:17.000Z | 2020-03-14T19:42:03.000Z | docs/guide/snippets/json-validation/todos/asgi.py | teaglebuilt/bocadillo | b2138e77747d3ab9f87e4b352f6b7c1e72520fe1 | [
"MIT"
] | 62 | 2018-11-17T22:41:06.000Z | 2021-09-11T17:45:59.000Z | from bocadillo import configure
from .app import app
configure(app)
| 13.8 | 31 | 0.811594 | 10 | 69 | 5.6 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144928 | 69 | 4 | 32 | 17.25 | 0.949153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
71fc3002cf2501d5e63ea39a6aa56c2830d7e0de | 364 | py | Python | test/fixtures/python/file1.py | Bhanditz/jscpd | b24360c51ebe69d90fbffbf62273a39139efa79a | [
"MIT"
] | null | null | null | test/fixtures/python/file1.py | Bhanditz/jscpd | b24360c51ebe69d90fbffbf62273a39139efa79a | [
"MIT"
] | null | null | null | test/fixtures/python/file1.py | Bhanditz/jscpd | b24360c51ebe69d90fbffbf62273a39139efa79a | [
"MIT"
] | null | null | null | # hello
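# Fixture with intentionally duplicated methods, used to exercise jscpd's clone detection.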
class A(object):
def __init__(self):
print("qwe")
self.test = None
if (self.test):
print(self.test)
def hello(self):
print("hello")
def hello1(self):
print("hello")
def hello3(self):
print("hello")
def hello4(self):
pass
if __name__ == "__main__":
a = A()
| 14 | 28 | 0.494505 | 42 | 364 | 4 | 0.428571 | 0.214286 | 0.25 | 0.303571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012931 | 0.362637 | 364 | 25 | 29 | 14.56 | 0.711207 | 0.013736 | 0 | 0.1875 | 0 | 0 | 0.072829 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0.0625 | 0 | 0 | 0.375 | 0.3125 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
9c055933dd9e17d9fa94fa301df178f70ea8d11b | 91 | py | Python | code/Examples/RJObject_GalaxyField/showresults.py | modsim/DNest4 | 4de91f440cd0455893e59da1ac5031399e5c0969 | [
"MIT"
] | 54 | 2016-01-20T10:00:27.000Z | 2022-01-24T14:38:11.000Z | code/Examples/RJObject_GalaxyField/showresults.py | modsim/DNest4 | 4de91f440cd0455893e59da1ac5031399e5c0969 | [
"MIT"
] | 30 | 2016-03-07T21:36:37.000Z | 2021-11-14T19:33:46.000Z | code/Examples/RJObject_GalaxyField/showresults.py | modsim/DNest4 | 4de91f440cd0455893e59da1ac5031399e5c0969 | [
"MIT"
] | 22 | 2016-01-21T13:37:11.000Z | 2021-11-14T17:23:45.000Z | import dnest4.classic as dn4
dn4.postprocess(single_precision=True, cut=0)
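# `display` is assumed to be the example's local plotting script, imported for its side effects.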
import display
| 18.2 | 45 | 0.824176 | 14 | 91 | 5.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0.098901 | 91 | 4 | 46 | 22.75 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
9c0b10e4274407d866b5228f2fcba22f13f00181 | 138 | py | Python | tests/sandbox/.venv_ccf_sandbox/lib/python3.8/site-packages/sklearn/cross_decomposition/__init__.py | iLuSIAnn/test | 10d0a20dc1a646b5c1f6c7bff2960e3f5df0510e | [
"Apache-2.0"
] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | tests/sandbox/.venv_ccf_sandbox/lib/python3.8/site-packages/sklearn/cross_decomposition/__init__.py | iLuSIAnn/test | 10d0a20dc1a646b5c1f6c7bff2960e3f5df0510e | [
"Apache-2.0"
] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | site-packages/sklearn/cross_decomposition/__init__.py | Wristlebane/Pyto | 901ac307b68486d8289105c159ca702318bea5b0 | [
"MIT"
] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z | from ._pls import PLSCanonical, PLSRegression, PLSSVD
from ._cca import CCA
__all__ = ['PLSCanonical', 'PLSRegression', 'PLSSVD', 'CCA']
| 27.6 | 60 | 0.746377 | 15 | 138 | 6.466667 | 0.533333 | 0.515464 | 0.639175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123188 | 138 | 4 | 61 | 34.5 | 0.801653 | 0 | 0 | 0 | 0 | 0 | 0.246377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9c561cff88f6277044f95abd00a10340e3cb45c4 | 5,394 | py | Python | elements/yolo.py | amirhosseinh77/Autonomous-Vehicle-Environment-Perception | f834ea23f80eda6e33796a0b97c909b43da37eb3 | [
"MIT"
] | 23 | 2021-04-01T16:28:32.000Z | 2022-03-05T18:17:17.000Z | elements/yolo.py | aidamohammadshahi/Autonomous-Vehicle-Environment-Perception | f834ea23f80eda6e33796a0b97c909b43da37eb3 | [
"MIT"
] | 1 | 2021-04-13T21:26:17.000Z | 2021-06-29T09:23:11.000Z | elements/yolo.py | aidamohammadshahi/Autonomous-Vehicle-Environment-Perception | f834ea23f80eda6e33796a0b97c909b43da37eb3 | [
"MIT"
] | 13 | 2021-04-06T20:26:14.000Z | 2022-02-08T01:31:36.000Z | import torch
import cv2
import numpy as np
from yolov5.models.experimental import attempt_load
from yolov5.utils.general import non_max_suppression, scale_coords
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
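# COCO class indices kept for driving scenes; detections of any other class are discarded.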
classes = {0: 'person',
2: 'car',
5: 'bus',
7: 'truck',
9: 'traffic light',
11: 'stop sign'}
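# Extra pixels of padding applied around each box, clamped to the image bounds.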
margin = 0
class YOLO():
def __init__(self,model_path):
self.yolo_model = attempt_load(weights=model_path, map_location=device)
print("Yolo model loaded!")
self.conf_thres = 0.75
self.iou_thres = 0.7
def detect(self,left):
"""
Input :
BGR image
Output:
yolo return list of dict in format:
{ label : str
bbox : [(xmin,ymin),(xmax,ymax)]
score : float
cls : int
}
"""
img = cv2.resize(left, (640,384))
img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
img = np.moveaxis(img,-1,0)
img = torch.from_numpy(img).to(device)
img = img.float()/255.0 # 0 - 255 to 0.0 - 1.0
if img.ndimension() == 3:
img = img.unsqueeze(0)
pred = self.yolo_model(img, augment=False)[0]
pred = non_max_suppression(pred, conf_thres=self.conf_thres, iou_thres=self.iou_thres, classes=None)
items = []
if pred[0] is not None and len(pred):
for p in pred[0]:
                if int(p[5]) in classes:
score = np.round(p[4].cpu().detach().numpy(),2)
label = classes[int(p[5])]
# det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
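                    # Map box corners from the 640x384 network input back to the original frame.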
xmin = int(p[0] * left.shape[1] /640)
ymin = int(p[1] * left.shape[0] /384)
xmax = int(p[2] * left.shape[1] /640)
ymax = int(p[3] * left.shape[0] /384)
xmin = xmin - margin if xmin - margin > 0 else 0
ymin = ymin - margin if ymin - margin > 0 else 0
xmax = xmax + margin if xmax + margin < left.shape[1] else left.shape[1]
ymax = ymax + margin if ymax + margin < left.shape[0] else left.shape[0]
item = {'label': label,
'bbox' : [(xmin,ymin),(xmax,ymax)],
'score': score,
'cls' : int(p[5])
}
items.append(item)
return(items)
classes_sign = {0: 'Taghadom',
1: 'Chap Mamnoo',
2: 'Rast Mamnoo',
3: 'SL30',
4: 'Tavaghof Mamnoo',
5: 'Vorood Mamnoo',
6: 'Mostaghom',
7: 'SL40',
8: 'SL50',
9: 'SL60',
10: 'SL70',
11: 'SL80',
12: 'SL100',
13: 'No U-Turn',
}
margin_sign = 0
class YOLO_Sign():
def __init__(self,model_path):
self.yolo_model = attempt_load(weights=model_path, map_location=device)
print("Sign Detection model loaded!")
self.conf_thres = 0.75
self.iou_thres = 0.7
def detect_sign(self,left):
"""
Input :
BGR image
Output:
yolo return list of dict in format:
{ label : str
bbox : [(xmin,ymin),(xmax,ymax)]
score : float
cls : int
}
"""
img = cv2.resize(left, (640,384))
img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
img = np.moveaxis(img,-1,0)
img = torch.from_numpy(img).to(device)
img = img.float()/255.0 # 0 - 255 to 0.0 - 1.0
if img.ndimension() == 3:
img = img.unsqueeze(0)
pred = self.yolo_model(img, augment=False)[0]
pred = non_max_suppression(pred, conf_thres= self.conf_thres, iou_thres=self.iou_thres, classes=None)
items = []
if pred[0] is not None and len(pred):
for p in pred[0]:
score = np.round(p[4].cpu().detach().numpy(),2)
label = classes_sign[int(p[5])]
# det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
xmin = int(p[0] * left.shape[1] /640)
ymin = int(p[1] * left.shape[0] /384)
xmax = int(p[2] * left.shape[1] /640)
ymax = int(p[3] * left.shape[0] /384)
xmin = xmin - margin_sign if xmin - margin_sign > 0 else 0
ymin = ymin - margin_sign if ymin - margin_sign > 0 else 0
xmax = xmax + margin_sign if xmax + margin_sign < left.shape[1] else left.shape[1]
ymax = ymax + margin_sign if ymax + margin_sign < left.shape[0] else left.shape[0]
item = {'label': label,
'bbox' : [(xmin,ymin),(xmax,ymax)],
'score': score,
'cls': int(p[5])
}
items.append(item)
return(items) | 36.201342 | 109 | 0.456433 | 642 | 5,394 | 3.741433 | 0.213396 | 0.05995 | 0.033306 | 0.026644 | 0.766028 | 0.757702 | 0.724396 | 0.724396 | 0.724396 | 0.724396 | 0 | 0.059179 | 0.417316 | 5,394 | 149 | 110 | 36.201342 | 0.705059 | 0.110864 | 0 | 0.529412 | 0 | 0 | 0.051345 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0 | 0.04902 | 0 | 0.107843 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9c596d9c9525a44ceeeedd0e68e225cdabda4cb4 | 9,884 | py | Python | python/GenExpressionsFile.py | Greakz/mdh-cmake-cubevis | 6c64ec0e14dcdd07e69fa1f018aa7954eeeaf173 | [
"MIT"
] | null | null | null | python/GenExpressionsFile.py | Greakz/mdh-cmake-cubevis | 6c64ec0e14dcdd07e69fa1f018aa7954eeeaf173 | [
"MIT"
] | 5 | 2021-08-24T11:09:54.000Z | 2021-08-24T21:14:15.000Z | python/GenExpressionsFile.py | Greakz/mdh-cmake-cubevis | 6c64ec0e14dcdd07e69fa1f018aa7954eeeaf173 | [
"MIT"
] | null | null | null | from Expression import Expression
from Whitespace import ws
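# Fills the gen_mdh_expressions.h C++ header from its template by replacing the
# /*GENERATE*/ placeholder comments with generated dimension, step and size code.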
class ExpressionFileGenerator:
@staticmethod
def generate(data):
mdh_cube_nest_template_file = open("./src/mdhconfig/include/gen_templates/gen_mdh_expressions.template.h", "r")
raw_template = mdh_cube_nest_template_file.read()
mdh_cube_nest_template_file.close()
mdh_cube_nest_template_file = open("./src/mdhconfig/include/gen_mdh_expressions.h", "w")
split_t = raw_template.split("/*GENERATE*/")
combined_template = split_t[0] + ExpressionFileGenerator.generate_cube_nest_functions(data) + split_t[1]
split_t = combined_template.split("/*GENERATE-SINGLE-CALCULATIONS*/")
combined_template = split_t[0] + ExpressionFileGenerator.generate_single_calculations(data) + split_t[1]
mdh_cube_nest_template_file.write(combined_template)
mdh_cube_nest_template_file.close()
@staticmethod
def generate_cube_nest_functions(data):
result = ""
for cube_nest in data.cube_nests:
for cube in cube_nest["cubes"]:
result += ExpressionFileGenerator.gen_dim_funcs(data, cube_nest, cube)
for cube_nest in data.cube_nests:
for cube in cube_nest["cubes"]:
result += ExpressionFileGenerator.gen_step_size_variables(data, cube_nest, cube)
return result
@staticmethod
def generate_single_calculations(data):
# x_dim start
result = ""
extra_it = []
for cube_nest in data.cube_nests:
for cube in cube_nest["cubes"]:
for attribute, value in cube["extra_iterators"].items():
                    if attribute not in extra_it:
result += ws(2) + "int " + attribute + ";\n"
extra_it.append(attribute)
for cube_nest in data.cube_nests:
result += ws(2) + "// Cube Nest: " + cube_nest["name"] + "\n"
for cube in cube_nest["cubes"]:
for attribute, value in cube["extra_iterators"].items():
result += ws(2) + attribute + " = " + Expression.process_string(data, value[0]) + ";\n"
result += ExpressionFileGenerator.gen_dim_jump_func(data, cube, "x")
result += ExpressionFileGenerator.gen_dim_size_func(data, cube, "x")
result += ExpressionFileGenerator.gen_dim_jump_offset_func(data, cube, "x")
result += ExpressionFileGenerator.gen_dim_jump_func(data, cube, "y")
result += ExpressionFileGenerator.gen_dim_size_func(data, cube, "y")
result += ExpressionFileGenerator.gen_dim_jump_offset_func(data, cube, "y")
result += ExpressionFileGenerator.gen_dim_jump_func(data, cube, "z")
result += ExpressionFileGenerator.gen_dim_size_func(data, cube, "z")
result += ExpressionFileGenerator.gen_dim_jump_offset_func(data, cube, "z")
result += "\n"
return result
@staticmethod
def gen_dim_jump_func(data, cube, dim_token):
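        # jump = expr(iterator=1) - expr(iterator=0): the stride of the dimension-start
        # expression per cube-iterator step, with all other clocks at their minimum.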
result = ""
result += ws(2) + "this->" + cube["cname_" + dim_token + "_dim_jump_f"] + " = ("
result += Expression.process_str_exp_but_set_clock_to_min_except_for(data, cube[dim_token + "_dim_str"][0],
cube["cube_iterators"], 1)
result += ") - ("
result += Expression.process_str_exp_but_set_clock_to_min_except_for(data, cube[dim_token + "_dim_str"][0],
cube["cube_iterators"], 0)
result += ");\n"
return result
@staticmethod
def gen_dim_size_func(data, cube, dim_token):
result = ""
result += ws(2) + "this->" + cube["cname_" + dim_token + "_dim_size_f"] + " = ("
result += Expression.process_str_exp_but_set_clock_to_min(data, cube[dim_token + "_dim_str"][1])
result += ") - ("
result += Expression.process_str_exp_but_set_clock_to_min(data, cube[dim_token + "_dim_str"][0])
result += ") + 1;\n"
return result
@staticmethod
def gen_dim_jump_offset_func(data, cube, dim_token):
result = ""
result += ws(2) + "this->" + cube["cname_" + dim_token + "_dim_jump_offset_f"] + " = "
result += " this->" + cube["cname_" + dim_token + "_dim_size_f"] + " - this->"
result += cube["cname_" + dim_token + "_dim_jump_f"] + ";\n"
return result
    @staticmethod
    def gen_dim_funcs(data, cube_nest, cube):
        result = ""
        result += ws(1) + "// Cube: " + cube_nest["name"] + "_" + cube["name"] + " - Dim_Min & Dim_Max\n"
        parameter = ExpressionFileGenerator.gen_dim_extra_iterator_parameters(cube)
        result += ws(1) + "int " + cube["cname_x_dim_start_f"] + "(" + parameter + ") {"
        result += "return " + Expression.process_string(data, cube["x_dim_str"][0]) + ";}\n"
        result += ws(1) + "int " + cube["cname_x_dim_end_f"] + "(" + parameter + ") {"
        result += "return " + Expression.process_string(data, cube["x_dim_str"][1]) + ";}\n"
        result += ws(1) + "int " + cube["cname_y_dim_start_f"] + "(" + parameter + ") {"
        result += "return " + Expression.process_string(data, cube["y_dim_str"][0]) + ";}\n"
        result += ws(1) + "int " + cube["cname_y_dim_end_f"] + "(" + parameter + ") {"
        result += "return " + Expression.process_string(data, cube["y_dim_str"][1]) + ";}\n"
        result += ws(1) + "int " + cube["cname_z_dim_start_f"] + "(" + parameter + ") {"
        result += "return " + Expression.process_string(data, cube["z_dim_str"][0]) + ";}\n"
        result += ws(1) + "int " + cube["cname_z_dim_end_f"] + "(" + parameter + ") {"
        result += "return " + Expression.process_string(data, cube["z_dim_str"][1]) + ";}\n"
        return result

    @staticmethod
    def gen_dim_extra_iterator_parameters(cube):
        # Builds a C++ parameter list ("int i, int j") from the cube's extra iterators.
        p_res = ""
        for attribute, value in cube["extra_iterators"].items():
            p_res += "int " + attribute + ", "
        if len(p_res) > 0:
            p_res = p_res[:-2]  # drop the trailing ", "
        return p_res

    @staticmethod
    def gen_step_size_variables(data, cube_nest, cube):
        result = ""
        result += ws(1) + "// Cube: " + cube_nest["name"] + "_" + cube["name"] + " - Step, Size & StepOffset\n"
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_x_dim_jump_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_x_dim_size_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_x_dim_jump_offset_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_y_dim_jump_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_y_dim_size_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_y_dim_jump_offset_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_z_dim_jump_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_z_dim_size_f"])
        result += ExpressionFileGenerator.gen_single_variable(cube["cname_z_dim_jump_offset_f"])
        return result

    @staticmethod
    def gen_single_variable(name):
        return ws(1) + "int " + name + " = 0;\n"

    @staticmethod
    def gen_step_size_calculation(data, cube_nest, cube):
        result = ""
        result += ExpressionFileGenerator.gen_jump_calculation_for_dim(data, cube, "x")
        result += ExpressionFileGenerator.gen_step_calculation_for_dim(data, cube, "x")
        result += ExpressionFileGenerator.gen_step_offset_calculation_for_dim(data, cube, "x")
        result += ExpressionFileGenerator.gen_jump_calculation_for_dim(data, cube, "y")
        result += ExpressionFileGenerator.gen_step_calculation_for_dim(data, cube, "y")
        result += ExpressionFileGenerator.gen_step_offset_calculation_for_dim(data, cube, "y")
        result += ExpressionFileGenerator.gen_jump_calculation_for_dim(data, cube, "z")
        result += ExpressionFileGenerator.gen_step_calculation_for_dim(data, cube, "z")
        result += ExpressionFileGenerator.gen_step_offset_calculation_for_dim(data, cube, "z")
        return result

    @staticmethod
    def gen_jump_calculation_for_dim(data, cube, dim_token):
        result = ""
        result += ws(2) + cube["cname_" + dim_token + "_dim_jump_f"] + " = ("
        result += data.processStrExpButSetClockToMinExeptForOne(cube[dim_token + "_dim_str"][0],
                                                                cube[dim_token + "_dim_str"][2], 1)
        result += ") - ("
        result += data.processStrExpButSetClockToMinExeptForOne(cube[dim_token + "_dim_str"][0],
                                                                cube[dim_token + "_dim_str"][2], 0)
        result += ");\n"
        return result

    @staticmethod
    def gen_step_calculation_for_dim(data, cube, dim_token):
        result = ""
        result += ws(2) + cube["cname_" + dim_token + "_dim_size_f"] + " = ("
        result += data.processStrExpButSetClockToMin(cube[dim_token + "_dim_str"][1])
        result += ") - ("
        result += data.processStrExpButSetClockToMin(cube[dim_token + "_dim_str"][0])
        result += ") + 1;\n"
        return result

    @staticmethod
    def gen_step_offset_calculation_for_dim(data, cube, dim_token):
        result = ""
        result += ws(2) + cube["cname_" + dim_token + "_dim_jump_offset_f"] + " = "
        result += " this->" + cube["cname_" + dim_token + "_dim_size_f"] + " - this->"
        result += cube["cname_" + dim_token + "_dim_jump_f"] + ";\n"
        return result


def is_in(search_list, key):
    for item in search_list:
        if key == item:
            return True
    return False
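
# Note: is_in(search_list, key) is equivalent to the built-in membership test
# `key in search_list`; it is kept here as a named helper.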
| 47.519231 | 119 | 0.612707 | 1,149 | 9,884 | 4.898172 | 0.089643 | 0.061123 | 0.16489 | 0.044776 | 0.85963 | 0.829069 | 0.796731 | 0.761016 | 0.651919 | 0.582445 | 0 | 0.006513 | 0.25435 | 9,884 | 207 | 120 | 47.748792 | 0.757123 | 0.001113 | 0 | 0.391566 | 1 | 0 | 0.126937 | 0.022288 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090361 | false | 0 | 0.012048 | 0.006024 | 0.198795 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
9c7efdb35b7d9d57beff86db7fbeac13a1f987e8 | 1570 | py | Python | pineapple/advanced.py | pect0ral/python-pineapple | f4e07d0f33620450b799677555ebba1712a23d5b | ["MIT"] | 7 | 2018-09-12T21:29:04.000Z | 2019-12-04T07:16:56.000Z | pineapple/advanced.py | adde88/python-pineapple | f4e07d0f33620450b799677555ebba1712a23d5b | ["MIT"] | null | null | null | pineapple/advanced.py | adde88/python-pineapple | f4e07d0f33620450b799677555ebba1712a23d5b | ["MIT"] | 1 | 2019-06-03T19:45:20.000Z | 2019-06-03T19:45:20.000Z |
from module import Module


class Advanced(Module):
    def __init__(self, api):
        """
        Methods this module should have:
        getResources
        dropCaches
        getUSB
        getFstab
        saveFstab
        getCSS
        saveCSS
        formatSDCard
        formatSDCardStatus
        checkForUpgrade
        downloadUpgrade
        getDownloadStatus
        performUpgrade
        getCurrentVersion
        checkApiToken
        addApiToken
        getApiTokens
        revokeApiToken
        """
        super(Advanced, self).__init__(api, 'Advanced')

    def getResources(self):
        return self.request('getResources')

    def dropCaches(self):
        return self.request('dropCaches')

    def getUSB(self):
        return self.request('getUSB')

    def getFstab(self):
        return self.request('getFstab')

    def setFstab(self, fstab):
        return self.request('saveFstab', {'fstab': fstab})

    def getCSS(self):
        return self.request('getCSS')

    def setCSS(self, css):
        return self.request('saveCSS', {'css': css})

    def formatSDCard(self):
        return self.request('formatSDCard')

    def getFormatSDCardStatus(self):
        return self.request('formatSDCardStatus')

    def checkForUpgrade(self):
        return self.request('checkForUpgrade')

    def downloadUpgrade(self, version):
        return self.request('downloadUpgrade', {'version': version})

    def getUpgradeDownloadStatus(self, checksum):
        return self.request('getDownloadStatus', {'checksum': checksum})

    def performUpgrade(self):
        return self.request('performUpgrade')

    def getFirmwareVersion(self):
        return self.request('getCurrentVersion')
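
# Usage sketch (illustrative only; `get_api()` stands in for however the
# authenticated Pineapple API handle is obtained in this library):
#
#   api = get_api()
#   adv = Advanced(api)
#   resources = adv.getResources()
#   adv.downloadUpgrade(version='1.0.0')  # version string is hypothetical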
| 27.54386 | 72 | 0.689809 | 150 | 1,570 | 7.166667 | 0.286667 | 0.130233 | 0.221395 | 0.195349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206369 | 1,570 | 56 | 73 | 28.035714 | 0.862761 | 0.177707 | 0 | 0 | 0 | 0 | 0.155485 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.46875 | false | 0 | 0.03125 | 0.4375 | 0.96875 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
92d5919e6fdae61967f3de82d85e4b522c5c621a | 12317 | py | Python | bark/world/tests/evaluation/py_evaluator_rss_tests.py | amitsahu7/bark | 7ba226af803127adcf6f4f3fc6e1c49cecde6a33 | ["MIT"] | 174 | 2019-04-03T11:37:37.000Z | 2022-03-27T09:14:38.000Z | bark/world/tests/evaluation/py_evaluator_rss_tests.py | amitsahu7/bark | 7ba226af803127adcf6f4f3fc6e1c49cecde6a33 | ["MIT"] | 192 | 2019-04-05T09:41:40.000Z | 2022-03-03T14:14:28.000Z | bark/world/tests/evaluation/py_evaluator_rss_tests.py | amitsahu7/bark | 7ba226af803127adcf6f4f3fc6e1c49cecde6a33 | ["MIT"] | 55 | 2019-04-05T13:22:46.000Z | 2022-01-21T07:03:41.000Z |
# Copyright (c) 2020 fortiss GmbH
#
# Authors: Julian Bernhard, Klemens Esterle, Patrick Hart and
# Tobias Kessler
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import unittest
import pickle
import numpy as np
from bark.core.world import *
from bark.runtime.commons.parameters import ParameterServer
from bark.runtime.commons.xodr_parser import XodrParser
from bark.core.models.behavior import BehaviorConstantAcceleration
from bark.core.models.execution import ExecutionModelInterpolate
from bark.core.models.dynamic import SingleTrackModel
from bark.core.world import World
from bark.core.world.goal_definition import GoalDefinitionPolygon
from bark.core.world.agent import Agent
from bark.core.world.map import MapInterface
from bark.core.geometry.standard_shapes import CarLimousine
from bark.core.geometry import Point2d, Polygon2d
from bark.core.world.evaluation import EvaluatorRSS
from bark.core.commons import SetVerboseLevel
from bark.runtime.viewer import MPViewer


class TestAgent(Agent):
    """Derived Agent class that wires up default behavior, execution and
    dynamic models, so the tests only have to supply a state and a goal."""

    def __init__(self, init_state, goal_polygon, map_interface, params):
        behavior_model = BehaviorConstantAcceleration(params)
        execution_model = ExecutionModelInterpolate(params)
        dynamic_model = SingleTrackModel(params)
        agent_2d_shape = CarLimousine()
        agent_params = params.AddChild("agent")
        super(TestAgent, self).__init__(init_state, behavior_model, dynamic_model,
                                        execution_model, agent_2d_shape, agent_params,
                                        GoalDefinitionPolygon(goal_polygon), map_interface)
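
# The tests below read the result of EvaluatorRSS.PairwiseDirectionalEvaluate(world),
# which maps each other agent's id to a pair of booleans: index 0 is the
# longitudinal response and index 1 the lateral response (True means safe).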


class EvaluatorRSSTests(unittest.TestCase):

    @staticmethod
    def load_map(map):
        xodr_parser = XodrParser(map)
        map_interface = MapInterface()
        map_interface.SetOpenDriveMap(xodr_parser.map)
        return map_interface

    def test_longitude_highway_safe(self):
        """
        Checking Longitudinal Responses (true means safe)
        """
        params = ParameterServer()
        map = "bark/runtime/tests/data/city_highway_straight.xodr"
        params["EvaluatorRss"]["MapFilename"] = map
        map_interface = EvaluatorRSSTests.load_map(map)
        world = World(params)
        world.SetMap(map_interface)

        goal_polygon = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon = goal_polygon.Translate(Point2d(1.8, 120))

        # The safety distance seems more conservative than in the paper.
        # Hard-coded agent states; BARK state layout: [time, x, y, theta, v]
        ego_state = np.array([0, 1.8, -114.9, np.pi/2, 10])
        other_state = np.array([0, 1.8, -72.95, np.pi/2, 7])
        ego = TestAgent(ego_state, goal_polygon, map_interface, params)
        other = TestAgent(other_state, goal_polygon, map_interface, params)

        world.AddAgent(ego)
        world.AddAgent(other)
        world.UpdateAgentRTree()

        viewer = MPViewer(params=params, use_world_bounds=True)
        viewer.drawWorld(world)
        viewer.show(block=False)

        evaluator_rss = EvaluatorRSS(ego.id, params)
        pw_directional_evaluation_return = evaluator_rss.PairwiseDirectionalEvaluate(
            world)
        self.assertEqual(True, pw_directional_evaluation_return[other.id][0])

    def test_longitude_highway_unsafe(self):
        """
        Checking Longitudinal Responses (true means safe)
        """
        params = ParameterServer()
        map = "bark/runtime/tests/data/city_highway_straight.xodr"
        params["EvaluatorRss"]["MapFilename"] = map
        map_interface = EvaluatorRSSTests.load_map(map)
        world = World(params)
        world.SetMap(map_interface)

        goal_polygon = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon = goal_polygon.Translate(Point2d(1.8, 120))

        # The safety distance seems more conservative than in the paper.
        # Hard coded
        ego_state = np.array([0, 1.8, -60.0, np.pi/2, 10])
        other_state = np.array([0, 1.8, -68.0, np.pi/2, 10])
        ego = TestAgent(ego_state, goal_polygon, map_interface, params)
        other = TestAgent(other_state, goal_polygon, map_interface, params)

        world.AddAgent(ego)
        world.AddAgent(other)
        world.UpdateAgentRTree()

        viewer = MPViewer(params=params, use_world_bounds=True)
        viewer.drawWorld(world)
        viewer.show(block=False)

        evaluator_rss = EvaluatorRSS(ego.id, params)
        pw_directional_evaluation_return = evaluator_rss.PairwiseDirectionalEvaluate(
            world)
        self.assertEqual(False, pw_directional_evaluation_return[other.id][0])

    def test_lateral_highway_safe(self):
        """
        Checking Lateral Responses (true means safe)
        """
        params = ParameterServer()
        map = "bark/runtime/tests/data/city_highway_straight.xodr"
        params["EvaluatorRss"]["MapFilename"] = map
        map_interface = EvaluatorRSSTests.load_map(map)
        world = World(params)
        world.SetMap(map_interface)

        goal_polygon_1 = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon_1 = goal_polygon_1.Translate(Point2d(5.5, 120))
        goal_polygon_2 = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon_2 = goal_polygon_2.Translate(Point2d(1.8, 120))

        # Hard coded
        ego_state = np.array([0, 5.5, 10, np.pi/2, 10])  # straight north
        other_state = np.array([0, 1.8, 0, np.pi/2, 15])  # straight north
        ego = TestAgent(ego_state, goal_polygon_1, map_interface, params)
        other = TestAgent(other_state, goal_polygon_2, map_interface, params)

        world.AddAgent(ego)
        world.AddAgent(other)
        world.UpdateAgentRTree()

        viewer = MPViewer(params=params, use_world_bounds=True)
        viewer.drawWorld(world)
        viewer.show(block=False)

        evaluator_rss = EvaluatorRSS(ego.id, params)
        self.assertEqual(
            True, evaluator_rss.PairwiseDirectionalEvaluate(world)[other.id][1])

    def test_lateral_highway_unsafe(self):
        """
        Checking Lateral Responses (true means safe)
        """
        params = ParameterServer()
        map = "bark/runtime/tests/data/city_highway_straight.xodr"
        params["EvaluatorRss"]["MapFilename"] = map
        map_interface = EvaluatorRSSTests.load_map(map)
        world = World(params)
        world.SetMap(map_interface)

        goal_polygon_1 = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon_1 = goal_polygon_1.Translate(Point2d(5.5, 120))
        goal_polygon_2 = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon_2 = goal_polygon_2.Translate(Point2d(1.8, 120))

        # Hard coded
        ego_state = np.array([0, 5.0, 10, np.pi/2, 10])  # straight north
        other_state = np.array([0, 3.1, 0, np.pi/2, 10])  # straight north
        ego = TestAgent(ego_state, goal_polygon_1, map_interface, params)
        other = TestAgent(other_state, goal_polygon_2, map_interface, params)

        world.AddAgent(ego)
        world.AddAgent(other)
        world.UpdateAgentRTree()

        viewer = MPViewer(params=params, use_world_bounds=True)
        viewer.drawWorld(world)
        viewer.show(block=False)

        evaluator_rss = EvaluatorRSS(ego.id, params)
        self.assertEqual(
            False, evaluator_rss.PairwiseDirectionalEvaluate(world)[other.id][1])

    def test_lateral_merging_safe(self):
        """
        Checking Lateral Responses (true means safe)
        """
        params = ParameterServer()
        map = "bark/runtime/tests/data/DR_DEU_Merging_MT_v01_centered.xodr"
        params["EvaluatorRss"]["MapFilename"] = map
        map_interface = EvaluatorRSSTests.load_map(map)
        world = World(params)
        world.SetMap(map_interface)

        goal_polygon = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon = goal_polygon.Translate(Point2d(-15.4, 108.6))

        # Hard coded
        ego_state = np.array([0, 68.1, 108, -np.pi, 5])
        other_state = np.array([0, 64.1, 105, -np.pi, 5])
        ego = TestAgent(ego_state, goal_polygon, map_interface, params)
        other = TestAgent(other_state, goal_polygon, map_interface, params)

        world.AddAgent(ego)
        world.AddAgent(other)
        world.UpdateAgentRTree()

        viewer = MPViewer(params=params, use_world_bounds=True)
        viewer.drawWorld(world)
        viewer.show(block=False)

        evaluator_rss = EvaluatorRSS(ego.id, params)
        world.AddEvaluator("rss", evaluator_rss)
        pw_directional_evaluation_return = evaluator_rss.PairwiseDirectionalEvaluate(
            world)
        self.assertEqual(True, pw_directional_evaluation_return[other.id][1])

    def test_lateral_merging_unsafe(self):
        """
        Checking Lateral Responses (true means safe)
        """
        params = ParameterServer()
        map = "bark/runtime/tests/data/DR_DEU_Merging_MT_v01_centered.xodr"
        params["EvaluatorRss"]["MapFilename"] = map
        map_interface = EvaluatorRSSTests.load_map(map)
        world = World(params)
        world.SetMap(map_interface)

        goal_polygon = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon = goal_polygon.Translate(Point2d(-15.4, 108.6))

        # Hard coded
        ego_state = np.array([0, 62.8, 107.8, -np.pi+0.2, 5])
        other_state = np.array([0, 67.5, 105.3, -np.pi, 5])
        ego = TestAgent(ego_state, goal_polygon, map_interface, params)
        other = TestAgent(other_state, goal_polygon, map_interface, params)

        world.AddAgent(ego)
        world.AddAgent(other)
        world.UpdateAgentRTree()

        viewer = MPViewer(params=params, use_world_bounds=True)
        viewer.drawWorld(world)
        viewer.show(block=False)

        evaluator_rss = EvaluatorRSS(ego.id, params)
        world.AddEvaluator("rss", evaluator_rss)
        pw_directional_evaluation_return = evaluator_rss.PairwiseDirectionalEvaluate(
            world)
        self.assertEqual(False, pw_directional_evaluation_return[other.id][1])

    def test_relevant_agents(self):
        """
        Checking that only agents relevant for RSS appear in the response map
        """
        params = ParameterServer()
        map = "bark/runtime/tests/data/city_highway_straight.xodr"
        params["EvaluatorRss"]["MapFilename"] = map
        map_interface = EvaluatorRSSTests.load_map(map)
        world = World(params)
        world.SetMap(map_interface)

        goal_polygon_1 = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon_1 = goal_polygon_1.Translate(Point2d(5.5, 120))
        goal_polygon_2 = Polygon2d(
            [0, 0, 0], [Point2d(-1, -1), Point2d(-1, 1), Point2d(1, 1), Point2d(1, -1)])
        goal_polygon_2 = goal_polygon_2.Translate(Point2d(1.8, 120))

        ego_state = np.array([0, 5.5, 10, np.pi/2, 10])
        other_1_state = np.array([0, 1.8, -10, np.pi/2, 15])
        other_2_state = np.array([0, 1.8, -120, np.pi/2, 10])  # far behind the ego
        ego = TestAgent(ego_state, goal_polygon_1, map_interface, params)
        other_1 = TestAgent(other_1_state, goal_polygon_2,
                            map_interface, params)
        other_2 = TestAgent(other_2_state, goal_polygon_2,
                            map_interface, params)

        world.AddAgent(ego)
        world.AddAgent(other_1)
        world.AddAgent(other_2)

        viewer = MPViewer(params=params, use_world_bounds=True)
        viewer.drawWorld(world)
        viewer.show(block=False)

        evaluator_rss = EvaluatorRSS(ego.id, params)
        responses = evaluator_rss.PairwiseEvaluate(world)

        self.assertEqual(1, len(responses))
        self.assertTrue(responses[other_1.id])
        self.assertFalse(other_2.id in responses)


if __name__ == '__main__':
    SetVerboseLevel(4)
    unittest.main()
| 36.226471 | 91 | 0.649509 | 1,509 | 12,317 | 5.113983 | 0.121272 | 0.066995 | 0.04665 | 0.0622 | 0.771803 | 0.764805 | 0.75068 | 0.743164 | 0.743164 | 0.739018 | 0 | 0.043284 | 0.236584 | 12,317 | 339 | 92 | 36.333333 | 0.777411 | 0.063327 | 0 | 0.672727 | 0 | 0 | 0.048176 | 0.032352 | 0 | 0 | 0 | 0 | 0.040909 | 1 | 0.040909 | false | 0 | 0.081818 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
13014bf7c0da62b0608ac85c54dbc29a6872a660 | 52 | py | Python | rules/default.py | sksum/PyParse | 99e41643de788f14073fdd667816d822329c2c2c | ["MIT"] | null | null | null | rules/default.py | sksum/PyParse | 99e41643de788f14073fdd667816d822329c2c2c | ["MIT"] | null | null | null | rules/default.py | sksum/PyParse | 99e41643de788f14073fdd667816d822329c2c2c | ["MIT"] | null | null | null |
def getComponents(html):
    return html.get_text()
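
# Assumption: `html` is a parsed-markup object exposing get_text(), e.g. a
# BeautifulSoup tag; the parser that produces it is not shown in this file.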
| 17.333333 | 26 | 0.730769 | 7 | 52 | 5.285714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 52 | 2 | 27 | 26 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
132825375b7b192bce1811d14b04cef2927e5404 | 151 | py | Python | investment_dashboard/portfolio/admin.py | mjenrungrot/investment-dashboard | 89b296a635ee3c29171f7bf88cc8e49250981637 | ["MIT"] | null | null | null | investment_dashboard/portfolio/admin.py | mjenrungrot/investment-dashboard | 89b296a635ee3c29171f7bf88cc8e49250981637 | ["MIT"] | 4 | 2017-12-19T08:39:10.000Z | 2017-12-20T10:59:38.000Z | investment_dashboard/portfolio/admin.py | mjenrungrot/investment-dashboard | 89b296a635ee3c29171f7bf88cc8e49250981637 | ["MIT"] | null | null | null |
from django.contrib import admin
from .models import PortfolioTransaction
# Register your models here.
admin.site.register(PortfolioTransaction)
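# A bare admin.site.register(...) call attaches Django's default ModelAdmin,
# giving PortfolioTransaction the stock CRUD pages in the admin site.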
| 25.166667 | 42 | 0.81457 | 17 | 151 | 7.235294 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13245 | 151 | 5 | 43 | 30.2 | 0.938931 | 0.172185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
132e00c31975d4c69f01d54bfe78852c1f13f22e | 49 | py | Python | flypy/reinforcement_learning/dqn/atari_pong/atari_pong.py | bobbyscharmann/flypy | 39dce7decd9633e7d90bb4c77472c8c40aeda61c | ["MIT"] | null | null | null | flypy/reinforcement_learning/dqn/atari_pong/atari_pong.py | bobbyscharmann/flypy | 39dce7decd9633e7d90bb4c77472c8c40aeda61c | ["MIT"] | null | null | null | flypy/reinforcement_learning/dqn/atari_pong/atari_pong.py | bobbyscharmann/flypy | 39dce7decd9633e7d90bb4c77472c8c40aeda61c | ["MIT"] | null | null | null |
"""Implementation of Atari Pong in OpenAI Gym"""
| 24.5 | 48 | 0.734694 | 7 | 49 | 5.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 1 | 49 | 49 | 0.857143 | 0.857143 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |