hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
19956195bedb50ce48482ad3f61eff2abddcfc12 | 101 | py | Python | commonlibs/transform_tools/common_transform.py | floatingstarZ/loc_cls_exp | 8b971db671753d3571914aaa760cc13ac47018e8 | [
"Apache-2.0"
] | null | null | null | commonlibs/transform_tools/common_transform.py | floatingstarZ/loc_cls_exp | 8b971db671753d3571914aaa760cc13ac47018e8 | [
"Apache-2.0"
] | null | null | null | commonlibs/transform_tools/common_transform.py | floatingstarZ/loc_cls_exp | 8b971db671753d3571914aaa760cc13ac47018e8 | [
"Apache-2.0"
] | null | null | null | import sys
import os.path as osp
def path2name(fp):
return osp.splitext(osp.split(fp)[1])[0]
| 11.222222 | 44 | 0.683168 | 18 | 101 | 3.833333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036145 | 0.178218 | 101 | 8 | 45 | 12.625 | 0.795181 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
19a44709f82090f52642461c47f560e4f8714947 | 83 | py | Python | blogofile/tests/plugin/plugin_test/blogofile_plugin_test/site_src/_filters/filter_to_override.py | gencer/blogofile | f8503eb5a4e9f7962b799adfdc3afc2dd6fe91c9 | [
"MIT"
] | 74 | 2015-01-12T07:16:55.000Z | 2022-02-02T10:41:55.000Z | blogofile/tests/plugin/plugin_test/blogofile_plugin_test/site_src/_filters/filter_to_override.py | Acidburn0zzz/blogofile | f8503eb5a4e9f7962b799adfdc3afc2dd6fe91c9 | [
"MIT"
] | 8 | 2015-04-14T22:44:14.000Z | 2016-10-05T04:59:26.000Z | blogofile/tests/plugin/plugin_test/blogofile_plugin_test/site_src/_filters/filter_to_override.py | Acidburn0zzz/blogofile | f8503eb5a4e9f7962b799adfdc3afc2dd6fe91c9 | [
"MIT"
] | 31 | 2015-03-07T11:57:37.000Z | 2022-03-08T06:33:25.000Z | def run(content):
return "This is text from the plugin version of the filter."
| 27.666667 | 64 | 0.722892 | 14 | 83 | 4.285714 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204819 | 83 | 2 | 65 | 41.5 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0.614458 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
19aa1da9c4eb074fde59b4b1b10b3c03b528d3ef | 44 | py | Python | weather_forecast_retrieval/data/__init__.py | UofU-Cryosphere/weather_forecast_retrieval | b06fe2e09b49ace9eba55bf424c1b2d1e358858c | [
"CC0-1.0"
] | 6 | 2017-12-20T22:42:24.000Z | 2021-08-07T03:32:27.000Z | weather_forecast_retrieval/data/__init__.py | UofU-Cryosphere/weather_forecast_retrieval | b06fe2e09b49ace9eba55bf424c1b2d1e358858c | [
"CC0-1.0"
] | 26 | 2019-03-07T17:47:13.000Z | 2021-06-25T15:43:27.000Z | weather_forecast_retrieval/data/__init__.py | UofU-Cryosphere/weather_forecast_retrieval | b06fe2e09b49ace9eba55bf424c1b2d1e358858c | [
"CC0-1.0"
] | 3 | 2019-03-08T07:28:59.000Z | 2021-02-12T21:59:12.000Z | import weather_forecast_retrieval.data.hrrr
| 22 | 43 | 0.909091 | 6 | 44 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 44 | 1 | 44 | 44 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
19af2b745d6f911143f60029e0efd8fce26fb87c | 168 | py | Python | home/scripts/ia64-conf.py | qiyancos/Simics-3.0.31 | 9bd52d5abad023ee87a37306382a338abf7885f1 | [
"BSD-4-Clause",
"FSFAP"
] | 1 | 2020-06-15T10:41:18.000Z | 2020-06-15T10:41:18.000Z | home/scripts/ia64-conf.py | qiyancos/Simics-3.0.31 | 9bd52d5abad023ee87a37306382a338abf7885f1 | [
"BSD-4-Clause",
"FSFAP"
] | null | null | null | home/scripts/ia64-conf.py | qiyancos/Simics-3.0.31 | 9bd52d5abad023ee87a37306382a338abf7885f1 | [
"BSD-4-Clause",
"FSFAP"
] | 3 | 2020-08-10T10:25:02.000Z | 2021-09-12T01:12:09.000Z |
## Copyright 2002-2007 Virtutech AB
print
print "ia64-conf.py is obsolete, use ia64_conf.py instead"
print
eval_cli_line("run-python-file ../scripts/ia64_conf.py")
| 18.666667 | 58 | 0.761905 | 28 | 168 | 4.428571 | 0.714286 | 0.193548 | 0.241935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094595 | 0.119048 | 168 | 8 | 59 | 21 | 0.743243 | 0.190476 | 0 | 0.5 | 0 | 0 | 0.679389 | 0.175573 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.75 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
5ff0693e1a228da3d19dd0227dfd8c46c5550d8b | 49 | py | Python | gs_flight/__init__.py | geoscan/gs_flight | feaeb34394cc597e33be3c1310ed32b913e61cde | [
"MIT"
] | null | null | null | gs_flight/__init__.py | geoscan/gs_flight | feaeb34394cc597e33be3c1310ed32b913e61cde | [
"MIT"
] | null | null | null | gs_flight/__init__.py | geoscan/gs_flight | feaeb34394cc597e33be3c1310ed32b913e61cde | [
"MIT"
] | null | null | null | from .core import FlightController, CallbackEvent | 49 | 49 | 0.877551 | 5 | 49 | 8.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.955556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
275f477e5a7e929c47fc18c770cc6cc4b404aa81 | 145 | py | Python | planout_experiments/admin.py | adamhaney/django-planout-experiments | 226ce4375ad09983efc9ed303ca54896ec75303e | [
"MIT"
] | 5 | 2019-02-10T00:14:25.000Z | 2020-07-21T02:21:59.000Z | planout_experiments/admin.py | adamhaney/django-planout-experiments | 226ce4375ad09983efc9ed303ca54896ec75303e | [
"MIT"
] | 2 | 2019-04-13T23:16:07.000Z | 2021-06-10T21:10:16.000Z | planout_experiments/admin.py | adamhaney/django-planout-experiments | 226ce4375ad09983efc9ed303ca54896ec75303e | [
"MIT"
] | 1 | 2019-12-20T06:49:47.000Z | 2019-12-20T06:49:47.000Z | from django.contrib import admin
from .models import Experiment
@admin.register(Experiment)
class ExperimentAdmin(admin.ModelAdmin):
pass
| 16.111111 | 40 | 0.8 | 17 | 145 | 6.823529 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131034 | 145 | 8 | 41 | 18.125 | 0.920635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
275fefeb425a4c020ca421510c2710a27a3085f2 | 390 | py | Python | stockbot/ticker/yahoo/market.py | tanlin2013/stockbot | 08322ed4d847ea9e58b091985cef5c128a694b12 | [
"Apache-2.0"
] | 1 | 2021-07-12T23:55:20.000Z | 2021-07-12T23:55:20.000Z | stockbot/ticker/yahoo/market.py | ajmal017/stockbot-7 | 08322ed4d847ea9e58b091985cef5c128a694b12 | [
"Apache-2.0"
] | null | null | null | stockbot/ticker/yahoo/market.py | ajmal017/stockbot-7 | 08322ed4d847ea9e58b091985cef5c128a694b12 | [
"Apache-2.0"
] | 1 | 2021-07-12T23:55:12.000Z | 2021-07-12T23:55:12.000Z | import json
from yfinance import Ticker
class Market(Ticker):
def __init__(self, ticker_symbol: str) -> None:
self._ticker_symbol = ticker_symbol.upper()
super(Market, self).__init__(self.ticker_symbol)
@property
def ticker_symbol(self) -> str:
return self._ticker_symbol
def __str__(self) -> str:
return json.dumps(self.info, indent=4)
| 22.941176 | 56 | 0.676923 | 50 | 390 | 4.88 | 0.42 | 0.295082 | 0.262295 | 0.163934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003289 | 0.220513 | 390 | 16 | 57 | 24.375 | 0.799342 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.181818 | 0.181818 | 0.727273 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
2770ab632ea621e7b2718484c56eb27af1ca7de3 | 136 | py | Python | typistry/util/exception.py | kyprifog/typistry | aab285d909791106154874eb5331b65fc03849ae | [
"Apache-2.0"
] | 4 | 2021-04-12T17:49:34.000Z | 2022-01-23T19:54:29.000Z | typistry/util/exception.py | kyprifog/typistry | aab285d909791106154874eb5331b65fc03849ae | [
"Apache-2.0"
] | 24 | 2021-04-30T18:40:25.000Z | 2021-05-12T20:52:06.000Z | typistry/util/exception.py | kyprifog/typistry | aab285d909791106154874eb5331b65fc03849ae | [
"Apache-2.0"
] | 3 | 2021-04-12T19:40:43.000Z | 2021-09-07T21:56:36.000Z | import traceback
def message(e: Exception):
return ''.join(traceback.format_exception(etype=type(e), value=e, tb=e.__traceback__))
| 27.2 | 90 | 0.75 | 19 | 136 | 5.105263 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102941 | 136 | 4 | 91 | 34 | 0.795082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
278462787f4cb9c2a38c821a5eac2522ed31080e | 52 | py | Python | geohashlite/__init__.py | bazze-io/python-geohash | a4a741a7f01e423d7bc5d831a3dfcf012462357f | [
"MIT"
] | 3 | 2019-07-30T06:33:25.000Z | 2019-08-08T01:44:24.000Z | geohashlite/__init__.py | bazze-io/python-geohash | a4a741a7f01e423d7bc5d831a3dfcf012462357f | [
"MIT"
] | 1 | 2021-03-29T15:23:34.000Z | 2021-03-29T15:23:34.000Z | geohashlite/__init__.py | bazze-io/python-geohash | a4a741a7f01e423d7bc5d831a3dfcf012462357f | [
"MIT"
] | 1 | 2021-03-25T21:26:24.000Z | 2021-03-25T21:26:24.000Z | from .geohash_shape import *
from .geohash import *
| 17.333333 | 28 | 0.769231 | 7 | 52 | 5.571429 | 0.571429 | 0.564103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 52 | 2 | 29 | 26 | 0.886364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
279930099c5895041f844828cc09a8f180d8d4c1 | 372 | py | Python | tests/__init__.py | alexescalonafernandez/odoo-gulp-unittest | 863d69afa005967a96cf47047e6f720e53c39fb3 | [
"MIT"
] | null | null | null | tests/__init__.py | alexescalonafernandez/odoo-gulp-unittest | 863d69afa005967a96cf47047e6f720e53c39fb3 | [
"MIT"
] | null | null | null | tests/__init__.py | alexescalonafernandez/odoo-gulp-unittest | 863d69afa005967a96cf47047e6f720e53c39fb3 | [
"MIT"
] | null | null | null |
# -*- coding: utf-8 -*-
###############################################################################
# License, author and contributors information in: #
# __openerp__.py file at the root folder of this module. #
###############################################################################
from . import test_canary
| 41.333333 | 79 | 0.311828 | 23 | 372 | 4.826087 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003425 | 0.215054 | 372 | 8 | 80 | 46.5 | 0.376712 | 0.419355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd8b4c6827f89002a7d2e4216f8a281e88050009 | 8,947 | py | Python | main.py | NoHeartPen/markdown2anki | 4e6834eb8aa6ef67a940c90ea0865e4a11367dda | [
"MIT"
] | 1 | 2021-10-05T06:09:28.000Z | 2021-10-05T06:09:28.000Z | main.py | NoHeartPen/markdown2anki | 4e6834eb8aa6ef67a940c90ea0865e4a11367dda | [
"MIT"
] | null | null | null | main.py | NoHeartPen/markdown2anki | 4e6834eb8aa6ef67a940c90ea0865e4a11367dda | [
"MIT"
] | null | null | null | import re
import utils
import os
from pathlib import Path
import pyperclip
import shutil
import pyautogui
# Upload the images in the notes to the image host
cmd = "python -m markdown_img "
os.system(cmd)
path = os.getcwd() + '\markdown_img'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.md"))
for file in FileList:
oldname = os.path.abspath(file)
newname = os.path.abspath(file).replace("_image","")
newname = os.path.abspath(newname).replace("\markdown_img","")
shutil.move(oldname,newname)
# Convert the notes to the txt format supported by Anki
path = os.getcwd()
p = Path(path)
FileList=list(p.glob("**/*.md"))
for file in FileList:
with open(file, 'r', encoding='UTF-8') as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if '|20' in line:
break
elif '本文由 [简悦 SimpRead]'in line:
continue
elif line != "\n":
line = line.replace('\n','\\')
card = line
cards.append(card)
elif line == "\n":
card = line
cards.append(card)
else:
continue
with open("{}.txt".format(file), 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Adjust file syntax: replace Markdown syntax with HTML syntax
path = os.getcwd()
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8') as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if '![' in line:
Picture = line
Picture = re.sub(r'\!\[\]\((.*?)\)',utils.replace_picture,Picture)
card =Picture+"\n"
cards.append(card)
elif '](' in line:
Link = line
Link = re.sub(r'\[(.*?)\)',utils.replace_link,Link)
card = Link+"\n"
cards.append(card)
elif '`' in line:
Code = line
Code = re.sub(r'\`(.*?)\`',utils.replace_code,Code)
card = Code+"\n"
cards.append(card)
elif '['in line:
Hiragana = line
Hiragana = re.sub(r'\[(.*?)\]',utils.replace_hiragana,Hiragana)
card = Hiragana+"\n"
cards.append(card)
elif '**' in line:
Cloze = line
Cloze = re.sub(r'\*\*(.*?)\*\*', utils.replace_cloze, Cloze)
card = Cloze+"\n"
cards.append(card)
elif '~~' in line:
Cloze_SimpRead = line
Cloze_SimpRead = re.sub(r'~~(.*?)~~',utils.replace_cloze_SimpRead,Cloze_SimpRead)
card = Cloze_SimpRead + "\n"
cards.append(card)
else:
continue
filename=os.path.basename(file)
with open('./Anki/'+filename, 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Handle lines that contain both the [] and ** card-making syntaxes
path = os.getcwd() + '\Anki'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8')as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if '**' in line:
Cloze = line
Cloze = re.sub(r'\*\*(.*?)\*\*', utils.replace_cloze, Cloze)
card = Cloze+"\n"
cards.append(card)
elif '~~' in line:
Cloze_SimpRead = line
Cloze_SimpRead = re.sub(r'~~(.*?)~~',utils.replace_cloze_SimpRead,Cloze_SimpRead)
card = Cloze_SimpRead + "\n"
cards.append(card)
elif "{{c1::" in line:  # Note: do not casually delete this branch, otherwise lines containing only one card-making syntax will be dropped outright!
card = line + "\n"
cards.append(card)
else:
continue
filename=os.path.basename(file)
with open('./Anki/'+filename, 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Handle lines that contain all three card-making syntaxes ([], **, ~~);
# these usually come from new findings during much later review, and notes are rarely split this finely when first written
path = os.getcwd() + '\Anki'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8')as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if '~~' in line:
Cloze_SimpRead = line
Cloze_SimpRead = re.sub(r'~~(.*?)~~',utils.replace_cloze_SimpRead,Cloze_SimpRead)
card = Cloze_SimpRead + "\n"
cards.append(card)
elif "{{c1::" in line:  # Note: do not casually delete this branch, otherwise lines containing only one card-making syntax will be dropped outright!
card = line + "\n"
cards.append(card)
else:
continue
filename=os.path.basename(file)
with open('./Anki/'+filename, 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Adjust the number of cloze deletions
path = os.getcwd() + '\Anki'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8') as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if "{{c1::"in line:
colze = line
colze_number = colze.count("{{c1::")
number = 1
while number < colze_number:
number = number+1
colze=re.sub(r'\d{1}',str(number),colze,1)
number = int (number)
card = colze+"\n"
cards.append(card)
else:
continue
filename=os.path.basename(file)
with open('./Anki/'+filename, 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Pad the Anki field count; since I am used to 4 fields, at most 3 backslashes are used as field separators
path = os.getcwd() + '\Anki'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8') as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
number_backslash = line.count("\\")
if number_backslash == 1 :
line = line.replace("\n","")
card = line+"\\ "+"\\ "+"\n"
cards.append(card)
elif number_backslash == 2:
line = line.replace("\n","")
card = line+"\\ "+"\n"
cards.append(card)
elif number_backslash == 3:
card = line
cards.append(card)
else:
continue
filename=os.path.basename(file)
os.unlink('./Anki/'+filename)
with open('./Anki/'+filename.replace(".md",""), 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Fix: the furigana annotation part was also being cloze-deleted multiple times
path = os.getcwd() + '\Anki'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8') as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if "[{{c2"in line:
colze = line
colze=re.sub(r'\[\{\{c\d{1}',"[{{c1",colze)
card = colze+"\n"
cards.append(card)
elif "{{" in line:
card = line + "\n"
cards.append(card)
else:
continue
filename=os.path.basename(file)
with open('./Anki/'+filename, 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Fix the furigana kana offset issue
path = os.getcwd() + '\Anki'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8') as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if '[{{' in line:
hurigana = line
hurigana = re.sub(r'\w\[\{\{', utils.space_hurigana, hurigana)
card = hurigana+"\n"
cards.append(card)
elif "{{" in line:
card = line + "\n"
cards.append(card)
else:
continue
filename=os.path.basename(file)
with open('./Anki/'+filename, 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
# Change the field separator
path = os.getcwd() + '\Anki'+'\\'
p = Path(path)
FileList=list(p.glob("**/*.txt"))
for file in FileList:
with open(file, 'r',encoding='UTF-8') as note_file:
cards = []
lines = note_file.readlines()
for line in lines:
if '\\' in line:
hurigana = line
hurigana = re.sub(r'\\', r'\t', hurigana)
card = hurigana+"\n"
cards.append(card)
else:
continue
filename=os.path.basename(file)
with open('./Anki/'+filename, 'w', encoding='UTF-8') as txt_file:
txt_file.writelines(cards)
pyperclip.copy(path)
os.system("D:\\03Program\\Anki\\anki.exe")
pyautogui.hotkey('y')
| 32.772894 | 97 | 0.504974 | 1,010 | 8,947 | 4.40297 | 0.109901 | 0.054419 | 0.074207 | 0.068361 | 0.780076 | 0.754891 | 0.741848 | 0.705644 | 0.701147 | 0.701147 | 0 | 0.006577 | 0.337208 | 8,947 | 272 | 98 | 32.893382 | 0.743339 | 0.032525 | 0 | 0.735537 | 0 | 0 | 0.076184 | 0.003358 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028926 | 0 | 0.028926 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fd95f76e7191f4af792df3582510a435c1ae9144 | 260 | py | Python | emanager/imports.py | panditsomnath10016git/eManager | 692e3024b3604ac16e0f0f9dc6223ae8f63c6fb0 | [
"MIT"
] | null | null | null | emanager/imports.py | panditsomnath10016git/eManager | 692e3024b3604ac16e0f0f9dc6223ae8f63c6fb0 | [
"MIT"
] | null | null | null | emanager/imports.py | panditsomnath10016git/eManager | 692e3024b3604ac16e0f0f9dc6223ae8f63c6fb0 | [
"MIT"
] | 1 | 2021-06-15T15:50:00.000Z | 2021-06-15T15:50:00.000Z | # Using only for testing
from emanager.constants import *
from emanager.accounting.accounts import *
from emanager.accounting.bank_core import *
from emanager.sell.customer import *
from emanager.hr.worker import *
from emanager.utils.directories import *
| 21.666667 | 43 | 0.803846 | 34 | 260 | 6.117647 | 0.529412 | 0.346154 | 0.432692 | 0.269231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126923 | 260 | 11 | 44 | 23.636364 | 0.9163 | 0.084615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fdadbb76aac90396f866c4d0abe9e594d4a8d06a | 45,574 | py | Python | src/v5.1/resources/swagger_client/api/applicants_api.py | xmarcosx/edfi-notebook | 0564ebdf1d0f45a9d25056e7e61369f0a837534d | [
"Apache-2.0"
] | 2 | 2021-04-27T17:18:17.000Z | 2021-04-27T19:14:39.000Z | src/v5.1/resources/swagger_client/api/applicants_api.py | xmarcosx/edfi-notebook | 0564ebdf1d0f45a9d25056e7e61369f0a837534d | [
"Apache-2.0"
] | null | null | null | src/v5.1/resources/swagger_client/api/applicants_api.py | xmarcosx/edfi-notebook | 0564ebdf1d0f45a9d25056e7e61369f0a837534d | [
"Apache-2.0"
] | 1 | 2022-01-06T09:43:11.000Z | 2022-01-06T09:43:11.000Z | # coding: utf-8
"""
Ed-Fi Operational Data Store API
The Ed-Fi ODS / API enables applications to read and write education data stored in an Ed-Fi ODS through a secure REST interface. *** > *Note: Consumers of ODS / API information should sanitize all data for display and storage. The ODS / API provides reasonable safeguards against cross-site scripting attacks and other malicious content, but the platform does not and cannot guarantee that the data it contains is free of all potentially harmful content.* *** # noqa: E501
OpenAPI spec version: 3
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class ApplicantsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def delete_applicant_by_id(self, id, **kwargs): # noqa: E501
"""Deletes an existing resource using the resource identifier. # noqa: E501
The DELETE operation is used to delete an existing resource by identifier. If the resource doesn't exist, an error will result (the resource will not be found). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_applicant_by_id(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: A resource identifier that uniquely identifies the resource. (required)
:param str if_match: The ETag header value used to prevent the DELETE from removing a resource modified by another consumer.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_applicant_by_id_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.delete_applicant_by_id_with_http_info(id, **kwargs) # noqa: E501
return data
def delete_applicant_by_id_with_http_info(self, id, **kwargs): # noqa: E501
"""Deletes an existing resource using the resource identifier. # noqa: E501
The DELETE operation is used to delete an existing resource by identifier. If the resource doesn't exist, an error will result (the resource will not be found). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_applicant_by_id_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: A resource identifier that uniquely identifies the resource. (required)
:param str if_match: The ETag header value used to prevent the DELETE from removing a resource modified by another consumer.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'if_match'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_applicant_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in params or
params['id'] is None): # noqa: E501
raise ValueError("Missing the required parameter `id` when calling `delete_applicant_by_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
if 'if_match' in params:
header_params['If-Match'] = params['if_match'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['oauth2_client_credentials'] # noqa: E501
return self.api_client.call_api(
'/tpdm/applicants/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)

    def deletes_applicants(self, **kwargs):  # noqa: E501
        """Retrieves deleted resources based on change version.  # noqa: E501

        The DELETES operation is used to retrieve deleted resources.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.deletes_applicants(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int offset: Indicates how many items should be skipped before returning results.
        :param int limit: Indicates the maximum number of items that should be returned in the results.
        :param int min_change_version: Used in synchronization to set sequence minimum ChangeVersion
        :param int max_change_version: Used in synchronization to set sequence maximum ChangeVersion
        :return: list[TpdmApplicant]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.deletes_applicants_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.deletes_applicants_with_http_info(**kwargs)  # noqa: E501
            return data

    def deletes_applicants_with_http_info(self, **kwargs):  # noqa: E501
        """Retrieves deleted resources based on change version.  # noqa: E501

        The DELETES operation is used to retrieve deleted resources.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.deletes_applicants_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int offset: Indicates how many items should be skipped before returning results.
        :param int limit: Indicates the maximum number of items that should be returned in the results.
        :param int min_change_version: Used in synchronization to set sequence minimum ChangeVersion
        :param int max_change_version: Used in synchronization to set sequence maximum ChangeVersion
        :return: list[TpdmApplicant]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['offset', 'limit', 'min_change_version', 'max_change_version']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method deletes_applicants" % key
                )
            params[key] = val
        del params['kwargs']

        if self.api_client.client_side_validation and ('limit' in params and params['limit'] > 500):  # noqa: E501
            raise ValueError("Invalid value for parameter `limit` when calling `deletes_applicants`, must be a value less than or equal to `500`")  # noqa: E501
        if self.api_client.client_side_validation and ('limit' in params and params['limit'] < 0):  # noqa: E501
            raise ValueError("Invalid value for parameter `limit` when calling `deletes_applicants`, must be a value greater than or equal to `0`")  # noqa: E501
        collection_formats = {}

        path_params = {}

        query_params = []
        if 'offset' in params:
            query_params.append(('offset', params['offset']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'min_change_version' in params:
            query_params.append(('minChangeVersion', params['min_change_version']))  # noqa: E501
        if 'max_change_version' in params:
            query_params.append(('maxChangeVersion', params['max_change_version']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['oauth2_client_credentials']  # noqa: E501

        return self.api_client.call_api(
            '/tpdm/applicants/deletes', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='list[TpdmApplicant]',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_applicants(self, **kwargs):  # noqa: E501
        """Retrieves specific resources using the resource's property values (using the \"Get\" pattern).  # noqa: E501

        This GET operation provides access to resources using the \"Get\" search pattern. The values of any properties of the resource that are specified will be used to return all matching results (if it exists).  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_applicants(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int offset: Indicates how many items should be skipped before returning results.
        :param int limit: Indicates the maximum number of items that should be returned in the results.
        :param int min_change_version: Used in synchronization to set sequence minimum ChangeVersion
        :param int max_change_version: Used in synchronization to set sequence maximum ChangeVersion
        :param bool total_count: Indicates if the total number of items available should be returned in the 'Total-Count' header of the response. If set to false, 'Total-Count' header will not be provided.
        :param str applicant_identifier: Identifier assigned to a person making formal application for an open staff position.
        :param str person_id: A unique alphanumeric code assigned to a person.
        :param str source_system_descriptor: This descriptor defines the originating record source system for the person.
        :param str teacher_candidate_identifier: A unique alphanumeric code assigned to a teacher candidate.
        :param str citizenship_status_descriptor: An indicator of whether or not the person is a U.S. citizen.
        :param str gender_descriptor: The gender with which a person associates.
        :param str sex_descriptor: A person's gender.
        :param date birth_date: The month, day, and year on which an individual was born.
        :param bool economic_disadvantaged: An indication of inadequate financial condition of an individual's family, as determined by family income, number of family members/dependents, participation in public assistance programs, and/or other characteristics considered relevant by federal, state, and local policy.
        :param bool first_generation_student: Indicator of whether individual is a first generation college student.
        :param str first_name: A name given to an individual at birth, baptism, or during another naming ceremony, or through legal change.
        :param str generation_code_suffix: An appendage, if any, used to denote an individual's generation in his family (e.g., Jr., Sr., III).
        :param bool hispanic_latino_ethnicity: An indication that the individual traces his or her origin or descent to Mexico, Puerto Rico, Cuba, Central, and South America, and other Spanish cultures, regardless of race. The term, \"Spanish origin,\" can be used in addition to \"Hispanic or Latino\".
        :param str id:
        :param str last_surname: The name borne in common by members of a family.
        :param str login_id: The login ID for the user; used for security access control interface.
        :param str maiden_name: The person's maiden name.
        :param str middle_name: A secondary name given to an individual at birth, baptism, or during another naming ceremony.
        :param str personal_title_prefix: A prefix used to denote the title, degree, position, or seniority of the person.
        :return: list[TpdmApplicant]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.get_applicants_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.get_applicants_with_http_info(**kwargs)  # noqa: E501
            return data

    def get_applicants_with_http_info(self, **kwargs):  # noqa: E501
        """Retrieves specific resources using the resource's property values (using the \"Get\" pattern).  # noqa: E501

        This GET operation provides access to resources using the \"Get\" search pattern. The values of any properties of the resource that are specified will be used to return all matching results (if it exists).  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_applicants_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param int offset: Indicates how many items should be skipped before returning results.
        :param int limit: Indicates the maximum number of items that should be returned in the results.
        :param int min_change_version: Used in synchronization to set sequence minimum ChangeVersion
        :param int max_change_version: Used in synchronization to set sequence maximum ChangeVersion
        :param bool total_count: Indicates if the total number of items available should be returned in the 'Total-Count' header of the response. If set to false, 'Total-Count' header will not be provided.
        :param str applicant_identifier: Identifier assigned to a person making formal application for an open staff position.
        :param str person_id: A unique alphanumeric code assigned to a person.
        :param str source_system_descriptor: This descriptor defines the originating record source system for the person.
        :param str teacher_candidate_identifier: A unique alphanumeric code assigned to a teacher candidate.
        :param str citizenship_status_descriptor: An indicator of whether or not the person is a U.S. citizen.
        :param str gender_descriptor: The gender with which a person associates.
        :param str sex_descriptor: A person's gender.
        :param date birth_date: The month, day, and year on which an individual was born.
        :param bool economic_disadvantaged: An indication of inadequate financial condition of an individual's family, as determined by family income, number of family members/dependents, participation in public assistance programs, and/or other characteristics considered relevant by federal, state, and local policy.
        :param bool first_generation_student: Indicator of whether individual is a first generation college student.
        :param str first_name: A name given to an individual at birth, baptism, or during another naming ceremony, or through legal change.
        :param str generation_code_suffix: An appendage, if any, used to denote an individual's generation in his family (e.g., Jr., Sr., III).
        :param bool hispanic_latino_ethnicity: An indication that the individual traces his or her origin or descent to Mexico, Puerto Rico, Cuba, Central, and South America, and other Spanish cultures, regardless of race. The term, \"Spanish origin,\" can be used in addition to \"Hispanic or Latino\".
        :param str id:
        :param str last_surname: The name borne in common by members of a family.
        :param str login_id: The login ID for the user; used for security access control interface.
        :param str maiden_name: The person's maiden name.
        :param str middle_name: A secondary name given to an individual at birth, baptism, or during another naming ceremony.
        :param str personal_title_prefix: A prefix used to denote the title, degree, position, or seniority of the person.
        :return: list[TpdmApplicant]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['offset', 'limit', 'min_change_version', 'max_change_version', 'total_count', 'applicant_identifier', 'person_id', 'source_system_descriptor', 'teacher_candidate_identifier', 'citizenship_status_descriptor', 'gender_descriptor', 'sex_descriptor', 'birth_date', 'economic_disadvantaged', 'first_generation_student', 'first_name', 'generation_code_suffix', 'hispanic_latino_ethnicity', 'id', 'last_surname', 'login_id', 'maiden_name', 'middle_name', 'personal_title_prefix']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_applicants" % key
                )
            params[key] = val
        del params['kwargs']

        if self.api_client.client_side_validation and ('limit' in params and params['limit'] > 500):  # noqa: E501
            raise ValueError("Invalid value for parameter `limit` when calling `get_applicants`, must be a value less than or equal to `500`")  # noqa: E501
        if self.api_client.client_side_validation and ('limit' in params and params['limit'] < 0):  # noqa: E501
            raise ValueError("Invalid value for parameter `limit` when calling `get_applicants`, must be a value greater than or equal to `0`")  # noqa: E501
        if self.api_client.client_side_validation and ('applicant_identifier' in params and
                                                       len(params['applicant_identifier']) > 32):
            raise ValueError("Invalid value for parameter `applicant_identifier` when calling `get_applicants`, length must be less than or equal to `32`")  # noqa: E501
        if self.api_client.client_side_validation and ('person_id' in params and
                                                       len(params['person_id']) > 32):
            raise ValueError("Invalid value for parameter `person_id` when calling `get_applicants`, length must be less than or equal to `32`")  # noqa: E501
        if self.api_client.client_side_validation and ('source_system_descriptor' in params and
                                                       len(params['source_system_descriptor']) > 306):
            raise ValueError("Invalid value for parameter `source_system_descriptor` when calling `get_applicants`, length must be less than or equal to `306`")  # noqa: E501
        if self.api_client.client_side_validation and ('teacher_candidate_identifier' in params and
                                                       len(params['teacher_candidate_identifier']) > 32):
            raise ValueError("Invalid value for parameter `teacher_candidate_identifier` when calling `get_applicants`, length must be less than or equal to `32`")  # noqa: E501
        if self.api_client.client_side_validation and ('citizenship_status_descriptor' in params and
                                                       len(params['citizenship_status_descriptor']) > 306):
            raise ValueError("Invalid value for parameter `citizenship_status_descriptor` when calling `get_applicants`, length must be less than or equal to `306`")  # noqa: E501
        if self.api_client.client_side_validation and ('gender_descriptor' in params and
                                                       len(params['gender_descriptor']) > 306):
            raise ValueError("Invalid value for parameter `gender_descriptor` when calling `get_applicants`, length must be less than or equal to `306`")  # noqa: E501
        if self.api_client.client_side_validation and ('sex_descriptor' in params and
                                                       len(params['sex_descriptor']) > 306):
            raise ValueError("Invalid value for parameter `sex_descriptor` when calling `get_applicants`, length must be less than or equal to `306`")  # noqa: E501
        if self.api_client.client_side_validation and ('first_name' in params and
                                                       len(params['first_name']) > 75):
            raise ValueError("Invalid value for parameter `first_name` when calling `get_applicants`, length must be less than or equal to `75`")  # noqa: E501
        if self.api_client.client_side_validation and ('generation_code_suffix' in params and
                                                       len(params['generation_code_suffix']) > 10):
            raise ValueError("Invalid value for parameter `generation_code_suffix` when calling `get_applicants`, length must be less than or equal to `10`")  # noqa: E501
        if self.api_client.client_side_validation and ('last_surname' in params and
                                                       len(params['last_surname']) > 75):
            raise ValueError("Invalid value for parameter `last_surname` when calling `get_applicants`, length must be less than or equal to `75`")  # noqa: E501
        if self.api_client.client_side_validation and ('login_id' in params and
                                                       len(params['login_id']) > 60):
            raise ValueError("Invalid value for parameter `login_id` when calling `get_applicants`, length must be less than or equal to `60`")  # noqa: E501
        if self.api_client.client_side_validation and ('maiden_name' in params and
                                                       len(params['maiden_name']) > 75):
            raise ValueError("Invalid value for parameter `maiden_name` when calling `get_applicants`, length must be less than or equal to `75`")  # noqa: E501
        if self.api_client.client_side_validation and ('middle_name' in params and
                                                       len(params['middle_name']) > 75):
            raise ValueError("Invalid value for parameter `middle_name` when calling `get_applicants`, length must be less than or equal to `75`")  # noqa: E501
        if self.api_client.client_side_validation and ('personal_title_prefix' in params and
                                                       len(params['personal_title_prefix']) > 30):
            raise ValueError("Invalid value for parameter `personal_title_prefix` when calling `get_applicants`, length must be less than or equal to `30`")  # noqa: E501
        collection_formats = {}

        path_params = {}

        query_params = []
        if 'offset' in params:
            query_params.append(('offset', params['offset']))  # noqa: E501
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'min_change_version' in params:
            query_params.append(('minChangeVersion', params['min_change_version']))  # noqa: E501
        if 'max_change_version' in params:
            query_params.append(('maxChangeVersion', params['max_change_version']))  # noqa: E501
        if 'total_count' in params:
            query_params.append(('totalCount', params['total_count']))  # noqa: E501
        if 'applicant_identifier' in params:
            query_params.append(('applicantIdentifier', params['applicant_identifier']))  # noqa: E501
        if 'person_id' in params:
            query_params.append(('personId', params['person_id']))  # noqa: E501
        if 'source_system_descriptor' in params:
            query_params.append(('sourceSystemDescriptor', params['source_system_descriptor']))  # noqa: E501
        if 'teacher_candidate_identifier' in params:
            query_params.append(('teacherCandidateIdentifier', params['teacher_candidate_identifier']))  # noqa: E501
        if 'citizenship_status_descriptor' in params:
            query_params.append(('citizenshipStatusDescriptor', params['citizenship_status_descriptor']))  # noqa: E501
        if 'gender_descriptor' in params:
            query_params.append(('genderDescriptor', params['gender_descriptor']))  # noqa: E501
        if 'sex_descriptor' in params:
            query_params.append(('sexDescriptor', params['sex_descriptor']))  # noqa: E501
        if 'birth_date' in params:
            query_params.append(('birthDate', params['birth_date']))  # noqa: E501
        if 'economic_disadvantaged' in params:
            query_params.append(('economicDisadvantaged', params['economic_disadvantaged']))  # noqa: E501
        if 'first_generation_student' in params:
            query_params.append(('firstGenerationStudent', params['first_generation_student']))  # noqa: E501
        if 'first_name' in params:
            query_params.append(('firstName', params['first_name']))  # noqa: E501
        if 'generation_code_suffix' in params:
            query_params.append(('generationCodeSuffix', params['generation_code_suffix']))  # noqa: E501
        if 'hispanic_latino_ethnicity' in params:
            query_params.append(('hispanicLatinoEthnicity', params['hispanic_latino_ethnicity']))  # noqa: E501
        if 'id' in params:
            query_params.append(('id', params['id']))  # noqa: E501
        if 'last_surname' in params:
            query_params.append(('lastSurname', params['last_surname']))  # noqa: E501
        if 'login_id' in params:
            query_params.append(('loginId', params['login_id']))  # noqa: E501
        if 'maiden_name' in params:
            query_params.append(('maidenName', params['maiden_name']))  # noqa: E501
        if 'middle_name' in params:
            query_params.append(('middleName', params['middle_name']))  # noqa: E501
        if 'personal_title_prefix' in params:
            query_params.append(('personalTitlePrefix', params['personal_title_prefix']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['oauth2_client_credentials']  # noqa: E501

        return self.api_client.call_api(
            '/tpdm/applicants', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='list[TpdmApplicant]',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def get_applicants_by_id(self, id, **kwargs):  # noqa: E501
        """Retrieves a specific resource using the resource's identifier (using the \"Get By Id\" pattern).  # noqa: E501

        This GET operation retrieves a resource by the specified resource identifier.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_applicants_by_id(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: A resource identifier that uniquely identifies the resource. (required)
        :param str if_none_match: The previously returned ETag header value, used here to prevent the unnecessary data transfer of an unchanged resource.
        :return: TpdmApplicant
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.get_applicants_by_id_with_http_info(id, **kwargs)  # noqa: E501
        else:
            (data) = self.get_applicants_by_id_with_http_info(id, **kwargs)  # noqa: E501
            return data

    def get_applicants_by_id_with_http_info(self, id, **kwargs):  # noqa: E501
        """Retrieves a specific resource using the resource's identifier (using the \"Get By Id\" pattern).  # noqa: E501

        This GET operation retrieves a resource by the specified resource identifier.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_applicants_by_id_with_http_info(id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: A resource identifier that uniquely identifies the resource. (required)
        :param str if_none_match: The previously returned ETag header value, used here to prevent the unnecessary data transfer of an unchanged resource.
        :return: TpdmApplicant
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['id', 'if_none_match']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_applicants_by_id" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in params or
                                                       params['id'] is None):  # noqa: E501
            raise ValueError("Missing the required parameter `id` when calling `get_applicants_by_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}
        if 'if_none_match' in params:
            header_params['If-None-Match'] = params['if_none_match']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['oauth2_client_credentials']  # noqa: E501

        return self.api_client.call_api(
            '/tpdm/applicants/{id}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='TpdmApplicant',  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def post_applicant(self, applicant, **kwargs):  # noqa: E501
        """Creates or updates resources based on the natural key values of the supplied resource.  # noqa: E501

        The POST operation can be used to create or update resources. In database terms, this is often referred to as an \"upsert\" operation (insert + update). Clients should NOT include the resource \"id\" in the JSON body because it will result in an error (you must use a PUT operation to update a resource by \"id\"). The web service will identify whether the resource already exists based on the natural key values provided, and update or create the resource appropriately.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.post_applicant(applicant, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param TpdmApplicant applicant: The JSON representation of the \"applicant\" resource to be created or updated. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.post_applicant_with_http_info(applicant, **kwargs)  # noqa: E501
        else:
            (data) = self.post_applicant_with_http_info(applicant, **kwargs)  # noqa: E501
            return data

    def post_applicant_with_http_info(self, applicant, **kwargs):  # noqa: E501
        """Creates or updates resources based on the natural key values of the supplied resource.  # noqa: E501

        The POST operation can be used to create or update resources. In database terms, this is often referred to as an \"upsert\" operation (insert + update). Clients should NOT include the resource \"id\" in the JSON body because it will result in an error (you must use a PUT operation to update a resource by \"id\"). The web service will identify whether the resource already exists based on the natural key values provided, and update or create the resource appropriately.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.post_applicant_with_http_info(applicant, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param TpdmApplicant applicant: The JSON representation of the \"applicant\" resource to be created or updated. (required)
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['applicant']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method post_applicant" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'applicant' is set
        if self.api_client.client_side_validation and ('applicant' not in params or
                                                       params['applicant'] is None):  # noqa: E501
            raise ValueError("Missing the required parameter `applicant` when calling `post_applicant`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'applicant' in params:
            body_params = params['applicant']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['oauth2_client_credentials']  # noqa: E501

        return self.api_client.call_api(
            '/tpdm/applicants', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)

    def put_applicant(self, id, applicant, **kwargs):  # noqa: E501
        """Updates or creates a resource based on the resource identifier.  # noqa: E501

        The PUT operation is used to update or create a resource by identifier. If the resource doesn't exist, the resource will be created using that identifier. Additionally, natural key values cannot be changed using this operation, and will not be modified in the database. If the resource \"id\" is provided in the JSON body, it will be ignored as well.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.put_applicant(id, applicant, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: A resource identifier that uniquely identifies the resource. (required)
        :param TpdmApplicant applicant: The JSON representation of the \"applicant\" resource to be created or updated. (required)
        :param str if_match: The ETag header value used to prevent the PUT from updating a resource modified by another consumer.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.put_applicant_with_http_info(id, applicant, **kwargs)  # noqa: E501
        else:
            (data) = self.put_applicant_with_http_info(id, applicant, **kwargs)  # noqa: E501
            return data

    def put_applicant_with_http_info(self, id, applicant, **kwargs):  # noqa: E501
        """Updates or creates a resource based on the resource identifier.  # noqa: E501

        The PUT operation is used to update or create a resource by identifier. If the resource doesn't exist, the resource will be created using that identifier. Additionally, natural key values cannot be changed using this operation, and will not be modified in the database. If the resource \"id\" is provided in the JSON body, it will be ignored as well.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.put_applicant_with_http_info(id, applicant, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str id: A resource identifier that uniquely identifies the resource. (required)
        :param TpdmApplicant applicant: The JSON representation of the \"applicant\" resource to be created or updated. (required)
        :param str if_match: The ETag header value used to prevent the PUT from updating a resource modified by another consumer.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['id', 'applicant', 'if_match']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method put_applicant" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if self.api_client.client_side_validation and ('id' not in params or
                                                       params['id'] is None):  # noqa: E501
            raise ValueError("Missing the required parameter `id` when calling `put_applicant`")  # noqa: E501
        # verify the required parameter 'applicant' is set
        if self.api_client.client_side_validation and ('applicant' not in params or
                                                       params['applicant'] is None):  # noqa: E501
            raise ValueError("Missing the required parameter `applicant` when calling `put_applicant`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']  # noqa: E501

        query_params = []

        header_params = {}
        if 'if_match' in params:
            header_params['If-Match'] = params['if_match']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        if 'applicant' in params:
            body_params = params['applicant']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['oauth2_client_credentials']  # noqa: E501

        return self.api_client.call_api(
            '/tpdm/applicants/{id}', 'PUT',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=params.get('async_req'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
e31b945416d3ec4b968c63bc8c9a120bc68eb7a8 | 847 | py | Python | JobSpiders/main.py | Genesis-m/JobSpiders | 26427b1bd1bc3c8989e5619ec84864d416419686 | [
"Apache-2.0"
] | 188 | 2018-07-23T14:16:26.000Z | 2022-03-30T13:53:04.000Z | JobSpiders/main.py | lemonbiz/JobSpiders | 1b35557eb391ad4e2b73de433c78da1a095ff8bc | [
"Apache-2.0"
] | 18 | 2018-08-15T10:28:41.000Z | 2022-03-11T23:46:56.000Z | JobSpiders/main.py | lemonbiz/JobSpiders | 1b35557eb391ad4e2b73de433c78da1a095ff8bc | [
"Apache-2.0"
] | 69 | 2018-08-22T06:36:45.000Z | 2022-03-30T13:53:05.000Z | from scrapy.cmdline import execute
import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
# execute(["scrapy", "crawl", "job51"])
# os.system("scrapy crawl lagou2")
os.system("scrapy crawl job51")
# os.system("scrapy crawl zhaopin_java")
#
# os.system("scrapy crawl job_ai")
# os.system("scrapy crawl zhaopin_ai")
#
# os.system("scrapy crawl job_arithmetic")
# os.system("scrapy crawl zhaopin_arithmetic")
#
# os.system("scrapy crawl job_bigdata")
# os.system("scrapy crawl zhaopin_bigdata")
#
# os.system("scrapy crawl job_cplus")
# os.system("scrapy crawl zhaopin_cplus")
#
# os.system("scrapy crawl job_python")
# os.system("scrapy crawl zhaopin_python")
#
# os.system("scrapy crawl job_go")
# os.system("scrapy crawl zhaopin_go")
# execute(["scrapy", "crawl", "lagou"])
# execute(["scrapy", "crawl", "lagou2"])
import optical
import radio
x:int = 2147483647
print(25*4)
print(-25* -4)
print(-250*4)
print(250* -60)
print(x*-2) #overflow
print(-x*-2) #overflow
print(x*2) #overflow
print(-x*2) #overflow
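The `#overflow` comments above mark products that leave the signed 32-bit range of ChocoPy integers (x = 2147483647 is INT32_MAX). A quick dependency-free check of which products overflow (helper name is illustrative):

```python
INT32_MAX = 2147483647
INT32_MIN = -2147483648

def overflows_int32(a, b):
    # True when the mathematical product a*b cannot be represented
    # as a signed 32-bit integer.
    p = a * b
    return p < INT32_MIN or p > INT32_MAX

x = 2147483647
# Mirrors the cases above: 25*4 fits, x*-2 and x*2 do not.
print([overflows_int32(a, b) for a, b in [(25, 4), (x, -2), (-x, -2), (x, 2)]])
# → [False, True, True, True]
```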
import tensorflow as tf
"""
Implement the augmentation to the image and labels (contains categories and mask)

Note: The accepted image passed to the functions is scaled to the range [0, 1].
Image and labels should have type tf.float32.
"""


def randomBrightness(image, cat=None, mask=None, max_delta=0.2):
    """Randomly change the brightness of the image

    Args:
        image (tf.Tensor): tensor image object
        cat (tf.Tensor, optional): Category label tensor. Defaults to None.
        mask (tf.Tensor, optional): Mask label tensor. Defaults to None.
        max_delta (float, optional): adjust factor, range [0, 1]. Defaults to 0.2.

    Returns:
        tf.Tensor: Adjusted image tensor
        tf.Tensor: Category label tensor (same as input)
        tf.Tensor: Mask label tensor (same as input)
    """
    image = tf.image.random_brightness(image, max_delta)
    return image, cat, mask


def randomContrast(image, cat=None, mask=None, lower=0.4, upper=0.6):
    """Randomly change the contrast of the image

    Args:
        image (tf.Tensor): tensor image object
        cat (tf.Tensor, optional): Category label tensor. Defaults to None.
        mask (tf.Tensor, optional): Mask label tensor. Defaults to None.
        lower (float, optional): lower bound of contrast factor. Defaults to 0.4.
        upper (float, optional): upper bound of contrast factor. Defaults to 0.6.

    Returns:
        tf.Tensor: Adjusted image tensor
        tf.Tensor: Category label tensor (same as input)
        tf.Tensor: Mask label tensor (same as input)
    """
    image = tf.image.random_contrast(image, lower, upper)
    return image, cat, mask


def flipHorizontal(image, cat, mask):
    """Flip the image and label horizontally

    Note: this function flips the category and mask as well

    Args:
        image (tf.Tensor): tensor image object
        cat (tf.Tensor): Category label tensor
        mask (tf.Tensor): Mask label tensor

    Returns:
        tf.Tensor: Adjusted image tensor
        tf.Tensor: Category label tensor
        tf.Tensor: Mask label tensor
    """
    image = tf.image.flip_left_right(image)
    cat = tf.image.flip_left_right(cat)
    mask = tf.image.flip_left_right(mask)
    return image, cat, mask


def randomHUE(image, cat=None, mask=None, max_delta=0.2):
    """Randomly change the HUE of the image

    Args:
        image (tf.Tensor): tensor image object
        cat (tf.Tensor, optional): Category label tensor. Defaults to None.
        mask (tf.Tensor, optional): Mask label tensor. Defaults to None.
        max_delta (float, optional): Adjust factor. Defaults to 0.2.

    Returns:
        tf.Tensor: Adjusted image tensor
        tf.Tensor: Category label tensor (same as input)
        tf.Tensor: Mask label tensor (same as input)
    """
    image = tf.image.random_hue(image, max_delta)
    return image, cat, mask


def randomSaturation(image, cat=None, mask=None, lower=0.4, upper=0.6):
    """Randomly change the saturation of the image

    Args:
        image (tf.Tensor): tensor image object
        cat (tf.Tensor, optional): Category label tensor. Defaults to None.
        mask (tf.Tensor, optional): Mask label tensor. Defaults to None.
        lower (float, optional): lower bound of saturation factor. Defaults to 0.4.
        upper (float, optional): upper bound of saturation factor. Defaults to 0.6.

    Returns:
        tf.Tensor: Adjusted image tensor
        tf.Tensor: Category label tensor (same as input)
        tf.Tensor: Mask label tensor (same as input)
    """
    image = tf.image.random_saturation(image, lower, upper)
    return image, cat, mask
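All of the helpers above share the same `(image, cat, mask) -> (image, cat, mask)` signature, so they compose naturally into a random augmentation chain. A minimal sketch of that composition, with pure-Python stand-ins for the TF ops so the chaining logic is runnable without TensorFlow (the stand-in names are illustrative, not part of the module):

```python
import random

def compose_augmentations(augmentations, image, cat=None, mask=None, p=0.5):
    # Apply each (image, cat, mask) -> (image, cat, mask) step with probability p.
    for aug in augmentations:
        if random.random() < p:
            image, cat, mask = aug(image, cat, mask)
    return image, cat, mask

# Stand-ins mirroring the module's signature.
def brighten(image, cat=None, mask=None):
    return image + 0.1, cat, mask

def negate(image, cat=None, mask=None):
    return -image, cat, mask

out, _, _ = compose_augmentations([brighten, negate], 1.0, p=1.0)
print(out)  # → -1.1
```

With the real functions, the same chain would be mapped over a `tf.data.Dataset` of `(image, cat, mask)` tuples.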
from .analyze_novel import analyze
from .generate_docs import docgen
from .asm import ASM
from .detection.detection import ODetection
from .classifier.xception_detector import Xceptionv1
from .classifier.inception_detector import Inception
from . import PacketID
from . import RobotCommunicator
from . import RobotCommunicatorEssentials
from . import RobotCommunicatorDebugging
from shinden.scrapping import *
9a4c48bf42c628086fc5a046ff650552efc39bb3 | 4,260 | py | Python | models/demo/RFBs.py | KaiOtter/pytorch_DSOD_variants | f29088b13b24f24e2cf20e9a2dc800cd6dbde145 | [
"MIT"
] | 1 | 2018-11-01T07:31:35.000Z | 2018-11-01T07:31:35.000Z | models/demo/RFBs.py | KaiOtter/pytorch_DSOD_variants | f29088b13b24f24e2cf20e9a2dc800cd6dbde145 | [
"MIT"
] | 1 | 2018-12-21T12:42:03.000Z | 2018-12-21T12:42:03.000Z | models/demo/RFBs.py | Kaidy-Boom/pytorch_DSOD_variants | f29088b13b24f24e2cf20e9a2dc800cd6dbde145 | [
"MIT"
] | 1 | 2019-10-23T00:38:37.000Z | 2019-10-23T00:38:37.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
import torch as t
from torch.autograd import Variable
class FPN(nn.Module):
    def __init__(self, lat_inC, top_inC, outC, mode='nearest'):
        super(FPN, self).__init__()
        assert mode in ['nearest', 'bilinear']
        self.latlayer = nn.Conv2d(lat_inC, outC, 1, 1, padding=0)
        self.toplayer = nn.Conv2d(top_inC, outC, 1, 1, padding=0)
        self.up_mode = mode
        # self.bottom_smooth = nn.Sequential(
        #     nn.Conv2d(outC, outC, 3, 1, padding=1),
        #     nn.BatchNorm2d(outC),
        #     nn.ReLU(),
        # )
        self.bottom_smooth = nn.Conv2d(outC, outC, 3, 1, padding=1)

    def forward(self, bottom, top):
        x = self.latlayer(bottom)
        y = self.toplayer(top)
        up_add = F.upsample(y, scale_factor=2, mode=self.up_mode) + x
        out = self.bottom_smooth(up_add)
        return out


class FPN2(nn.Module):
    def __init__(self, lat_inC, top_inC, outC, mode='nearest'):
        super(FPN2, self).__init__()
        assert mode in ['nearest', 'bilinear']
        self.latlayer = nn.Sequential(
            nn.Conv2d(lat_inC, outC, 1, 1, padding=0, bias=False),
            nn.BatchNorm2d(outC),
            nn.ReLU()
        )
        self.toplayer = nn.Sequential(
            nn.Conv2d(top_inC, outC, 1, 1, padding=0, bias=False),
            nn.BatchNorm2d(outC),
            nn.ReLU()
        )
        self.up_mode = mode
        # remove bias was tried in FPN2_b. It was worse.
        self.bottom_smooth = nn.Conv2d(outC, outC, 3, 1, padding=1)

    def forward(self, bottom, top):
        x = self.latlayer(bottom)
        y = self.toplayer(top)
        up_add = F.upsample(y, scale_factor=2, mode=self.up_mode) + x
        out = self.bottom_smooth(up_add)
        return out


class Deconv(nn.Module):
    def __init__(self, lat_inC, top_inC, outC):
        super(Deconv, self).__init__()
        self.latlayer = nn.Sequential(
            nn.Conv2d(lat_inC, outC, 3, 1, padding=1, bias=False),
            nn.BatchNorm2d(outC),
            nn.ReLU(inplace=True),
            nn.Conv2d(outC, outC, 3, 1, padding=1, bias=False),
            nn.BatchNorm2d(outC),
        )
        self.toplayer = nn.Sequential(
            nn.ConvTranspose2d(top_inC, outC, 2, 2),
            nn.Conv2d(outC, outC, 3, 1, padding=1, bias=False),
            nn.BatchNorm2d(outC),
        )

    def forward(self, bottom, top):
        x = self.latlayer(bottom)
        y = self.toplayer(top)
        out = F.relu(x * y)
        return out


class Deconv_s(nn.Module):
    def __init__(self, lat_inC, top_inC, outC):
        super(Deconv_s, self).__init__()
        self.latlayer = nn.Sequential(
            nn.Conv2d(lat_inC, outC, 3, 1, padding=1, bias=False),
            nn.BatchNorm2d(outC),
            nn.LeakyReLU(inplace=True),
            # nn.Conv2d(outC, outC, 3, 1, padding=1, bias=False),
            # nn.BatchNorm2d(outC),
        )
        self.toplayer = nn.Sequential(
            nn.ConvTranspose2d(top_inC, outC, 2, 2),
            # nn.Conv2d(outC, outC, 3, 1, padding=1, bias=False),
            # nn.BatchNorm2d(outC),
        )

    def forward(self, bottom, top):
        x = self.latlayer(bottom)
        y = self.toplayer(top)
        out = F.relu(x * y)
        return out


class RON(nn.Module):
    def __init__(self, lat_inC, top_inC, outC):
        super(RON, self).__init__()
        self.latlayer = nn.Conv2d(lat_inC, outC, 3, 1, padding=1)
        self.toplayer = nn.ConvTranspose2d(top_inC, outC, 2, 2)

    def forward(self, bottom, top):
        x = self.latlayer(bottom)
        y = self.toplayer(top)
        return x + y


class RON2(nn.Module):
    def __init__(self, lat_inC, top_inC, outC):
        super(RON2, self).__init__()
        self.latlayer = nn.Sequential(
            nn.Conv2d(lat_inC, outC, 3, 1, padding=1, bias=False),
            nn.BatchNorm2d(outC),
            nn.ReLU()
        )
        self.toplayer = nn.Sequential(
            nn.ConvTranspose2d(top_inC, outC, 2, 2),
            # nn.BatchNorm2d(outC),
            # nn.ReLU()
        )

    def forward(self, bottom, top):
        x = self.latlayer(bottom)
        y = self.toplayer(top)
        return x + y
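Every merge block above follows the same pattern: project the lateral (bottom) feature, bring the top feature up to the same spatial size, then combine (sum for FPN/RON, product for Deconv). A dependency-free sketch of the nearest-neighbor upsample-and-add step on plain 2-D lists (illustrative only; the real modules operate on 4-D torch tensors and learned convolutions):

```python
def upsample2x_nearest(grid):
    # Nearest-neighbor 2x upsample of a 2-D grid (list of lists):
    # each value is duplicated along both axes.
    out = []
    for row in grid:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def fpn_merge(lateral, top):
    # lateral: HxW grid; top: (H/2)x(W/2) grid.
    # Upsample top to HxW, then add elementwise (the FPN/RON combine rule).
    up = upsample2x_nearest(top)
    return [[a + b for a, b in zip(lr, ur)] for lr, ur in zip(lateral, up)]

lat = [[1, 1], [1, 1]]
top = [[2]]
print(fpn_merge(lat, top))  # → [[3, 3], [3, 3]]
```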
from .similarity import TFW2V
from pathlib import Path
from tests.simulations import BaseSimulationTest
from src.epjson_handler import EPJSON
test_dir = Path(__file__).parent.parent.parent
hot_water_objects = {
    "HVACTemplate:Plant:Boiler": {
        "Main Boiler": {
            "boiler_type": "HotWaterBoiler",
            "capacity": "Autosize",
            "efficiency": 0.8,
            "fuel_type": "NaturalGas",
            "priority": "1"
        }
    },
    "HVACTemplate:Plant:HotWaterLoop": {
        "Hot Water Loop": {
            "hot_water_design_setpoint": 82,
            "hot_water_plant_operation_scheme_type": "Default",
            "hot_water_pump_configuration": "ConstantFlow",
            "hot_water_pump_rated_head": 179352,
            "hot_water_reset_outdoor_dry_bulb_high": 10,
            "hot_water_reset_outdoor_dry_bulb_low": -6.7,
            "hot_water_setpoint_at_outdoor_dry_bulb_high": 65.6,
            "hot_water_setpoint_at_outdoor_dry_bulb_low": 82.2,
            "hot_water_setpoint_reset_type": "OutdoorAirTemperatureReset",
            "pump_control_type": "Intermittent"
        }
    }
}
schedule_objects = {
    "Schedule:Compact": {
        "Always0.8": {
            "data": [
                {"field": "Through: 12/31"},
                {"field": "For: AllDays"},
                {"field": "Until: 24:00"},
                {"field": 0.8}
            ],
            "schedule_type_limits_name": "Any Number"
        },
        "Always6.8": {
            "data": [
                {"field": "Through: 12/31"},
                {"field": "For: AllDays"},
                {"field": "Until: 24:00"},
                {"field": 6.8}
            ],
            "schedule_type_limits_name": "Any Number"
        },
        "Always12.5": {
            "data": [
                {"field": "Through: 12/31"},
                {"field": "For: AllDays"},
                {"field": "Until: 24:00"},
                {"field": 12.5}
            ],
            "schedule_type_limits_name": "Any Number"
        },
        "Always15.5": {
            "data": [
                {"field": "Through: 12/31"},
                {"field": "For: AllDays"},
                {"field": "Until: 24:00"},
                {"field": 15.5}
            ],
            "schedule_type_limits_name": "Any Number"
        },
        "Always62": {
            "data": [
                {"field": "Through: 12/31"},
                {"field": "For: AllDays"},
                {"field": "Until: 24:00"},
                {"field": 62.0}
            ],
            "schedule_type_limits_name": "Any Number"
        },
        "Always29": {
            "data": [
                {"field": "Through: 12/31"},
                {"field": "For: AllDays"},
                {"field": "Until: 24:00"},
                {"field": 29.0}
            ],
            "schedule_type_limits_name": "Any Number"
        }
    }
}
class TestSimulationsSystemUnitaryHeatPump(BaseSimulationTest):
    def setUp(self):
        self.ej = EPJSON()
        base_idf_file_path = test_dir.joinpath('..', 'simulation', 'ExampleFiles',
                                               'HVACTemplate-5ZoneUnitaryHeatPump.idf')
        base_copy_file_path = self._copy_to_test_directory(base_idf_file_path)
        # read in base file, then edit inputs for alternate tests
        self.base_epjson = self.get_epjson_object_from_idf_file(base_copy_file_path)
        self.base_epjson.pop('Output:Variable')
        return

    def teardown(self):
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:system_availability_schedule_name")
    def test_system_availability_schedule_name(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'system_availability_schedule_name'] = 'OCCUPY-1'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'night_cycle_control'] = 'CycleOnAny'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'OCCUPY-1',
            epjson_output['Fan:OnOff']['Heat Pump 1 Supply Fan']['availability_schedule_name'])
        self.assertEqual(
            'OCCUPY-1',
            epjson_output['AvailabilityManager:NightCycle']['Heat Pump 1 Availability']['fan_schedule_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:cooling_supply_air_flow_rate")
    def test_cooling_supply_air_flow_rate(self):
        # todo_eo: rated_air_flow_rate in cooling coil is set by this input, but is not in other systems.
        #  Is that correct? Usually just the Sizing:System and AirLoop objects are set.
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'cooling_supply_air_flow_rate'] = 1.01
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            1.01,
            epjson_output['Sizing:System']['Heat Pump 1 Sizing System']['cooling_supply_air_flow_rate'])
        self.assertEqual(
            1.01,
            epjson_output['AirLoopHVAC:UnitaryHeatPump:AirToAir']['Heat Pump 1 Heat Pump'][
                'cooling_supply_air_flow_rate'])
        self.assertEqual(
            1.01,
            epjson_output['Coil:Cooling:DX:SingleSpeed']['Heat Pump 1 Cooling Coil'][
                'rated_air_flow_rate'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:heating_supply_air_flow_rate")
    def test_heating_supply_air_flow_rate(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'heating_supply_air_flow_rate'] = 1.01
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            1.01,
            epjson_output['Sizing:System']['Heat Pump 1 Sizing System']['heating_supply_air_flow_rate'])
        self.assertEqual(
            1.01,
            epjson_output['AirLoopHVAC:UnitaryHeatPump:AirToAir']['Heat Pump 1 Heat Pump'][
                'heating_supply_air_flow_rate'])
        self.assertEqual(
            1.01,
            epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Cooling Coil'][
                'rated_air_flow_rate'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:no_load_supply_air_flow_rate")
    def test_no_load_supply_air_flow_rate(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'no_load_supply_air_flow_rate'] = 1.01
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            1.01,
            epjson_output['AirLoopHVAC:UnitaryHeatPump:AirToAir']['Heat Pump 1 Heat Pump'][
                'no_load_supply_air_flow_rate'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "supply_fan_operating_mode_schedule_name")
    def test_supply_fan_operating_mode_schedule_name(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_fan_operating_mode_schedule_name'] = 'OCCUPY-1'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'OCCUPY-1',
            epjson_output['AirLoopHVAC:UnitaryHeatPump:AirToAir']['Heat Pump 1 Heat Pump'][
                'supply_air_fan_operating_mode_schedule_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "supply_fan_placement_blow_through")
    def test_supply_fan_placement_blow_through(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_fan_placement_blow_through'] = 'BlowThrough'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'BlowThrough',
            epjson_output['AirLoopHVAC:UnitaryHeatPump:AirToAir']['Heat Pump 1 Heat Pump'][
                'fan_placement'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "supply_fan_placement_draw_through")
    def test_supply_fan_placement_draw_through(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_fan_placement'] = 'DrawThrough'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'DrawThrough',
            epjson_output['AirLoopHVAC:UnitaryHeatPump:AirToAir']['Heat Pump 1 Heat Pump'][
                'fan_placement'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:supply_fan_total_efficiency")
    def test_supply_fan_total_efficiency(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_fan_total_efficiency'] = 0.65
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            0.65,
            epjson_output['Fan:OnOff']['Heat Pump 1 Supply Fan']['fan_total_efficiency'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:supply_fan_delta_pressure")
    def test_supply_fan_delta_pressure(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_fan_delta_pressure'] = 500
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            500,
            epjson_output['Fan:OnOff']['Heat Pump 1 Supply Fan']['pressure_rise'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:supply_fan_motor_efficiency")
    def test_supply_fan_motor_efficiency(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_fan_motor_efficiency'] = 0.8
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            0.8,
            epjson_output['Fan:OnOff']['Heat Pump 1 Supply Fan']['motor_efficiency'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "supply_fan_motor_in_air_stream_fraction")
    def test_supply_fan_motor_in_air_stream_fraction(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_fan_motor_in_air_stream_fraction'] = 0.9
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            0.9,
            epjson_output['Fan:OnOff']['Heat Pump 1 Supply Fan']['motor_in_airstream_fraction'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "cooling_coil_availability_schedule_name")
    def test_cooling_coil_availability_schedule_name(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'cooling_coil_availability_schedule_name'] = 'OCCUPY-1'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'OCCUPY-1',
            epjson_output['Coil:Cooling:DX:SingleSpeed']['Heat Pump 1 Cooling Coil']['availability_schedule_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "cooling_design_supply_air_temperature")
    def test_cooling_design_supply_air_temperature(self):
        # todo_eo: why is the SetpointManager:SingleZone:Cooling object not affected by this input
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'cooling_design_supply_air_temperature'] = 12.9
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            12.9,
            epjson_output['Sizing:System']['Heat Pump 1 Sizing System'][
                'central_cooling_design_supply_air_temperature'])
        self.assertEqual(
            12.9,
            epjson_output['SetpointManager:SingleZone:Cooling']['Heat Pump 1 Cooling Supply Air Temp Manager'][
                'minimum_supply_air_temperature'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "cooling_coil_gross_rated_total_capacity")
    def test_cooling_coil_gross_rated_total_capacity(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'cooling_coil_gross_rated_total_capacity'] = 2000
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            2000,
            epjson_output['Coil:Cooling:DX:SingleSpeed']['Heat Pump 1 Cooling Coil'][
'gross_rated_total_cooling_capacity'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"cooling_coil_gross_rated_sensible_heat_ratio")
def test_cooling_coil_gross_rated_sensible_heat_ratio(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'cooling_coil_gross_rated_sensible_heat_ratio'] = 0.75
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
0.75,
epjson_output['Coil:Cooling:DX:SingleSpeed']['Heat Pump 1 Cooling Coil'][
'gross_rated_sensible_heat_ratio'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"cooling_coil_gross_rated_cop")
def test_cooling_coil_gross_rated_cop(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'cooling_coil_gross_rated_cop'] = 3.1
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
3.1,
epjson_output['Coil:Cooling:DX:SingleSpeed']['Heat Pump 1 Cooling Coil'][
'gross_rated_cooling_cop'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_heating_coil_availability_schedule_name")
def test_heat_pump_heating_coil_availability_schedule_name(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_heating_coil_availability_schedule_name'] = 'OCCUPY-1'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'OCCUPY-1',
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil']['availability_schedule_name'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heating_design_supply_air_temperature")
def test_heating_design_supply_air_temperature(self):
# todo_eo: why is the SetpointManager:SingleZone:Cooling object not affected by this input
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heating_design_supply_air_temperature'] = 48
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
48,
epjson_output['Sizing:System']['Heat Pump 1 Sizing System'][
'central_heating_design_supply_air_temperature'])
self.assertEqual(
48,
epjson_output['SetpointManager:SingleZone:Cooling']['Heat Pump 1 Cooling Supply Air Temp Manager'][
'maximum_supply_air_temperature'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_heating_coil_rated_cop")
def test_heat_pump_heating_coil_rated_cop(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_heating_coil_rated_cop'] = 2.9
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
2.9,
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'gross_rated_heating_cop'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_heating_minimum_outdoor_dry_bulb_temperature")
def test_heat_pump_heating_minimum_outdoor_dry_bulb_temperature(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_heating_minimum_outdoor_dry_bulb_temperature'] = -7.5
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
-7.5,
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'minimum_outdoor_dry_bulb_temperature_for_compressor_operation'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_defrost_maximum_outdoor_dry_bulb_temperature")
def test_heat_pump_defrost_maximum_outdoor_dry_bulb_temperature(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_defrost_maximum_outdoor_dry_bulb_temperature'] = 4.5
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
4.5,
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'maximum_outdoor_dry_bulb_temperature_for_defrost_operation'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_defrost_strategy_reverse_cycle")
def test_heat_pump_defrost_strategy_reverse_cycle(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_defrost_strategy'] = 'ReverseCycle'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'ReverseCycle',
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'defrost_strategy'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_defrost_strategy_resistive")
def test_heat_pump_defrost_strategy_resistive(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_defrost_strategy'] = 'Resistive'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'Resistive',
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'defrost_strategy'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_defrost_control_timed")
def test_heat_pump_defrost_control_timed(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_defrost_control'] = 'Timed'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'Timed',
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'defrost_control'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_defrost_control_on_demand")
def test_heat_pump_defrost_control_on_demand(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_defrost_control'] = 'OnDemand'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'OnDemand',
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'defrost_control'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"heat_pump_defrost_time_period_fraction")
def test_heat_pump_defrost_time_period_fraction(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'heat_pump_defrost_time_period_fraction'] = 0.06
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
0.06,
epjson_output['Coil:Heating:DX:SingleSpeed']['Heat Pump 1 Heating Coil'][
'defrost_time_period_fraction'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_heating_coil_type_electric")
def test_supplemental_heating_coil_type_electric(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_type'] = 'Electric'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertIsNotNone(epjson_output['Coil:Heating:Electric'].get('Heat Pump 1 Supp Heating Coil'))
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_heating_coil_type_gas")
def test_supplemental_heating_coil_type_gas(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_type'] = 'Gas'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertIsNotNone(epjson_output['Coil:Heating:Fuel'].get('Heat Pump 1 Supp Heating Coil'))
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_heating_coil_type_hot_water")
def test_supplemental_heating_coil_type_hot_water(self):
self.ej.merge_epjson(
super_dictionary=self.base_epjson,
object_dictionary=hot_water_objects)
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_type'] = 'HotWater'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertIsNotNone(epjson_output['Coil:Heating:Water'].get('Heat Pump 1 Supp Heating Coil'))
return
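The three supplemental_heating_coil_type tests above each verify a one-to-one mapping from the template enum value to the expanded coil object type. As a sketch, with the mapping reconstructed from the assertions above (the dict itself is illustrative, not an object in the codebase):

```python
# Mapping reconstructed from the three supplemental coil assertions above;
# illustrative only, not part of the expansion code under test.
SUPP_COIL_OBJECT_TYPES = {
    'Electric': 'Coil:Heating:Electric',
    'Gas': 'Coil:Heating:Fuel',
    'HotWater': 'Coil:Heating:Water'}


def expected_supp_coil_object(coil_type):
    # Return the expanded EnergyPlus object type for a template enum value.
    return SUPP_COIL_OBJECT_TYPES[coil_type]
```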
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_heating_coil_availability_schedule_name")
def test_supplemental_heating_coil_availability_schedule_name(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_availability_schedule_name'] = 'OCCUPY-1'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'OCCUPY-1',
epjson_output['Coil:Heating:Electric']['Heat Pump 1 Supp Heating Coil']['availability_schedule_name'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_heating_coil_capacity")
def test_supplemental_heating_coil_capacity(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_capacity'] = 1000
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
1000,
epjson_output['Coil:Heating:Electric']['Heat Pump 1 Supp Heating Coil']['nominal_capacity'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_heating_coil_maximum_outdoor_dry_bulb_temperature")
def test_supplemental_heating_coil_maximum_outdoor_dry_bulb_temperature(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_maximum_outdoor_dry_bulb_temperature'] = 20
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
20,
epjson_output['AirLoopHVAC:UnitaryHeatPump:AirToAir']['Heat Pump 1 Heat Pump'][
'maximum_outdoor_dry_bulb_temperature_for_supplemental_heater_operation'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_gas_heating_coil_efficiency")
def test_supplemental_gas_heating_coil_efficiency(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_type'] = 'Gas'
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_gas_heating_coil_efficiency'] = 0.77
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
0.77,
epjson_output['Coil:Heating:Fuel']['Heat Pump 1 Supp Heating Coil']['burner_efficiency'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"outdoor_air_flow_rates")
def test_outdoor_air_flow_rates(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'maximum_outdoor_air_flow_rate'] = 0.66
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'minimum_outdoor_air_flow_rate'] = 0.1
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
0.66,
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['maximum_outdoor_air_flow_rate'])
self.assertEqual(
0.1,
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['minimum_outdoor_air_flow_rate'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:minimum_outdoor_air_schedule_name")
def test_minimum_outdoor_air_schedule_name(self):
self.ej.merge_epjson(
super_dictionary=self.base_epjson,
object_dictionary=schedule_objects)
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'minimum_outdoor_air_schedule_name'] = 'Always0.8'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'Always0.8',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller'][
'minimum_outdoor_air_schedule_name'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"supplemental_gas_heating_coil_parasitic_electric_load")
def test_supplemental_gas_heating_coil_parasitic_electric_load(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_heating_coil_type'] = 'Gas'
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'supplemental_gas_heating_coil_parasitic_electric_load'] = 1
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
1,
epjson_output['Coil:Heating:Fuel']['Heat Pump 1 Supp Heating Coil']['parasitic_electric_load'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_type_no_economizer")
def test_economizer_type_no_economizer(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'NoEconomizer'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'NoEconomizer',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_type_fixed_dry_bulb")
def test_economizer_type_fixed_dry_bulb(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'FixedDryBulb'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'FixedDryBulb',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_type_fixed_enthalpy")
def test_economizer_type_fixed_enthalpy(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'FixedEnthalpy'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'FixedEnthalpy',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_type_differential_dry_bulb")
def test_economizer_type_differential_dry_bulb(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'DifferentialDryBulb'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'DifferentialDryBulb',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_type_differential_enthalpy")
def test_economizer_type_differential_enthalpy(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'DifferentialEnthalpy'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'DifferentialEnthalpy',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"economizer_type_fixed_dew_point_and_dry_bulb")
def test_economizer_type_fixed_dew_point_and_dry_bulb(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'FixedDewPointAndDryBulb'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'FixedDewPointAndDryBulb',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"economizer_type_electronic_enthalpy")
def test_economizer_type_electronic_enthalpy(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'ElectronicEnthalpy'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'ElectronicEnthalpy',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_type_"
"differential_dry_bulb_and_enthalpy")
def test_economizer_type_differential_dry_bulb_and_enthalpy(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'DifferentialDryBulbAndEnthalpy'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'DifferentialDryBulbAndEnthalpy',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_control_type'])
return
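The eight economizer_type tests above differ only in the enum value written and asserted; the same holds for the economizer_lockout and night_cycle_control groups below. A sketch of how such groups could be collapsed with `unittest.subTest` (this assumes nothing about how `BaseSimulationTest` drives the simulation, so that step is stubbed with the expected output shape):

```python
import unittest

# Enum values exercised one-per-test above.
ECONOMIZER_TYPES = [
    'NoEconomizer', 'FixedDryBulb', 'FixedEnthalpy', 'DifferentialDryBulb',
    'DifferentialEnthalpy', 'FixedDewPointAndDryBulb', 'ElectronicEnthalpy',
    'DifferentialDryBulbAndEnthalpy']


class EconomizerTypeSketch(unittest.TestCase):
    def test_economizer_types(self):
        for economizer_type in ECONOMIZER_TYPES:
            with self.subTest(economizer_type=economizer_type):
                # In the real suite this step would set the template field,
                # rebuild the IDF, run perform_full_comparison, and re-read
                # the output epJSON; here it is stubbed with the shape the
                # assertions above expect.
                epjson_output = {'Controller:OutdoorAir': {
                    'Heat Pump 1 OA Controller': {
                        'economizer_control_type': economizer_type}}}
                self.assertEqual(
                    economizer_type,
                    epjson_output['Controller:OutdoorAir'][
                        'Heat Pump 1 OA Controller']['economizer_control_type'])
```

One subTest failure reports its parameter value without stopping the remaining cases, which keeps per-enum diagnostics while removing the duplicated method bodies.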
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_lockout_no_lockout")
def test_economizer_lockout_no_lockout(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_lockout'] = 'NoLockout'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'NoLockout',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['lockout_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_lockout_lockout_with_heating")
def test_economizer_lockout_lockout_with_heating(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_lockout'] = 'LockoutWithHeating'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'LockoutWithHeating',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['lockout_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"economizer_lockout_lockout_with_compressor")
def test_economizer_lockout_lockout_with_compressor(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_lockout'] = 'LockoutWithCompressor'
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
'LockoutWithCompressor',
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['lockout_type'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_temperature_limits")
def test_economizer_temperature_limits(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_type'] = 'FixedDryBulb'
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_maximum_limit_dry_bulb_temperature'] = 18
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_minimum_limit_dry_bulb_temperature'] = 5
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
18,
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller'][
'economizer_maximum_limit_dry_bulb_temperature'])
self.assertEqual(
5,
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller'][
'economizer_minimum_limit_dry_bulb_temperature'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:economizer_maximum_limit_enthalpy")
def test_economizer_maximum_limit_enthalpy(self):
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_maximum_limit_enthalpy'] = 100
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
100,
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller']['economizer_maximum_limit_enthalpy'])
return
@BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
"economizer_maximum_limit_dewpoint_temperature")
def test_economizer_maximum_limit_dewpoint_temperature(self):
# todo_eo: Notes say this limit is applied regardless of the economizer type, but EO only
# applies the value when a specific economizer type is selected. Determine which behavior is preferred.
# self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
# 'economizer_type'] = 'FixedDewPointAndDryBulb'
self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
'economizer_maximum_limit_dewpoint_temperature'] = 20
base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
self.perform_full_comparison(base_idf_file_path=base_file_path)
epjson_output = self.ej._get_json_file(test_dir.joinpath(
'..', 'simulation', 'test', 'test_input_epjson.epJSON'))
self.assertEqual(
20,
epjson_output['Controller:OutdoorAir']['Heat Pump 1 OA Controller'][
'economizer_maximum_limit_dewpoint_temperature'])
return
    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:supply_plenum_name")
    def test_supply_plenum_name(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'supply_plenum_name'] = 'PLENUM-1'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'PLENUM-1',
            epjson_output['AirLoopHVAC:SupplyPlenum']['Heat Pump 1 Supply Plenum']['zone_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:return_plenum_name")
    def test_return_plenum_name(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_plenum_name'] = 'PLENUM-1'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'PLENUM-1',
            epjson_output['AirLoopHVAC:ReturnPlenum']['Heat Pump 1 Return Plenum']['zone_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:night_cycle_control_stay_off")
    def test_night_cycle_control_stay_off(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'night_cycle_control'] = 'StayOff'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'StayOff',
            epjson_output['AvailabilityManager:NightCycle']['Heat Pump 1 Availability']['control_type'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:night_cycle_control_cycle_on_any")
    def test_night_cycle_control_cycle_on_any(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'night_cycle_control'] = 'CycleOnAny'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'CycleOnAny',
            epjson_output['AvailabilityManager:NightCycle']['Heat Pump 1 Availability']['control_type'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:"
                                              "night_cycle_control_cycle_on_control_zone")
    def test_night_cycle_control_cycle_on_control_zone(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'night_cycle_control'] = 'CycleOnControlZone'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'night_cycle_control_zone_name'] = 'SPACE1-1'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            'CycleOnControlZone',
            epjson_output['AvailabilityManager:NightCycle']['Heat Pump 1 Availability']['control_type'])
        self.assertEqual(
            'SPACE1-1',
            epjson_output['AvailabilityManager:NightCycle']['Heat Pump 1 Availability'][
                'control_zone_or_zone_list_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:heat_recovery_sensible")
    def test_heat_recovery_sensible(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'heat_recovery_type'] = 'Sensible'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNotNone(epjson_output.get('HeatExchanger:AirToAir:SensibleAndLatent'))
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:heat_recovery_enthalpy")
    def test_heat_recovery_enthalpy(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'heat_recovery_type'] = 'Enthalpy'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNotNone(epjson_output.get('HeatExchanger:AirToAir:SensibleAndLatent'))
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:heat_recovery_effectiveness_sensible")
    def test_heat_recovery_effectiveness_sensible(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'heat_recovery_type'] = 'Sensible'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'sensible_heat_recovery_effectiveness'] = 0.72
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath('..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNotNone(epjson_output.get('HeatExchanger:AirToAir:SensibleAndLatent'))
        self.assertEqual(
            0.77,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_75_cooling_air_flow'])
        self.assertEqual(
            0.77,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_75_heating_air_flow'])
        self.assertEqual(
            0.72,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_100_cooling_air_flow'])
        self.assertEqual(
            0.72,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_100_heating_air_flow'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:heat_recovery_effectiveness_enthalpy")
    def test_heat_recovery_effectiveness_enthalpy(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'heat_recovery_type'] = 'Enthalpy'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'sensible_heat_recovery_effectiveness'] = 0.72
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'latent_heat_recovery_effectiveness'] = 0.61
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath('..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNotNone(epjson_output.get('HeatExchanger:AirToAir:SensibleAndLatent'))
        self.assertEqual(
            0.77,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_75_cooling_air_flow'])
        self.assertEqual(
            0.77,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_75_heating_air_flow'])
        self.assertEqual(
            0.72,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_100_cooling_air_flow'])
        self.assertEqual(
            0.72,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'sensible_effectiveness_at_100_heating_air_flow'])
        self.assertEqual(
            0.61,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'latent_effectiveness_at_100_cooling_air_flow'])
        self.assertEqual(
            0.61,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'latent_effectiveness_at_100_heating_air_flow'])
        self.assertEqual(
            0.66,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'latent_effectiveness_at_75_cooling_air_flow'])
        self.assertEqual(
            0.66,
            epjson_output['HeatExchanger:AirToAir:SensibleAndLatent']['Heat Pump 1 Heat Recovery'][
                'latent_effectiveness_at_75_heating_air_flow'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:humidifier_type")
    def test_humidifier_type(self):
        # todo_eo: legacy fails due to missing dehumidification default schedule
        # ** Severe ** ZoneControl:Humidistat="HEAT PUMP 1 HUMIDISTAT invalid
        # Dehumidifying Relative Humidity Setpoint Schedule Name="" not found.
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_type'] = 'ElectricSteam'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_control_zone_name'] = 'SPACE1-1'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_setpoint'] = 29
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath(
            '..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNotNone(epjson_output['Humidifier:Steam:Electric'].get('Heat Pump 1 Humidifier'))
        self.assertEqual(
            'HVACTemplate-Always29.0',
            epjson_output['ZoneControl:Humidistat']['Heat Pump 1 Humidification Humidistat'][
                'humidifying_relative_humidity_setpoint_schedule_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:humidifier_inputs")
    def test_humidifier_inputs(self):
        # todo_eo: legacy fails due to missing dehumidification default schedule
        # ** Severe ** ZoneControl:Humidistat="HEAT PUMP 1 HUMIDISTAT invalid
        # Dehumidifying Relative Humidity Setpoint Schedule Name="" not found.
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_type'] = 'ElectricSteam'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_control_zone_name'] = 'SPACE1-1'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_setpoint'] = 29
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_availability_schedule_name'] = 'OCCUPY-1'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_rated_capacity'] = 1
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'humidifier_rated_electric_power'] = 1000
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath('..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNotNone(epjson_output['Humidifier:Steam:Electric'].get('Heat Pump 1 Humidifier'))
        self.assertEqual(
            'OCCUPY-1',
            epjson_output['Humidifier:Steam:Electric']['Heat Pump 1 Humidifier']['availability_schedule_name'])
        self.assertEqual(
            1,
            epjson_output['Humidifier:Steam:Electric']['Heat Pump 1 Humidifier']['rated_capacity'])
        self.assertEqual(
            1000,
            epjson_output['Humidifier:Steam:Electric']['Heat Pump 1 Humidifier']['rated_power'])
        self.assertEqual(
            'HVACTemplate-Always29.0',
            epjson_output['ZoneControl:Humidistat']['Heat Pump 1 Humidification Humidistat'][
                'humidifying_relative_humidity_setpoint_schedule_name'])
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:return_fan_yes")
    def test_return_fan_yes(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_fan'] = 'Yes'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath('..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNotNone(epjson_output['Fan:ConstantVolume'].get('Heat Pump 1 Return Fan'))
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:return_fan_no")
    def test_return_fan_no(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_fan'] = 'No'
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath('..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertIsNone(epjson_output.get('Fan:ConstantVolume'))
        return

    @BaseSimulationTest._test_logger(doc_text="Simulation:System:UnitaryHeatPump:return_fan_inputs")
    def test_return_fan_inputs(self):
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_fan'] = 'Yes'
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_fan_total_efficiency'] = 0.72
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_fan_delta_pressure'] = 295
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_fan_motor_efficiency'] = 0.85
        self.base_epjson['HVACTemplate:System:UnitaryHeatPump:AirToAir']['Heat Pump 1'][
            'return_fan_motor_in_air_stream_fraction'] = 0.9
        base_file_path = self.create_idf_file_from_epjson(epjson=self.base_epjson, file_name='base_pre_input.epJSON')
        self.perform_full_comparison(base_idf_file_path=base_file_path)
        epjson_output = self.ej._get_json_file(test_dir.joinpath('..', 'simulation', 'test', 'test_input_epjson.epJSON'))
        self.assertEqual(
            0.72,
            epjson_output['Fan:ConstantVolume']['Heat Pump 1 Return Fan']['fan_total_efficiency'])
        self.assertEqual(
            295,
            epjson_output['Fan:ConstantVolume']['Heat Pump 1 Return Fan']['pressure_rise'])
        self.assertEqual(
            0.85,
            epjson_output['Fan:ConstantVolume']['Heat Pump 1 Return Fan']['motor_efficiency'])
        self.assertEqual(
            0.9,
            epjson_output['Fan:ConstantVolume']['Heat Pump 1 Return Fan']['motor_in_airstream_fraction'])
        return

# File: My_module/__init__.py (repo: Yigitcan-MSU/My_modules, license: BSD-3-Clause)
from .my_module import times_table

# File: tests/cases/issues/test_issue_128_cases.py (repo: broglep-work/python-pytest-cases, license: BSD-3-Clause)
# Authors: Sylvain MARIE <sylvain.marie@se.com>
# + All contributors to <https://github.com/smarie/python-pytest-cases>
#
# License: 3-clause BSD, <https://github.com/smarie/python-pytest-cases/blob/master/LICENSE>


def case_one(bird):
    return bird

# File: src/web/port/routes/__init__.py (repo: eric-yates/port, license: MIT)
from port.routes.category import category
from port.routes.errorhandlers import errorhandlers
from port.routes.main import main
from port.routes.plugin import plugin
from port.routes.auth import auth

# File: canvas_todo/__init__.py (repo: ryansingman/canvas-todo, license: MIT)
from . import config, canvas_todo

# File: tests/handlers/api_admins_test.py (repo: lsst-sqre/gafaelfawr, license: MIT)
"""Tests for the ``/auth/api/v1/admins`` routes."""
from __future__ import annotations

from typing import TYPE_CHECKING

import pytest

if TYPE_CHECKING:
    from tests.support.setup import SetupTest


@pytest.mark.asyncio
async def test_admins(setup: SetupTest) -> None:
    r = await setup.client.get("/auth/api/v1/admins")
    assert r.status_code == 401

    token_data = await setup.create_session_token()
    r = await setup.client.get(
        "/auth/api/v1/admins",
        headers={"Authorization": f"bearer {token_data.token}"},
    )
    assert r.status_code == 403
    assert r.json()["detail"][0] == {
        "msg": "Token does not have required scope admin:token",
        "type": "permission_denied",
    }

    token_data = await setup.create_session_token(scopes=["admin:token"])
    r = await setup.client.get(
        "/auth/api/v1/admins",
        headers={"Authorization": f"bearer {token_data.token}"},
    )
    assert r.status_code == 200
    assert r.json() == [{"username": "admin"}]

    admin_service = setup.factory.create_admin_service()
    admin_service.add_admin("example", actor="admin", ip_address="127.0.0.1")
    r = await setup.client.get(
        "/auth/api/v1/admins",
        headers={"Authorization": f"bearer {token_data.token}"},
    )
    assert r.status_code == 200
    assert r.json() == [{"username": "admin"}, {"username": "example"}]


@pytest.mark.asyncio
async def test_add_delete(setup: SetupTest) -> None:
    r = await setup.client.post(
        "/auth/api/v1/admins", json={"username": "some-user"}
    )
    assert r.status_code == 401
    r = await setup.client.delete("/auth/api/v1/admins/admin")
    assert r.status_code == 401

    token_data = await setup.create_session_token(username="admin")
    csrf = await setup.login(token_data.token)
    r = await setup.client.post(
        "/auth/api/v1/admins",
        headers={"X-CSRF-Token": csrf},
        json={"username": "new-admin"},
    )
    assert r.status_code == 403
    assert r.json()["detail"][0] == {
        "msg": "Token does not have required scope admin:token",
        "type": "permission_denied",
    }
    r = await setup.client.delete(
        "/auth/api/v1/admins/admin", headers={"X-CSRF-Token": csrf}
    )
    assert r.status_code == 403
    assert r.json()["detail"][0] == {
        "msg": "Token does not have required scope admin:token",
        "type": "permission_denied",
    }

    token_data = await setup.create_session_token(
        username="admin", scopes=["admin:token"]
    )
    csrf = await setup.login(token_data.token)
    r = await setup.client.post(
        "/auth/api/v1/admins", json={"username": "new-admin"}
    )
    assert r.status_code == 403
    assert r.json()["detail"][0]["type"] == "invalid_csrf"
    r = await setup.client.post(
        "/auth/api/v1/admins",
        headers={"X-CSRF-Token": csrf},
        json={"username": "new-admin"},
    )
    assert r.status_code == 204
    r = await setup.client.get("/auth/api/v1/admins")
    assert r.status_code == 200
    assert r.json() == [{"username": "admin"}, {"username": "new-admin"}]

    r = await setup.client.delete("/auth/api/v1/admins/admin")
    assert r.status_code == 403
    assert r.json()["detail"][0]["type"] == "invalid_csrf"
    r = await setup.client.delete(
        "/auth/api/v1/admins/admin", headers={"X-CSRF-Token": csrf}
    )
    assert r.status_code == 204
    r = await setup.client.get("/auth/api/v1/admins")
    assert r.json() == [{"username": "new-admin"}]

    # We can still retrieve the list because we have a token with scope
    # admin:token, but since we (admin) were removed as an admin, we should no
    # longer be able to add new admins.
    r = await setup.client.post(
        "/auth/api/v1/admins",
        headers={"X-CSRF-Token": csrf},
        json={"username": "another-admin"},
    )
    assert r.status_code == 403


@pytest.mark.asyncio
async def test_bootstrap(setup: SetupTest) -> None:
    token = str(setup.config.bootstrap_token)
    r = await setup.client.post(
        "/auth/api/v1/admins",
        headers={"Authorization": f"bearer {token}"},
        json={"username": "example"},
    )
    assert r.status_code == 204
    r = await setup.client.get(
        "/auth/api/v1/admins",
        headers={"Authorization": f"bearer {token}"},
    )
    assert r.status_code == 200
    assert r.json() == [{"username": "admin"}, {"username": "example"}]

    r = await setup.client.delete(
        "/auth/api/v1/admins/admin",
        headers={"Authorization": f"bearer {token}"},
    )
    assert r.status_code == 204
    r = await setup.client.get(
        "/auth/api/v1/admins",
        headers={"Authorization": f"bearer {token}"},
    )
    assert r.status_code == 200
    assert r.json() == [{"username": "example"}]

# File: src/tap_apple_search_ads/schema/__init__.py (repo: mighty-digital/tap-apple-search-ads, license: MIT)
"""Working with JSON schemas."""
from . import from_file as from_file # noqa

# File: Statistics/RandomNumberNoSeed.py (repo: meahesachin/statscalc, license: MIT)
import random


def randomnumbernoseed(a, b):
    return random.randint(a, b)

# File: flapp/ext/site/main.py (repo: ernane/flapp, license: MIT)
from flask import Blueprint, render_template

bp = Blueprint("site", __name__)


@bp.route("/")
def index():
    return render_template("base.html", page="Index")


@bp.route("/home")
def home():
    return render_template("base.html", page="Home")

# File: AndroidFTPBackup/models.py (repo: SanketRevankar/AndroidFTP-DataBackup, license: Apache-2.0)
from django.db import models


class LastBackup(models.Model):
    id = models.TextField('Date last updated', primary_key=True)
    pub_date = models.TextField('Date last updated')

# File: src/hltv_api/__init__.py (repo: hoangvu01/hltv_python, license: MIT)
from hltv_api.api import matches, results, stats

# File: Backend/equation test/venv/Lib/site-packages/ln/__init__.py (repo: chehansivaruban/Cyber---SDGP, license: CC0-1.0)
from kb import Name, Verb, Number, Prop, Rule, KB

# File: networks/FlowNetS/__init__.py (repo: donegaci/memc-net, license: MIT)
from .FlowNetS import *

# File: Data Structures and Algorithms/Edabit Algo Solutions/EASY PROBLEMS/FourPassengersAndADriver.py (repo: akkik04/Python-DataStructures-and-Algorithms, license: MIT)
import math


def cars_needed(n):
    # returning the final output, which is 'n' / 5 rounded upwards.
    return math.ceil(n / 5)

# File: docs/demo.py (repo: sourcery-ai-bot/justuse, license: MIT)
def foo():
    print("Hello justuse-user!")

# File: src/prefect/environments/storage/local.py (repo: vnsn/prefect, license: Apache-2.0)
from prefect.storage import Local as _Local
from prefect.environments.storage.base import _DeprecatedStorageMixin
class Local(_Local, _DeprecatedStorageMixin):
pass
| 24.428571 | 69 | 0.836257 | 19 | 171 | 7.315789 | 0.578947 | 0.158273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116959 | 171 | 6 | 70 | 28.5 | 0.92053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f5af825a49fb458bb4b3ca46abff345249eeac84 | 1,017 | py | Python | Task/core/route.py | Janik-Tarverdyan/Python_Task | c8a3c07f1294c62bd76881e84841f45221e1c2ce | [
"BSD-3-Clause"
] | null | null | null | Task/core/route.py | Janik-Tarverdyan/Python_Task | c8a3c07f1294c62bd76881e84841f45221e1c2ce | [
"BSD-3-Clause"
] | null | null | null | Task/core/route.py | Janik-Tarverdyan/Python_Task | c8a3c07f1294c62bd76881e84841f45221e1c2ce | [
"BSD-3-Clause"
] | null | null | null | from core import controller, task
class Base:
@task.route('/')
def home():
return controller.Page.Home()
@task.route('/api/new_rate', methods=['GET', 'POST'])
def new_rate():
return controller.Page.New_Rate()
class Errors:
# Error handlers
def Handler():
@task.errorhandler(400)
def bad_request(Error):
return controller.Error.NO_400(Error)
@task.errorhandler(401)
def unauthorized_page(Error):
return controller.Error.NO_401(Error)
@task.errorhandler(403)
def forbidden_page(Error):
return controller.Error.NO_403(Error)
@task.errorhandler(404)
def page_not_found(Error):
return controller.Error.NO_404(Error)
@task.errorhandler(405)
def method_not_allowed(Error):
return controller.Error.NO_405(Error)
@task.errorhandler(500)
def server_Error_page(Error):
            return controller.Error.NO_500(Error)
| 23.651163 | 57 | 0.618486 | 118 | 1,017 | 5.177966 | 0.338983 | 0.209493 | 0.206219 | 0.255319 | 0.294599 | 0.157119 | 0 | 0 | 0 | 0 | 0 | 0.048649 | 0.27237 | 1,017 | 42 | 58 | 24.214286 | 0.777027 | 0.013766 | 0 | 0 | 0 | 0 | 0.020979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.321429 | false | 0 | 0.035714 | 0.285714 | 0.714286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
de34732639b8b15babda6875ffd450a3506bba59 | 160 | py | Python | counsel_users/admin.py | Kipkorir-Gideon/Councel | c6ba2ea4a5d1794e4af4c863492e1872da73acac | [
"MIT"
] | null | null | null | counsel_users/admin.py | Kipkorir-Gideon/Councel | c6ba2ea4a5d1794e4af4c863492e1872da73acac | [
"MIT"
] | 5 | 2022-01-24T07:05:51.000Z | 2022-02-03T20:32:43.000Z | counsel_users/admin.py | Kipkorir-Gideon/Councel | c6ba2ea4a5d1794e4af4c863492e1872da73acac | [
"MIT"
] | 1 | 2022-02-04T16:24:22.000Z | 2022-02-04T16:24:22.000Z | from django.contrib import admin
from counsel_users import *
from counsel_users.models import Account
# Register your models here.
admin.site.register(Account) | 26.666667 | 40 | 0.83125 | 23 | 160 | 5.695652 | 0.565217 | 0.167939 | 0.244275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1125 | 160 | 6 | 41 | 26.666667 | 0.922535 | 0.1625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ded96e92d07a892deaa4f048218ce279212e2f28 | 8,211 | py | Python | test-common/functiontest/testcase/grp_client/sut_scan.py | imotai8915/fedb | fe0c130f70206325f130f00be6c1483c9c36d113 | [
"Apache-2.0"
] | 1 | 2021-08-23T12:02:30.000Z | 2021-08-23T12:02:30.000Z | test-common/functiontest/testcase/grp_client/sut_scan.py | imotai8915/fedb | fe0c130f70206325f130f00be6c1483c9c36d113 | [
"Apache-2.0"
] | null | null | null | test-common/functiontest/testcase/grp_client/sut_scan.py | imotai8915/fedb | fe0c130f70206325f130f00be6c1483c9c36d113 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright 2021 4Paradigm
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -*- coding: utf-8 -*-
from framework.test_suite import TestSuite
from job_helper import JobHelper, RtidbClient
class Scan(TestSuite):
"""
    scan operation
"""
def setUp(self):
pass
def tearDown(self):
pass
def testScanCommon(self):
"""
        scan operation, processed correctly
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put)
jobHelper.append(jobHelper.rtidbClient.scan)
retStatus = jobHelper.run()
self.assertTrue(retStatus)
def testScanMulti(self):
"""
        scan operation hitting multiple records within the stime/etime range
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496520)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496521)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496522)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496523)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496524)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496525)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496526)
jobHelper.append(jobHelper.rtidbClient.scan,
stime=1494496525,
etime=1494496521)
jobHelper.run(autoidentity=False)
retStatus, retMsg = jobHelper.identify(jobHelper.input_message(),
jobHelper.scanout_message(),
inputJunkFunc=lambda x: 1494496521 < x['time'] <= 1494496525)
self.assertTrue(retStatus, retMsg)
def testScanTidInvalid(self):
"""
        scan a table whose table_id does not exist
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table, tid=123)
jobHelper.append(jobHelper.rtidbClient.put, tid=123)
jobHelper.append(jobHelper.rtidbClient.scan, tid=456)
retStatus = jobHelper.run(autoidentity=False, logcheck=False)
self.assertTrue(retStatus)
self.assertEqual(0, len(jobHelper.scanout_message()))
def testScanDroped(self):
"""
        scan a table that has already been dropped
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put)
jobHelper.append(jobHelper.rtidbClient.drop_table)
jobHelper.append(jobHelper.rtidbClient.scan)
retStatus = jobHelper.run(failonerror=False, autoidentity=False, logcheck=False)
self.assertTrue(retStatus)
self.assertEqual(0, len(jobHelper.scanout_message()))
def testScanPkInvalid(self):
"""
        scan with a non-existent partition_key
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put, pk='123')
jobHelper.append(jobHelper.rtidbClient.scan, pk='456')
retStatus = jobHelper.run(failonerror=False, autoidentity=False)
self.assertTrue(retStatus)
self.assertEqual(0, len(jobHelper.scanout_message()))
def testScanStimeEmpty(self):
"""
        timestamp_start is empty
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put)
jobHelper.append(jobHelper.rtidbClient.scan, stime=0)
retStatus = jobHelper.run(failonerror=False, autoidentity=False)
self.assertTrue(retStatus)
self.assertEqual(0, len(jobHelper.scanout_message()))
def testScanEtimeEmpty(self):
"""
        timestamp_end is empty
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put)
jobHelper.append(jobHelper.rtidbClient.scan, etime=0)
retStatus = jobHelper.run(failonerror=False, autoidentity=False)
self.assertTrue(retStatus)
self.assertEqual(1, len(jobHelper.scanout_message()))
def testScanTimeEqual(self):
"""
timestamp_start = timestamp_end
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496520)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496521)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496522)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496523)
jobHelper.append(jobHelper.rtidbClient.scan,
stime=1494496521,
etime=1494496521)
jobHelper.run(autoidentity=False)
retStatus, retMsg = jobHelper.identify(jobHelper.input_message(),
jobHelper.scanout_message(),
inputJunkFunc=lambda x: x['time'] == 1494496521)
self.assertFalse(retStatus, retMsg)
def testScanTimeDisorderly(self):
"""
timestamp_start < timestamp_end
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496520)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496521)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496522)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496523)
jobHelper.append(jobHelper.rtidbClient.scan,
stime=1494496521,
etime=1494496522)
retStatus = jobHelper.run(autoidentity=False)
self.assertTrue(retStatus)
self.assertEqual(0, len(jobHelper.scanout_message()))
def testScanTimeLongYear(self):
"""
        timestamp_start and timestamp_end are 10 years apart
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496520)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496521)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496522)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496523)
jobHelper.append(jobHelper.rtidbClient.scan,
stime=1494496521 + 10 * 365 * 24 * 3600,
etime=1494496521)
retStatus = jobHelper.run(autoidentity=False)
self.assertTrue(retStatus)
retStatus, retMsg = jobHelper.identify(jobHelper.input_message(),
jobHelper.scanout_message(),
inputJunkFunc=lambda x: x['time'] > 1494496521)
self.assertTrue(retStatus, retMsg)
def testScanMiss(self):
"""
        no data within the scanned time range
"""
jobHelper = JobHelper()
jobHelper.append(jobHelper.rtidbClient.create_table)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496520)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496521)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496522)
jobHelper.append(jobHelper.rtidbClient.put, time=1494496523)
jobHelper.append(jobHelper.rtidbClient.scan,
stime=1494496511,
etime=1494496512)
retStatus = jobHelper.run(autoidentity=False)
self.assertTrue(retStatus)
self.assertEqual(0, len(jobHelper.scanout_message()))
| 38.549296 | 108 | 0.648764 | 762 | 8,211 | 6.942257 | 0.206037 | 0.200378 | 0.235917 | 0.344045 | 0.773724 | 0.751418 | 0.700756 | 0.697732 | 0.65879 | 0.65879 | 0 | 0.068639 | 0.25478 | 8,211 | 212 | 109 | 38.731132 | 0.795882 | 0.105225 | 0 | 0.692308 | 0 | 0 | 0.002552 | 0 | 0 | 0 | 0 | 0 | 0.146154 | 1 | 0.1 | false | 0.015385 | 0.015385 | 0 | 0.123077 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
deead7e309476559418a0ff95b4f6b8fdef01ddb | 26 | py | Python | sdk/pam/lib/libpam_arvados.py | tomclegg/arvados | 138eedbea2658ef397c7a58ce48d6f06077ba140 | [
"Apache-2.0"
] | 2 | 2017-02-26T15:39:54.000Z | 2018-12-03T15:53:01.000Z | sdk/pam/lib/libpam_arvados.py | tomclegg/arvados | 138eedbea2658ef397c7a58ce48d6f06077ba140 | [
"Apache-2.0"
] | null | null | null | sdk/pam/lib/libpam_arvados.py | tomclegg/arvados | 138eedbea2658ef397c7a58ce48d6f06077ba140 | [
"Apache-2.0"
] | 1 | 2017-11-21T12:31:42.000Z | 2017-11-21T12:31:42.000Z | from arvados_pam import *
| 13 | 25 | 0.807692 | 4 | 26 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7232df2b15c29843cf15a5f65ff093deb94ae559 | 36 | py | Python | py3dbp/__init__.py | janoxakes/3dbinpacking | 9c2c658fdf5088faf266ed1568bbf10bfdaf41ff | [
"MIT"
] | null | null | null | py3dbp/__init__.py | janoxakes/3dbinpacking | 9c2c658fdf5088faf266ed1568bbf10bfdaf41ff | [
"MIT"
] | null | null | null | py3dbp/__init__.py | janoxakes/3dbinpacking | 9c2c658fdf5088faf266ed1568bbf10bfdaf41ff | [
"MIT"
] | null | null | null | from .main import Bin, Item, Packer
| 18 | 35 | 0.75 | 6 | 36 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 36 | 1 | 36 | 36 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a0d8c42fc09200d04d2cc721352ea14fba40b6c7 | 45 | py | Python | python_guis/__init__.py | ptheywood/python-guis | 361fb4ae5517ac258617f5320919e5b79a1b78ec | [
"MIT"
] | null | null | null | python_guis/__init__.py | ptheywood/python-guis | 361fb4ae5517ac258617f5320919e5b79a1b78ec | [
"MIT"
] | null | null | null | python_guis/__init__.py | ptheywood/python-guis | 361fb4ae5517ac258617f5320919e5b79a1b78ec | [
"MIT"
] | null | null | null | def inc(x: float) -> float:
return x + 1
| 15 | 27 | 0.555556 | 8 | 45 | 3.125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.288889 | 45 | 2 | 28 | 22.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a0e1fc7efb0c78b0bb79ad87191ba72a8271e234 | 128 | py | Python | amazing/exceptions.py | danieloconell/maze-solver | f60e476d827d59bfa17cd2148787332707846882 | [
"MIT"
] | null | null | null | amazing/exceptions.py | danieloconell/maze-solver | f60e476d827d59bfa17cd2148787332707846882 | [
"MIT"
] | 2 | 2021-06-08T19:35:19.000Z | 2021-09-08T00:44:59.000Z | amazing/exceptions.py | danieloconell/amazing | f60e476d827d59bfa17cd2148787332707846882 | [
"MIT"
] | null | null | null | class NodeNotFound(Exception):
pass
class MazeNotSolved(Exception):
pass
class AlgorithmNotFound(Exception):
pass
| 14.222222 | 35 | 0.75 | 12 | 128 | 8 | 0.5 | 0.40625 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179688 | 128 | 8 | 36 | 16 | 0.914286 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
9d2c0420773332d6736fc43d0f6de51ebb45f244 | 115 | py | Python | lib/models/ScanRefer2D/__init__.py | aligholami/kepler | a272ac427e09892cd44ade70e910272c4f69c638 | [
"Unlicense"
] | null | null | null | lib/models/ScanRefer2D/__init__.py | aligholami/kepler | a272ac427e09892cd44ade70e910272c4f69c638 | [
"Unlicense"
] | null | null | null | lib/models/ScanRefer2D/__init__.py | aligholami/kepler | a272ac427e09892cd44ade70e910272c4f69c638 | [
"Unlicense"
] | null | null | null | from .cap_snt import ScanRefer2DShowAndTellCaptioning
from .cap_satnt import ScanRefer2DShowAttendAndTellCaptioning | 57.5 | 61 | 0.921739 | 10 | 115 | 10.4 | 0.7 | 0.134615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.06087 | 115 | 2 | 61 | 57.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
19cef438662e9cbc28832cf93d02490c99b14560 | 28 | py | Python | tests/__init__.py | hmisee/BoardGames | 6ffb8801b5ceee3a526a0185fb729cbf4ee027ee | [
"MIT"
] | null | null | null | tests/__init__.py | hmisee/BoardGames | 6ffb8801b5ceee3a526a0185fb729cbf4ee027ee | [
"MIT"
] | null | null | null | tests/__init__.py | hmisee/BoardGames | 6ffb8801b5ceee3a526a0185fb729cbf4ee027ee | [
"MIT"
] | null | null | null | from . import test_character | 28 | 28 | 0.857143 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
19d1c60c3166093228929f2d1efbfe082a20180c | 108 | py | Python | KSPModule/__init__.py | GonDragon/KSPModule | 736def2c65a18c9df61536c51f5d04a909468797 | [
"MIT"
] | null | null | null | KSPModule/__init__.py | GonDragon/KSPModule | 736def2c65a18c9df61536c51f5d04a909468797 | [
"MIT"
] | null | null | null | KSPModule/__init__.py | GonDragon/KSPModule | 736def2c65a18c9df61536c51f5d04a909468797 | [
"MIT"
] | null | null | null | from KSPModule.Module import Module, Attribute
from KSPModule.Reader import Reader, UnbalancedBracketsError
| 36 | 60 | 0.87037 | 12 | 108 | 7.833333 | 0.583333 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 108 | 2 | 61 | 54 | 0.959184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c21e466bd127383031fd21776459e9ad544056b3 | 6,227 | py | Python | tests/test_http.py | dj-wasabi/dj-wasabi-release | 8d4742b8b5e9dd4c183c5e07d14086781d848488 | [
"MIT"
] | null | null | null | tests/test_http.py | dj-wasabi/dj-wasabi-release | 8d4742b8b5e9dd4c183c5e07d14086781d848488 | [
"MIT"
] | 10 | 2021-01-07T20:22:18.000Z | 2021-03-16T19:47:39.000Z | tests/test_http.py | dj-wasabi/dj-wasabi-release | 8d4742b8b5e9dd4c183c5e07d14086781d848488 | [
"MIT"
] | null | null | null | """File containing tests specific for the djWasabi/request.py file."""
import sys
import os
import requests
import json
import pytest
import responses
from requests.exceptions import RequestException
currentPath = os.path.dirname(os.path.realpath(__file__))
rootPath = os.path.join(currentPath, "..")
libraryDir = os.path.join(rootPath, "lib")
sys.path.append(libraryDir)
from djWasabi import djWasabi
@responses.activate
def test_http__get_name():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.GET, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=200)
request = djWasabi.http.request(debug=True)
success, output = request._get(url='https://fake.url.com/dj-wasabi-release')
assert success
assert output.json()['name'] == "dj-wasabi-release"
assert output.status_code == 200
def test_http__get_no_url():
"""Test the _get function without providing url.
:return:
"""
with pytest.raises(ValueError, match="Please provide the URL."):
request = djWasabi.http.request()
request._get()
@responses.activate
def test_http__get_name_fail():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.GET, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=200)
request = djWasabi.http.request()
success, output = request._get(
url='https://fake.url.com/dj-wasabi-releas',
username="fake", password="fake")
assert not success
@responses.activate
def test_http__patch_name():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.PATCH, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=200)
request = djWasabi.http.request()
success, output = request._patch(url='https://fake.url.com/dj-wasabi-release')
assert success
assert output.json()['name'] == "dj-wasabi-release"
assert output.status_code == 200
def test_http__patch_no_url():
"""Test the _patch function without providing url.
:return:
"""
with pytest.raises(ValueError, match="Please provide the URL."):
request = djWasabi.http.request()
request._patch()
@responses.activate
def test_http__patch_name_fail():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.PATCH, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=200)
request = djWasabi.http.request()
success, output = request._patch(
url='https://fake.url.com/dj-wasabi-releas',
username="fake", password="fake")
assert not success
@responses.activate
def test_http__post_name():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.POST, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=201)
request = djWasabi.http.request()
success, output = request._post(url='https://fake.url.com/dj-wasabi-release')
assert success
assert output.json()['name'] == "dj-wasabi-release"
assert output.status_code == 201
def test_http__post_no_url():
"""Test the _post function without providing url.
:return:
"""
with pytest.raises(ValueError, match="Please provide the URL."):
request = djWasabi.http.request()
request._post()
@responses.activate
def test_http__post_name_fail():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.POST, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=201)
request = djWasabi.http.request()
success, output = request._post(
url='https://fake.url.com/dj-wasabi-releas',
username="fake", password="fake")
assert not success
@responses.activate
def test_http__put_name():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.PUT, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=200)
request = djWasabi.http.request()
success, output = request._put(url='https://fake.url.com/dj-wasabi-release')
assert success
assert output.json()['name'] == "dj-wasabi-release"
assert output.status_code == 200
def test_http__put_no_url():
"""Test the _put function without providing url.
:return:
"""
with pytest.raises(ValueError, match="Please provide the URL."):
request = djWasabi.http.request()
request._put()
@responses.activate
def test_http__put_name_fail():
with open("tests/resources/dj-wasabi-release.json") as f:
jsonData = json.load(f)
responses.add(responses.PUT, 'https://fake.url.com/dj-wasabi-release',
json=jsonData, status=200)
request = djWasabi.http.request()
success, output = request._put(
url='https://fake.url.com/dj-wasabi-releas',
username="fake", password="fake")
assert not success
@responses.activate
def test_http__delete_name():
responses.add(responses.DELETE, 'https://fake.url.com/dj-wasabi-release',
json="", status=204)
request = djWasabi.http.request()
success, output = request._delete(url='https://fake.url.com/dj-wasabi-release')
assert success
assert output.json() == ""
assert output.status_code == 204
def test_http__delete_no_url():
"""Test the _delete function without providing url.
:return:
"""
with pytest.raises(ValueError, match="Please provide the URL."):
request = djWasabi.http.request()
request._delete()
@responses.activate
def test_http__delete_name_fail():
responses.add(responses.DELETE, 'https://fake.url.com/dj-wasabi-release',
json="", status=204)
request = djWasabi.http.request()
success, output = request._delete(
url='https://fake.url.com/dj-wasabi-releas',
username="fake", password="fake")
assert not success
| 30.228155 | 83 | 0.669986 | 797 | 6,227 | 5.110414 | 0.097867 | 0.062853 | 0.099435 | 0.073656 | 0.876504 | 0.876504 | 0.876504 | 0.822981 | 0.822981 | 0.822981 | 0 | 0.00893 | 0.190782 | 6,227 | 205 | 84 | 30.37561 | 0.799365 | 0.055404 | 0 | 0.666667 | 0 | 0 | 0.224423 | 0.05236 | 0 | 0 | 0 | 0 | 0.144928 | 1 | 0.108696 | false | 0.036232 | 0.057971 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dfda0b7f61f016137d9a8969b398b8dd49ea21ab | 2,689 | py | Python | app/src/simulation.py | grayvalley/sanbox | dd86fb7c9b8188a4d136c4055ab9151420a06d76 | [
"Apache-2.0"
] | null | null | null | app/src/simulation.py | grayvalley/sanbox | dd86fb7c9b8188a4d136c4055ab9151420a06d76 | [
"Apache-2.0"
] | null | null | null | app/src/simulation.py | grayvalley/sanbox | dd86fb7c9b8188a4d136c4055ab9151420a06d76 | [
"Apache-2.0"
] | null | null | null | import threading
import numpy as np
from src.side import (
Side
)
from src.event_generator import (
EventTypes,
EventGenerator,
event_generation_loop
)
def run_market_data_simulation(config, state):
# Create threads that create the market events
threads = []
# Create buy limit order add sampling threads
n_levels = 15
thread_id = 1
instrument = "0"
for level in range(1, n_levels + 1):
generator = EventGenerator(thread_id, instrument, EventTypes.ADD, Side.B, level, 1.10 * np.exp(-0.08*(level - 1)), 1)
kwargs = {'state': state, 'generator': generator}
state.add_simulation_thread(threading.Thread(target=event_generation_loop, kwargs=kwargs))
thread_id += 1
# Create sell limit order add sampling threads
for level in range(1, n_levels + 1):
generator = EventGenerator(thread_id, instrument, EventTypes.ADD, Side.S, level, 1.10 * np.exp(-0.08*(level - 1)), 1)
kwargs = {'state': state, 'generator': generator}
state.add_simulation_thread(threading.Thread(target=event_generation_loop, kwargs=kwargs))
thread_id += 1
# Create buy limit order cancel sampling threads
for level in range(1, n_levels + 1):
generator = EventGenerator(thread_id, instrument, EventTypes.CANCEL, Side.B, level, 1.0 * np.exp(-0.10*(level - 1)), 1)
kwargs = {'state': state, 'generator': generator}
state.add_simulation_thread(threading.Thread(target=event_generation_loop, kwargs=kwargs))
thread_id += 1
# Create sell limit order cancel sampling threads
for level in range(1, n_levels + 1):
generator = EventGenerator(thread_id, instrument, EventTypes.CANCEL, Side.S, level, 1.0 * np.exp(-0.10*(level - 1)), 1)
kwargs = {'state': state, 'generator': generator}
state.add_simulation_thread(threading.Thread(target=event_generation_loop, kwargs=kwargs))
thread_id += 1
# Create a buy market order sampling thread
generator = EventGenerator(thread_id, instrument, EventTypes.MARKET_ORDER, Side.B, None, 0.5, None)
kwargs = {'state': state, 'generator': generator}
state.add_simulation_thread(threading.Thread(target=event_generation_loop, kwargs=kwargs))
thread_id += 1
# Create a sell market order sampling thread
generator = EventGenerator(thread_id, instrument, EventTypes.MARKET_ORDER, Side.S, None, 0.5, None)
kwargs = {'state': state, 'generator': generator}
state.add_simulation_thread(threading.Thread(target=event_generation_loop, kwargs=kwargs))
thread_id += 1
# Start the threads
for thread in state.get_simulation_threads():
thread.start()
| 41.369231 | 127 | 0.695054 | 354 | 2,689 | 5.132768 | 0.161017 | 0.057237 | 0.073198 | 0.102367 | 0.824436 | 0.807375 | 0.807375 | 0.807375 | 0.807375 | 0.807375 | 0 | 0.024052 | 0.195984 | 2,689 | 64 | 128 | 42.015625 | 0.816374 | 0.123094 | 0 | 0.488889 | 0 | 0 | 0.036186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.088889 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dfe87059dc8390501d8a6169ff44d8ff476eaa13 | 17,534 | py | Python | database.py | JeromeTan1997/Library | 3ff31290c2be61dc19fd8de0f912a2cdab5b9918 | [
"MIT"
] | 1 | 2019-06-08T09:01:24.000Z | 2019-06-08T09:01:24.000Z | database.py | JeromeTan1997/Library | 3ff31290c2be61dc19fd8de0f912a2cdab5b9918 | [
"MIT"
] | null | null | null | database.py | JeromeTan1997/Library | 3ff31290c2be61dc19fd8de0f912a2cdab5b9918 | [
"MIT"
] | 1 | 2021-05-31T02:31:08.000Z | 2021-05-31T02:31:08.000Z | from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from tables import *
from datetime import date
engine = create_engine('sqlite:///database.db', convert_unicode=True)
db_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine))
Base.query = db_session.query_property()
def init_db():
Base.metadata.create_all(bind=engine)
def book_entry(isbn, book_name, author, publisher, publish_year_month, librarian_id, book_id, location):
librarian = db_session.query(Librarian).filter(Librarian.id == librarian_id).first()
cip = db_session.query(CIP).filter(CIP.isbn == isbn).first()
if cip is None:
cip = CIP(isbn=isbn, book_name=book_name, author=author, publisher=publisher, publish_year_month=publish_year_month, librarian=librarian)
status = BookStatus.available if location == '图书流通室' else BookStatus.unborrowable
book = Book(id=book_id, cip=cip, location=location, status=status, librarian=librarian)
db_session.add(book)
db_session.commit()
def populate_data():
librarian1 = Librarian(id='lib0001', name='张三', password='lib0001')
librarian2 = Librarian(id='lib0002', name='李四', password='lib0002')
librarian3 = Librarian(id='lib0003', name='王五', password='lib0003')
reader1 = Reader(id='rea0001', name='李明', password='rea0001', tel='18800201111', email='18800201111@163.com')
reader2 = Reader(id='rea0002', name='刘晓明', password='rea0002', tel='18800202222', email='18800202222@163.com')
reader3 = Reader(id='rea0003', name='张颖', password='rea0003', tel='18800203333', email='18800203333@163.com')
reader4 = Reader(id='rea0004', name='谭卓然', password='rea0004', tel='18800200539', email='Jerome1997@iCloud.com')
cip1 = CIP(isbn='ISBN7-302-02368-9', book_name='数据结构', author='严蔚敏 吴伟民', publisher='清华大学出版社',
publish_year_month=date(year=1997, month=4, day=1), librarian=librarian1)
cip2 = CIP(isbn='ISBN7-302-02357-1', book_name='数据库使用教程', author='董建全', publisher='清华大学出版社',
publish_year_month=date(year=2016, month=4, day=1), librarian=librarian1)
book1 = Book(id='C832.1', cip=cip1, location='图书流通室', status=BookStatus.available, librarian=librarian1)
book2 = Book(id='C832.2', cip=cip1, location='图书流通室', status=BookStatus.available, librarian=librarian1)
book3 = Book(id='C832.3', cip=cip1, location='图书流通室', status=BookStatus.available, librarian=librarian1)
book4 = Book(id='C832.4', cip=cip1, location='图书阅览室', status=BookStatus.unborrowable, librarian=librarian1)
book5 = Book(id='C357.1', cip=cip2, location='图书流通室', status=BookStatus.available, librarian=librarian1)
book6 = Book(id='C357.2', cip=cip2, location='图书流通室', status=BookStatus.available, librarian=librarian1)
db_session.add_all([librarian1, librarian2, librarian3,
reader1, reader2, reader3, reader4,
cip1, cip2,
book1, book2, book3, book4, book5, book6])
db_session.commit()
# Every book_entry call below followed the same pattern, so drive them from data:
# (isbn, title, author, publisher, publish date, call-number prefix, copy count).
seed_books = [
    ('ISBN9-787-53998-1', '巨人的陨落', '[英]肯·福莱特', '江苏凤凰文艺出版社', date(2015, 5, 1), 'C787', 3),
    ('ISBN9-101-17892-2', '百年孤独', '[哥伦比亚]加西亚·马尔克斯', '南海出版公司', date(2011, 6, 1), 'C101', 4),
    ('ISBN9-102-15732-6', '追风筝的人', '[美]卡勒德·胡赛尼', '上海人民出版社', date(2006, 5, 1), 'C102', 3),
    ('ISBN9-103-89422-3', '霍乱时期的爱情', '[哥伦比亚]加西亚·马尔克斯', '南海出版公司', date(2012, 9, 1), 'C103', 4),
    ('ISBN9-104-23529-5', '新名字的故事', '[意]埃莱娜·费兰特', '人民文学出版社', date(2017, 4, 1), 'C104', 3),
    ('ISBN9-105-36265-1', '雪落香杉树', '[美]戴维·伽特森', '人民文学出版社', date(2017, 6, 1), 'C105', 2),
    ('ISBN9-106-35268-9', '1984', '[英]乔治·奥威尔', '北京十月文艺出版社', date(2010, 4, 1), 'C106', 3),
    ('ISBN9-107-75325-4', '鱼王', '[俄]维克托·阿斯塔菲耶夫', '广西师范大学出版社', date(2017, 4, 1), 'C107', 3),
    ('ISBN9-108-83685-7', '月亮和六便士', '[英]毛姆', '上海译文出版社', date(2006, 8, 1), 'C108', 3),
    ('ISBN9-109-39257-7', '活着', '余华', '南海出版公司', date(1998, 5, 1), 'C109', 3),
    ('ISBN9-110-76593-3', '杀死一只知更鸟', '[美]哈珀·李', '译林出版社', date(2012, 9, 1), 'C110', 3),
    ('ISBN9-111-93786-2', '囚鸟', '[美]库尔特·冯内古特', '百花洲文艺出版社', date(2017, 6, 1), 'C111', 3),
    ('ISBN9-112-98467-9', '局外人', '[法]阿尔贝·加缪', '上海译文出版社', date(2010, 9, 1), 'C112', 3),
    ('ISBN5-310-89456-3', '万历十五年', '[美]黄仁宇', '生活·读书·新知三联书店', date(1997, 5, 1), 'C310', 3),
    ('ISBN5-311-45812-4', '中国历代政治得失', '钱穆', '生活·读书·新知三联书店', date(2001, 5, 1), 'C311', 3),
    ('ISBN5-312-90845-1', '大国的崩溃', '[美]沙希利·浦洛基', '四川人民出版社', date(2017, 5, 1), 'C312', 3),
    ('ISBN5-313-78560-4', '海洋与文明', '[美]林肯·佩恩', '天津人民出版社', date(2017, 4, 1), 'C313', 3),
    ('ISBN5-314-75689-7', '明朝那些事儿', '当年明月', '中国海关出版社', date(2009, 4, 1), 'C314', 3),
    ('ISBN5-315-68504-5', '十二幅地图中的世界史', '[英]杰里·布罗顿', '浙江人民出版社', date(2016, 6, 1), 'C315', 3),
    ('ISBN5-316-89056-8', '天朝的崩溃', '茅海建', '生活·读书·新知三联书店', date(2005, 7, 1), 'C316', 3),
    ('ISBN5-317-89056-8', '史记', '司马迁', '中华书局', date(1982, 11, 1), 'C317', 3),
    ('ISBN5-318-70456-4', '大英博物馆世界简史', '[英]尼尔·麦格雷戈', '新星出版社', date(2014, 1, 1), 'C318', 3),
    ('ISBN1-630-89745-2', '经济学原理', '[美]曼昆', '机械工业出版社', date(2013, 8, 1), 'C630', 3),
    ('ISBN1-631-56098-6', '通往奴役之路', '[英]弗里德利希·奥古斯特·哈耶克', '中国社会科学出版社', date(1997, 8, 1), 'C631', 3),
    ('ISBN1-632-78564-1', '国富论', '[英]亚当·斯密', '华夏出版社', date(2005, 1, 1), 'C632', 3),
    ('ISBN1-633-86097-3', '激荡十三年', '吴晓波', '中信出版社', date(2007, 1, 1), 'C633', 3),
    ('ISBN1-634-98124-2', '大国大城', '陆铭', '上海人民出版社', date(2016, 7, 1), 'C634', 3),
    ('ISBN3-790-66580-5', '三体', '刘慈欣', '重庆出版社', date(2008, 1, 1), 'C790', 3),
    ('ISBN3-791-79145-1', '你一生的故事', '[美]特德·姜', '译林出版社', date(2015, 5, 1), 'C791', 3),
    ('ISBN3-792-89675-5', '献给阿尔吉侬的花束', '[美]丹尼尔·凯斯', '辽宁教育出版社', date(2005, 8, 1), 'C792', 3),
    ('ISBN3-793-67890-2', '沙丘', '[美]弗兰克·赫伯特', '江苏凤凰文艺出版社', date(2017, 2, 1), 'C793', 3),
    ('ISBN3-794-36758-1', '看海的人', '[日]小林泰三', '新星出版社', date(2015, 2, 1), 'C794', 3),
    ('ISBN3-795-80945-9', '海底两万里', '[法]儒尔·凡尔纳', '译林出版社', date(2002, 9, 1), 'C795', 3),
]
for isbn, title, author, publisher, published, call_no, copies in seed_books:
    for copy_no in range(1, copies + 1):
        book_entry(isbn, title, author, publisher, published, 'lib0001',
                   f'{call_no}.{copy_no}', '图书流通室')
# file: src/amuse/community/tupan/__init__.py (repo: sibonyves/amuse, license: Apache-2.0)
from .interface import Tupan
# file: rmnest/__init__.py (repo: pravirkr/rmnest, license: MIT)
from . import likelihood
from . import model
from . import utils
from . import fit_RM
# file: graphspectra/__init__.py (repo: recursive-automata/graphspectra, license: MIT)
from graphspectra.decompose import *
from graphspectra.laplacian import *
from graphspectra.read import *
from graphspectra.visualize import *
# file: getCharts.py (repo: nizarabdouss/tradingbotV2, license: MIT)
# `import selenium as webdriver` only aliased the top-level package;
# the WebDriver API lives in the `selenium.webdriver` submodule.
from selenium import webdriver
# file: register/views.py (repo: spartenwolffe/Django-Blog, license: MIT)
from django.shortcuts import render, redirect
from django.http import HttpResponse
# Create your views here.
def register(request):
    # Django passes the HttpRequest as the first argument; naming it
    # `response` (as before) was misleading.
    return render(request, "register/register.html", {})
# file: src/interpreter/__init__.py (repo: aboveyou00/cc, license: MIT)
from interpreter.CopperInterpreter import *
# file: build/lib/TextProcessing/__init__.py (repo: mayurgpt07/TextProcessingPackage, license: MIT)
import os
try:
    import pandas
except ImportError:
    os.system('python -m pip install pandas')
try:
    import nltk
except ImportError:
    os.system('python -m pip install nltk')
try:
    import numpy
except ImportError:
    os.system('python -m pip install numpy')
try:
    import wordcloud
except ImportError:
    os.system('python -m pip install wordcloud')
try:
    import matplotlib.pyplot as plt
except ImportError:
    os.system('python -m pip install matplotlib')
try:
    import bs4  # the beautifulsoup4 distribution is imported as `bs4`, not `beautifulsoup4`
except ImportError:
    os.system('python -m pip install beautifulsoup4')
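The repeated try/install blocks in this `__init__` can be collapsed into one helper. A sketch, not part of the original package (the `ensure_package` name is introduced here): it uses `sys.executable` so pip installs into the interpreter that is actually running, and takes a separate pip name for distributions whose import name differs — beautifulsoup4 installs as `beautifulsoup4` but imports as `bs4`.

```python
import importlib
import subprocess
import sys

def ensure_package(import_name, pip_name=None):
    """Return the imported module, installing its pip distribution on ImportError."""
    try:
        return importlib.import_module(import_name)
    except ImportError:
        # Install into the interpreter running this script, not whatever
        # `python` happens to be on PATH.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or import_name]
        )
        return importlib.import_module(import_name)
```

For example, `bs4 = ensure_package("bs4", pip_name="beautifulsoup4")` replaces the last try/except pair above.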
# file: gym_PVDER/envs/__init__.py (repo: sibyjackgrove/gym-SolarPVDER-environment, license: MIT)
from gym_PVDER.envs.PVDER_env import PVDER
# file: rh_pathfinding/src/rh_pathfinding/utils/__init__.py (repo: RhinohawkUAV/rh_ros, license: MIT)
from minheap import MinHeap
# file: fastreport_cloud_sdk/api/__init__.py (repo: FastReports/FastReport-Cloud-Python, license: MIT)
from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from fastreport_cloud_sdk.api.api_keys_api import ApiKeysApi
from fastreport_cloud_sdk.api.configuration_api import ConfigurationApi
from fastreport_cloud_sdk.api.data_sources_api import DataSourcesApi
from fastreport_cloud_sdk.api.download_api import DownloadApi
from fastreport_cloud_sdk.api.exports_api import ExportsApi
from fastreport_cloud_sdk.api.group_users_api import GroupUsersApi
from fastreport_cloud_sdk.api.groups_api import GroupsApi
from fastreport_cloud_sdk.api.health_check_api import HealthCheckApi
from fastreport_cloud_sdk.api.reports_api import ReportsApi
from fastreport_cloud_sdk.api.subscription_groups_api import SubscriptionGroupsApi
from fastreport_cloud_sdk.api.subscription_invites_api import SubscriptionInvitesApi
from fastreport_cloud_sdk.api.subscription_plans_api import SubscriptionPlansApi
from fastreport_cloud_sdk.api.subscription_users_api import SubscriptionUsersApi
from fastreport_cloud_sdk.api.subscriptions_api import SubscriptionsApi
from fastreport_cloud_sdk.api.tasks_api import TasksApi
from fastreport_cloud_sdk.api.templates_api import TemplatesApi
from fastreport_cloud_sdk.api.user_profile_api import UserProfileApi
from fastreport_cloud_sdk.api.user_settings_api import UserSettingsApi
# file: pyinsights/__init__.py (repo: Anselmoo/csv_first_insight, license: MIT)
import sys
sys.path.append("..")
from . import sklsetups as skl
from . import dataread as dr
from . import mlmodels as ml
# file: tests/test_qq.py (repo: decentfox/async-weather-sdk, license: MIT)
import sys
import asyncio
import aiohttp
import pytest
from async_weather_sdk.qq import query_current_weather, query_weather_forecast
from async_weather_sdk.qq import QQMap, QQWeather
pytestmark = pytest.mark.asyncio
async def test_query_current_weather(aresponses, qq_forecast_resp):
with pytest.raises(ValueError, match="Please provide tencent map api key"):
await query_current_weather("", "北京")
with pytest.raises(ValueError, match="Empty query"):
await query_current_weather("API_KEY", "")
aresponses.add(
"apis.map.qq.com",
"/ws/location/v1/ip",
"GET",
response={
"status": 0,
"message": "query ok",
"result": {
"ip": "61.135.17.68",
"location": {"lat": 39.90469, "lng": 116.40717},
"ad_info": {
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "",
"adcode": 110000,
},
},
},
)
aresponses.add(
"wis.qq.com", "/weather/common", "GET", response=qq_forecast_resp,
)
res = await query_current_weather("API_KEY", "61.135.17.68")
assert "observe" in res
assert res["observe"] == {
"degree": "29",
"humidity": "30",
"precipitation": "0.0",
"pressure": "998",
"update_time": "202006011323",
"weather": "晴",
"weather_code": "00",
"weather_short": "晴",
"wind_direction": "5",
"wind_power": "2",
}
assert res["rise"] == {
"sunrise": "04:47",
"sunset": "19:36",
"time": "20200601",
}
assert res["location"] == {
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "",
"adcode": 110000,
}
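The `pytest.raises(..., match=...)` checks at the top of this test pin down the SDK's argument-validation error messages. A minimal sketch of validation that would satisfy them — the function name and the exact upper bound on days are assumptions here, not the real `async_weather_sdk.qq` implementation:

```python
def validate_forecast_args(api_key, query, days=1):
    # Error messages mirror the ones asserted in the tests.
    if not api_key:
        raise ValueError("Please provide tencent map api key")
    if not query:
        raise ValueError("Empty query")
    if not 1 <= days <= 7:  # upper bound assumed; the tests only show 10 rejected
        raise ValueError("Invalid forecast days")
```

Note that `match` in `pytest.raises` is applied with `re.search`, so the test strings above are patterns, not exact matches.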
async def test_query_weather_forecast(aresponses, qq_forecast_resp):
with pytest.raises(ValueError, match="Please provide tencent map api key"):
await query_weather_forecast("", "北京")
with pytest.raises(ValueError, match="Empty query"):
await query_weather_forecast("API_KEY", "")
with pytest.raises(ValueError, match="Invalid forecast days"):
await query_weather_forecast("API_KEY", "北京", 10)
aresponses.add(
"apis.map.qq.com",
"/ws/location/v1/ip",
"GET",
response={
"status": 0,
"message": "query ok",
"result": {
"ip": "61.135.17.68",
"location": {"lat": 39.90469, "lng": 116.40717},
"ad_info": {
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "",
"adcode": 110000,
},
},
},
)
aresponses.add(
"wis.qq.com", "/weather/common", "GET", response=qq_forecast_resp,
)
res = await query_weather_forecast("API_KEY", "61.135.17.68", 2)
assert res == {
"rise": [
{"sunrise": "04:47", "sunset": "19:36", "time": "20200601"},
{"sunrise": "04:47", "sunset": "19:37", "time": "20200602"},
],
"forecast": [
{
"day_weather": "小雨",
"day_weather_code": "07",
"day_weather_short": "小雨",
"day_wind_direction": "西北风",
"day_wind_direction_code": "7",
"day_wind_power": "4",
"day_wind_power_code": "1",
"max_degree": "26",
"min_degree": "14",
"night_weather": "晴",
"night_weather_code": "00",
"night_weather_short": "晴",
"night_wind_direction": "西风",
"night_wind_direction_code": "6",
"night_wind_power": "3",
"night_wind_power_code": "0",
"time": "2020-05-31",
},
{
"day_weather": "雷阵雨",
"day_weather_code": "04",
"day_weather_short": "雷阵雨",
"day_wind_direction": "西南风",
"day_wind_direction_code": "5",
"day_wind_power": "5",
"day_wind_power_code": "2",
"max_degree": "30",
"min_degree": "16",
"night_weather": "多云",
"night_weather_code": "01",
"night_weather_short": "多云",
"night_wind_direction": "西南风",
"night_wind_direction_code": "5",
"night_wind_power": "3",
"night_wind_power_code": "0",
"time": "2020-06-01",
},
{
"day_weather": "晴",
"day_weather_code": "00",
"day_weather_short": "晴",
"day_wind_direction": "南风",
"day_wind_direction_code": "4",
"day_wind_power": "5",
"day_wind_power_code": "2",
"max_degree": "33",
"min_degree": "20",
"night_weather": "晴",
"night_weather_code": "00",
"night_weather_short": "晴",
"night_wind_direction": "西南风",
"night_wind_direction_code": "5",
"night_wind_power": "4",
"night_wind_power_code": "1",
"time": "2020-06-02",
},
],
"location": {
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "",
"adcode": 110000,
},
}
async def test_qq_weather_sdk_fetch_weather(aresponses):
aresponses.add(
"wis.qq.com",
"/weather/common",
"GET",
response={
"message": "OK",
"status": 200,
"data": {
"observe": {
"degree": "29",
"humidity": "30",
"precipitation": "0.0",
"pressure": "998",
"update_time": "202006011323",
"weather": "晴",
"weather_code": "00",
"weather_short": "晴",
"wind_direction": "5",
"wind_power": "2",
},
},
},
)
async with aiohttp.ClientSession() as session:
qq_weather = QQWeather(session=session)
res = await qq_weather.fetch_weather("北京", "北京", "observe")
assert "observe" in res
aresponses.add(
"wis.qq.com",
"/weather/common",
"GET",
response={"status": 311, "message": "key格式错误"},
)
async with aiohttp.ClientSession() as session:
qq_weather = QQWeather(session=session)
res = await qq_weather.fetch_weather("北京", "北京", "observe")
assert res == {}
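This test expects `fetch_weather` to fall back to `{}` when wis.qq.com reports an error; note that this endpoint signals success with `status == 200`. A hedged sketch of that guard — the helper name is ours, not the SDK's:

```python
def extract_weather_data(resp):
    # wis.qq.com marks success with status == 200; anything else
    # (e.g. 311, "key格式错误") yields an empty result, as the test expects.
    if resp.get("status") != 200:
        return {}
    return resp.get("data", {})
```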
async def test_qq_weather_sdk_fetch_current_weather(
aresponses, qq_forecast_resp
):
aresponses.add(
"wis.qq.com", "/weather/common", "GET", response=qq_forecast_resp,
)
async with aiohttp.ClientSession() as session:
qq_weather = QQWeather(session=session)
res = await qq_weather.fetch_current_weather("北京", "北京")
assert "observe" in res
assert "sunrise" in res["rise"]
async def test_qq_weather_sdk_fetch_weather_forecast(
aresponses, qq_forecast_resp
):
aresponses.add(
"wis.qq.com", "/weather/common", "GET", response=qq_forecast_resp,
)
async with aiohttp.ClientSession() as session:
qq_weather = QQWeather(session=session)
res = await qq_weather.fetch_weather_forecast("北京", "北京", 3)
assert "forecast" in res
assert len(res["forecast"]) == 4
assert "rise" in res
assert len(res["rise"]) == 3
aresponses.add(
"wis.qq.com", "/weather/common", "GET", response=qq_forecast_resp,
)
async with aiohttp.ClientSession() as session:
qq_weather = QQWeather(session=session)
res = await qq_weather.fetch_weather_forecast("北京", "北京", 1)
assert "forecast" in res
assert len(res["forecast"]) == 25
assert "rise" in res
assert len(res["rise"]) == 1
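The length assertions in the forecast tests encode a windowing rule: one `rise` entry per requested day, and either an hourly list (25 entries) when a single day is requested or `days + 1` daily entries (today plus the horizon) otherwise — consistent with days=2 giving 3 forecast entries earlier in the file and days=3 giving 4 here. A sketch of that rule as inferred from the assertions; the real slicing lives inside `QQWeather` and may differ:

```python
def expected_lengths(days):
    # Inferred from the test assertions: days == 1 returns the hourly
    # forecast (25 entries); otherwise days + 1 daily entries, today included.
    forecast_len = 25 if days == 1 else days + 1
    return forecast_len, days  # (forecast entries, rise entries)
```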
async def test_qq_map_wrong_api_key(aresponses):
aresponses.add(
"apis.map.qq.com",
"/ws/location/v1/ip",
"GET",
response={"status": 311, "message": "key格式错误"},
)
async with aiohttp.ClientSession() as session:
qq_map = QQMap("API_KEY", session=session)
res = await qq_map.location_lookup_by_ip("61.135.17.68")
assert res == {}
async def test_qq_map_location_lookup_by_ip(aresponses):
aresponses.add(
"apis.map.qq.com",
"/ws/location/v1/ip",
"GET",
response={
"status": 0,
"message": "query ok",
"result": {
"ip": "61.135.17.68",
"location": {"lat": 39.90469, "lng": 116.40717},
"ad_info": {
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "",
"adcode": 110000,
},
},
},
)
async with aiohttp.ClientSession() as session:
qq_map = QQMap("API_KEY", session=session)
res = await qq_map.location_lookup_by_ip("61.135.17.68")
assert res == {
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "",
"adcode": 110000,
}
async def test_qq_map_location_lookup_by_coordinates(aresponses):
aresponses.add(
"apis.map.qq.com",
"/ws/geocoder/v1",
"GET",
response={
"status": 0,
"message": "query ok",
"result": {
"location": {"lat": 39.90469, "lng": 116.40717},
"address": "北京市东城区正义路2号",
"ad_info": {
"nation_code": "156",
"adcode": "110101",
"city_code": "156110000",
"name": "中国,北京市,北京市,东城区",
"location": {"lat": 39.916668, "lng": 116.434578},
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "东城区",
},
},
},
)
async with aiohttp.ClientSession() as session:
qq_map = QQMap("API_KEY", session=session)
res = await qq_map.location_lookup_by_coordinates("39.90469,116.40717")
assert res == {
"nation_code": "156",
"adcode": "110101",
"city_code": "156110000",
"name": "中国,北京市,北京市,东城区",
"location": {"lat": 39.916668, "lng": 116.434578},
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "东城区",
}
aresponses.add(
"apis.map.qq.com",
"/ws/geocoder/v1",
"GET",
response={"status": 400, "message": "query failed",},
)
async with aiohttp.ClientSession() as session:
qq_map = QQMap("API_KEY", session=session)
res = await qq_map.location_lookup_by_coordinates("39.90469,116.40717")
assert res == {}
async def test_qq_map_location_lookup_by_keyword(aresponses):
aresponses.add(
"apis.map.qq.com",
"/ws/district/v1/search",
"GET",
response={
"status": 0,
"message": "query ok",
"result": [
[
{
"id": "110000",
"name": "北京",
"fullname": "北京市",
"pinyin": ["bei", "jing"],
"level": 1,
"location": {"lat": 39.90469, "lng": 116.40717},
"address": "北京",
},
{
"id": "230225580",
"fullname": "北京市双河农场",
"level": 4,
"location": {"lat": 47.866631, "lng": 123.753351},
"address": "黑龙江,齐齐哈尔,甘南县,北京市双河农场",
},
]
],
},
)
aresponses.add(
"apis.map.qq.com",
"/ws/geocoder/v1",
"GET",
response={
"status": 0,
"message": "query ok",
"result": {
"location": {"lat": 39.90469, "lng": 116.40717},
"address": "北京市东城区正义路2号",
"ad_info": {
"nation_code": "156",
"adcode": "110101",
"city_code": "156110000",
"name": "中国,北京市,北京市,东城区",
"location": {"lat": 39.916668, "lng": 116.434578},
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "东城区",
},
},
},
)
async with aiohttp.ClientSession() as session:
qq_map = QQMap("API_KEY", session=session)
res = await qq_map.location_lookup_by_keyword("北京市")
assert res == {
"nation_code": "156",
"adcode": "110101",
"city_code": "156110000",
"name": "中国,北京市,北京市,东城区",
"location": {"lat": 39.916668, "lng": 116.434578},
"nation": "中国",
"province": "北京市",
"city": "北京市",
"district": "东城区",
}
aresponses.add(
"apis.map.qq.com",
"/ws/district/v1/search",
"GET",
response={"status": 0, "message": "query ok", "result": [],},
)
async with aiohttp.ClientSession() as session:
qq_map = QQMap("API_KEY", session=session)
res = await qq_map.location_lookup_by_keyword("北京市")
assert res == {}
aresponses.add(
"apis.map.qq.com",
"/ws/district/v1/search",
"GET",
response={"status": 400, "message": "query failed",},
)
async with aiohttp.ClientSession() as session:
qq_map = QQMap("API_KEY", session=session)
res = await qq_map.location_lookup_by_keyword("北京市")
assert res == {}
@pytest.fixture()
def mock_location_lookup_by_ip(mocker):
future = asyncio.Future()
mocker.patch(
"async_weather_sdk.qq.QQMap.location_lookup_by_ip", return_value=future
)
future.set_result("mock_location_lookup_by_ip")
return future
@pytest.fixture()
def mock_location_lookup_by_coordinates(mocker):
future = asyncio.Future()
mocker.patch(
"async_weather_sdk.qq.QQMap.location_lookup_by_coordinates",
return_value=future,
)
future.set_result("location_lookup_by_coordinates")
return future
@pytest.fixture()
def mock_location_lookup_by_keyword(mocker):
future = asyncio.Future()
mocker.patch(
"async_weather_sdk.qq.QQMap.location_lookup_by_keyword",
return_value=future,
)
future.set_result("location_lookup_by_keyword")
return future
async def test_qq_map_location_lookup(
mock_location_lookup_by_ip,
mock_location_lookup_by_coordinates,
mock_location_lookup_by_keyword,
):
if sys.version_info >= (3, 8):
return
qq_map = QQMap("API_KEY")
res = await qq_map.location_lookup("61.135.17.68")
assert res == "mock_location_lookup_by_ip"
res = await qq_map.location_lookup("39.90469,116.40717")
assert res == "location_lookup_by_coordinates"
res = await qq_map.location_lookup("北京市")
assert res == "location_lookup_by_keyword"
async def test_qq_map_location_lookup_p38(mocker):
if sys.version_info < (3, 8):
return
mocker.patch("async_weather_sdk.qq.QQMap.location_lookup_by_ip")
mocker.patch("async_weather_sdk.qq.QQMap.location_lookup_by_coordinates")
mocker.patch("async_weather_sdk.qq.QQMap.location_lookup_by_keyword")
qq_map = QQMap("API_KEY")
await qq_map.location_lookup("61.135.17.68")
QQMap.location_lookup_by_ip.assert_called_once()
await qq_map.location_lookup("39.90469,116.40717")
QQMap.location_lookup_by_coordinates.assert_called_once()
await qq_map.location_lookup("北京市")
QQMap.location_lookup_by_keyword.assert_called_once()
| 30.802239 | 79 | 0.498243 | 1,671 | 16,510 | 4.688211 | 0.113704 | 0.069696 | 0.063314 | 0.043656 | 0.87733 | 0.821292 | 0.77687 | 0.754532 | 0.712535 | 0.659561 | 0 | 0.061583 | 0.355784 | 16,510 | 535 | 80 | 30.859813 | 0.674972 | 0 | 0 | 0.608974 | 0 | 0 | 0.253119 | 0.045609 | 0 | 0 | 0 | 0 | 0.064103 | 1 | 0.00641 | false | 0 | 0.012821 | 0 | 0.029915 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
26d5eca000c56bc37b528d74045d16001a29b84d | 25,061 | py | Python | integrationtest/vm/installation/test_stub.py | sherry546/zstack-woodpecker | 54a37459f2d72ce6820974feaa6eb55772c3d2ce | [
"Apache-2.0"
] | 1 | 2021-03-21T12:41:11.000Z | 2021-03-21T12:41:11.000Z | integrationtest/vm/installation/test_stub.py | sherry546/zstack-woodpecker | 54a37459f2d72ce6820974feaa6eb55772c3d2ce | [
"Apache-2.0"
] | null | null | null | integrationtest/vm/installation/test_stub.py | sherry546/zstack-woodpecker | 54a37459f2d72ce6820974feaa6eb55772c3d2ce | [
"Apache-2.0"
] | 1 | 2017-05-19T06:40:40.000Z | 2017-05-19T06:40:40.000Z | import os
import subprocess
import time
import zstacklib.utils.ssh as ssh
import zstackwoodpecker.test_util as test_util
import zstackwoodpecker.test_lib as test_lib
import zstackwoodpecker.operations.resource_operations as res_ops
import zstackwoodpecker.zstack_test.zstack_test_vm as zstack_vm_header
def create_vlan_vm(image_name, l3_name=None, disk_offering_uuids=None):
image_uuid = test_lib.lib_get_image_by_name(image_name).uuid
if not l3_name:
l3_name = os.environ.get('l3PublicNetworkName')
l3_net_uuid = test_lib.lib_get_l3_by_name(l3_name).uuid
return create_vm([l3_net_uuid], image_uuid, 'zs_install_%s' % image_name, \
disk_offering_uuids)
def create_vm(l3_uuid_list, image_uuid, vm_name = None, \
disk_offering_uuids = None, default_l3_uuid = None):
vm_creation_option = test_util.VmOption()
conditions = res_ops.gen_query_conditions('name', '=', os.environ.get('instanceOfferingName_m'))
instance_offering_uuid = res_ops.query_resource(res_ops.INSTANCE_OFFERING, conditions)[0].uuid
vm_creation_option.set_instance_offering_uuid(instance_offering_uuid)
vm_creation_option.set_l3_uuids(l3_uuid_list)
vm_creation_option.set_image_uuid(image_uuid)
vm_creation_option.set_name(vm_name)
vm_creation_option.set_data_disk_uuids(disk_offering_uuids)
vm_creation_option.set_default_l3_uuid(default_l3_uuid)
vm = zstack_vm_header.ZstackTestVm()
vm.set_creation_option(vm_creation_option)
vm.create()
return vm
def create_instance_vm(image_name, instance_offering_uuid, l3_name=None, disk_offering_uuids = None, default_l3_uuid = None):
image_uuid = test_lib.lib_get_image_by_name(image_name).uuid
if not l3_name:
l3_name = os.environ.get('l3PublicNetworkName')
l3_net_uuid = test_lib.lib_get_l3_by_name(l3_name).uuid
vm_name = 'zs_install_%s' % image_name
vm_creation_option = test_util.VmOption()
vm_creation_option.set_instance_offering_uuid(instance_offering_uuid)
vm_creation_option.set_l3_uuids([l3_net_uuid])
vm_creation_option.set_image_uuid(image_uuid)
vm_creation_option.set_name(vm_name)
vm_creation_option.set_data_disk_uuids(disk_offering_uuids)
vm_creation_option.set_default_l3_uuid(default_l3_uuid)
vm = zstack_vm_header.ZstackTestVm()
vm.set_creation_option(vm_creation_option)
vm.create()
return vm
def check_str(string):
    if string is None:
return ""
return string
def execute_shell_in_process(cmd, tmp_file, timeout = 1200, no_timeout_excep = False):
logfd = open(tmp_file, 'w', 0)
process = subprocess.Popen(cmd, executable='/bin/sh', shell=True, stdout=logfd, stderr=logfd, universal_newlines=True)
start_time = time.time()
while process.poll() is None:
curr_time = time.time()
test_time = curr_time - start_time
if test_time > timeout:
process.kill()
logfd.close()
logfd = open(tmp_file, 'r')
test_util.test_logger('[shell:] %s [timeout logs:] %s' % (cmd, '\n'.join(logfd.readlines())))
logfd.close()
if no_timeout_excep:
test_util.test_logger('[shell:] %s timeout, after %d seconds' % (cmd, test_time))
return 1
else:
os.system('rm -f %s' % tmp_file)
test_util.test_fail('[shell:] %s timeout, after %d seconds' % (cmd, timeout))
        # test_time is a float, so compare on whole elapsed seconds
        if int(test_time) % 10 == 0:
print('shell script used: %ds' % int(test_time))
time.sleep(1)
logfd.close()
logfd = open(tmp_file, 'r')
test_util.test_logger('[shell:] %s [logs]: %s' % (cmd, '\n'.join(logfd.readlines())))
logfd.close()
return process.returncode
def execute_shell_in_process_stdout(cmd, tmp_file, timeout = 1200, no_timeout_excep = False):
logfd = open(tmp_file, 'w', 0)
process = subprocess.Popen(cmd, executable='/bin/sh', shell=True, stdout=logfd, universal_newlines=True)
start_time = time.time()
while process.poll() is None:
curr_time = time.time()
test_time = curr_time - start_time
if test_time > timeout:
process.kill()
logfd.close()
logfd = open(tmp_file, 'r')
test_util.test_logger('[shell:] %s [timeout logs:] %s' % (cmd, '\n'.join(logfd.readlines())))
logfd.close()
if no_timeout_excep:
test_util.test_logger('[shell:] %s timeout, after %d seconds' % (cmd, test_time))
return 1
else:
os.system('rm -f %s' % tmp_file)
test_util.test_fail('[shell:] %s timeout, after %d seconds' % (cmd, timeout))
        # test_time is a float, so compare on whole elapsed seconds
        if int(test_time) % 10 == 0:
print('shell script used: %ds' % int(test_time))
time.sleep(1)
logfd.close()
logfd = open(tmp_file, 'r')
stdout = '\n'.join(logfd.readlines())
logfd.close()
test_util.test_logger('[shell:] %s [logs]: %s' % (cmd, stdout))
return (process.returncode, stdout)
def scp_file_to_vm(vm_inv, src_file, target_file):
vm_ip = vm_inv.vmNics[0].ip
vm_username = test_lib.lib_get_vm_username(vm_inv)
vm_password = test_lib.lib_get_vm_password(vm_inv)
ssh.scp_file(src_file, target_file, vm_ip, vm_username, vm_password)
def copy_id_dsa(vm_inv, ssh_cmd, tmp_file):
src_file = '/root/.ssh/id_dsa'
target_file = '/root/.ssh/id_dsa'
if not os.path.exists(src_file):
os.system("ssh-keygen -t dsa -N '' -f %s" % src_file)
scp_file_to_vm(vm_inv, src_file, target_file)
cmd = '%s "chmod 600 /root/.ssh/id_dsa"' % ssh_cmd
process_result = execute_shell_in_process(cmd, tmp_file)
def copy_id_dsa_pub(vm_inv):
src_file = '/root/.ssh/id_dsa.pub'
target_file = '/root/.ssh/authorized_keys'
if not os.path.exists(src_file):
os.system("ssh-keygen -t dsa -N '' -f %s" % src_file)
scp_file_to_vm(vm_inv, src_file, target_file)
def prepare_mevoco_test_env(vm_inv):
all_in_one_pkg = os.environ['zstackPkg']
scp_file_to_vm(vm_inv, all_in_one_pkg, '/root/zizhu.bin')
vm_ip = vm_inv.vmNics[0].ip
ssh.make_ssh_no_password(vm_ip, test_lib.lib_get_vm_username(vm_inv), \
test_lib.lib_get_vm_password(vm_inv))
def prepare_test_env(vm_inv, aio_target):
zstack_install_script = os.environ['zstackInstallScript']
target_file = '/root/zstack_installer.sh'
vm_ip = vm_inv.vmNics[0].ip
vm_username = test_lib.lib_get_vm_username(vm_inv)
vm_password = test_lib.lib_get_vm_password(vm_inv)
scp_file_to_vm(vm_inv, zstack_install_script, target_file)
all_in_one_pkg = os.environ['zstackPkg']
scp_file_to_vm(vm_inv, all_in_one_pkg, aio_target)
ssh.make_ssh_no_password(vm_ip, vm_username, vm_password)
def prepare_upgrade_test_env(vm_inv, aio_target, upgrade_pkg):
zstack_install_script = os.environ['zstackInstallScript']
target_file = '/root/zstack_installer.sh'
vm_ip = vm_inv.vmNics[0].ip
vm_username = test_lib.lib_get_vm_username(vm_inv)
vm_password = test_lib.lib_get_vm_password(vm_inv)
scp_file_to_vm(vm_inv, zstack_install_script, target_file)
scp_file_to_vm(vm_inv, upgrade_pkg, aio_target)
ssh.make_ssh_no_password(vm_ip, vm_username, vm_password)
def prepare_yum_repo(vm_inv):
origin_file = '/etc/yum.repos.d/epel.repo'
target_file = '/etc/yum.repos.d/epel.repo'
vm_ip = vm_inv.vmNics[0].ip
vm_username = test_lib.lib_get_vm_username(vm_inv)
vm_password = test_lib.lib_get_vm_password(vm_inv)
scp_file_to_vm(vm_inv, origin_file, target_file)
ssh.make_ssh_no_password(vm_ip, vm_username, vm_password)
def upgrade_zstack(ssh_cmd, target_file, tmp_file):
env_var = "WEBSITE='%s'" % 'localhost'
# cmd = '%s "%s bash %s -u -R aliyun"' % (ssh_cmd, env_var, target_file)
cmd = '%s "%s bash %s -u -o"' % (ssh_cmd, env_var, target_file)
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
cmd = '%s "cat /tmp/zstack_installation.log"' % ssh_cmd
execute_shell_in_process(cmd, tmp_file)
if 'no management-node-ready message received within' in open(tmp_file).read():
times = 30
cmd = '%s "zstack-ctl status"' % ssh_cmd
while (times > 0):
time.sleep(10)
process_result = execute_shell_in_process(cmd, tmp_file, 10, True)
                times -= 1
                if process_result == 0:
                    test_util.test_logger("management node start after extra %d seconds" % ((30 - times + 1) * 10))
return 0
test_util.test_logger("mn node is still not started up, wait for another 10 seconds...")
else:
test_util.test_fail('zstack upgrade failed')
def execute_mevoco_aliyun_install(ssh_cmd, tmp_file):
target_file = '/root/zizhu.bin'
env_var = "ZSTACK_ALL_IN_ONE='%s' WEBSITE='%s'" % \
(target_file, 'localhost')
cmd = '%s "%s bash /root/zizhu.bin -R aliyun -m"' % (ssh_cmd, env_var)
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
cmd = '%s "cat /tmp/zstack_installation.log"' % ssh_cmd
execute_shell_in_process(cmd, tmp_file)
if 'no management-node-ready message received within' in open(tmp_file).read():
times = 30
cmd = '%s "zstack-ctl status"' % ssh_cmd
while (times > 0):
time.sleep(10)
process_result = execute_shell_in_process(cmd, tmp_file, 10, True)
                times -= 1
                if process_result == 0:
                    test_util.test_logger("management node start after extra %d seconds" % ((30 - times + 1) * 10))
return 0
test_util.test_logger("mn node is still not started up, wait for another 10 seconds...")
else:
test_util.test_fail('zstack installation failed')
def execute_mevoco_online_install(ssh_cmd, tmp_file):
target_file = '/root/zizhu.bin'
env_var = "ZSTACK_ALL_IN_ONE='%s' WEBSITE='%s'" % \
(target_file, 'localhost')
cmd = '%s "%s bash /root/zizhu.bin -m"' % (ssh_cmd, env_var)
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
cmd = '%s "cat /tmp/zstack_installation.log"' % ssh_cmd
execute_shell_in_process(cmd, tmp_file)
if 'no management-node-ready message received within' in open(tmp_file).read():
times = 30
cmd = '%s "zstack-ctl status"' % ssh_cmd
while (times > 0):
time.sleep(10)
process_result = execute_shell_in_process(cmd, tmp_file, 10, True)
                times -= 1
                if process_result == 0:
                    test_util.test_logger("management node start after extra %d seconds" % ((30 - times + 1) * 10))
return 0
test_util.test_logger("mn node is still not started up, wait for another 10 seconds...")
else:
test_util.test_fail('zstack installation failed')
def execute_install_with_args(ssh_cmd, args, target_file, tmp_file):
env_var = " WEBSITE='%s'" % ('localhost')
cmd = '%s "%s bash %s %s"' % (ssh_cmd, env_var, target_file, args)
process_result = execute_shell_in_process(cmd, tmp_file, 2400)
if process_result != 0:
cmd = '%s "cat /tmp/zstack_installation.log"' % ssh_cmd
execute_shell_in_process(cmd, tmp_file)
if 'no management-node-ready message received within' in open(tmp_file).read():
times = 30
cmd = '%s "zstack-ctl status"' % ssh_cmd
while (times > 0):
time.sleep(10)
process_result = execute_shell_in_process(cmd, tmp_file, 10, True)
                times -= 1
                if process_result == 0:
                    test_util.test_logger("management node start after extra %d seconds" % ((30 - times + 1) * 10))
return 0
test_util.test_logger("mn node is still not started up, wait for another 10 seconds...")
else:
test_util.test_fail('zstack installation failed')
def execute_all_install(ssh_cmd, target_file, tmp_file):
env_var = " WEBSITE='%s'" % ('localhost')
# cmd = '%s "%s bash %s -R aliyun"' % (ssh_cmd, env_var, target_file)
cmd = '%s "%s bash %s -o"' % (ssh_cmd, env_var, target_file)
process_result = execute_shell_in_process(cmd, tmp_file, 2400)
if process_result != 0:
cmd = '%s "cat /tmp/zstack_installation.log"' % ssh_cmd
execute_shell_in_process(cmd, tmp_file)
if 'no management-node-ready message received within' in open(tmp_file).read():
times = 30
cmd = '%s "zstack-ctl status"' % ssh_cmd
while (times > 0):
time.sleep(10)
process_result = execute_shell_in_process(cmd, tmp_file, 10, True)
                times -= 1
                if process_result == 0:
                    test_util.test_logger("management node start after extra %d seconds" % ((30 - times + 1) * 10))
return 0
test_util.test_logger("mn node is still not started up, wait for another 10 seconds...")
else:
test_util.test_fail('zstack installation failed')
def only_install_zstack(ssh_cmd, target_file, tmp_file):
env_var = "WEBSITE='%s'" % 'localhost'
cmd = '%s "%s bash %s -d -i"' % (ssh_cmd, env_var, target_file)
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
cmd = '%s "cat /tmp/zstack_installation.log"' % ssh_cmd
execute_shell_in_process(cmd, tmp_file)
test_util.test_fail('zstack installation failed')
def check_installation(ssh_cmd, tmp_file, vm_inv):
cmd = '%s "/usr/bin/zstack-cli LogInByAccount accountName=admin password=password"' % ssh_cmd
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli login failed')
vm_passwd = test_lib.lib_get_vm_password(vm_inv)
    vm_ip = vm_inv.vmNics[0].ip
cmd = '%s "/usr/bin/zstack-cli AddSftpBackupStorage name=bs1 description=bs hostname=%s username=root password=%s url=/home/bs"' % (ssh_cmd, vm_ip, vm_passwd)
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli create Backup Storage failed')
cmd = '%s "/usr/bin/zstack-cli QuerySftpBackupStorage name=bs1"' % ssh_cmd
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli Query Backup Storage failed')
cmd = '%s "/usr/bin/zstack-cli QuerySftpBackupStorage name=bs1 fields=uuid" | grep uuid | awk \'{print $2}\'' % ssh_cmd
(process_result, bs_uuid) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli Query Backup Storage failed')
cmd = '%s "/usr/bin/zstack-cli DeleteBackupStorage uuid=%s"' % (ssh_cmd, bs_uuid.split('"')[1])
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli Delete Backup Storage failed')
# check zone
cmd = '%s "/usr/bin/zstack-cli CreateZone name=ZONE1"' % ssh_cmd
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli Create Zone failed')
cmd = '%s "/usr/bin/zstack-cli QueryZone name=ZONE1"' % ssh_cmd
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli Query Zone failed')
cmd = '%s "/usr/bin/zstack-cli QueryZone name=ZONE1 fields=uuid" | grep uuid | awk \'{print $2}\'' % ssh_cmd
(process_result, zone_uuid) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli Query Zone failed')
cmd = '%s "/usr/bin/zstack-cli DeleteZone uuid=%s"' % (ssh_cmd, zone_uuid.split('"')[1])
process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-cli Delete Zone failed')
# check item
cmd = '%s "/usr/bin/zstack-ctl status" | grep \'^version\' | awk \'{print $2}\'' % ssh_cmd
(process_result, version_info) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get version failed')
if '1.3' in version_info or '1.2' in version_info:
cmd = '%s "/usr/bin/zstack-ctl status" | grep \'^status\' | awk \'{print $2}\'' % ssh_cmd
(process_result, status_info) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get status failed')
if not 'Running' in status_info:
test_util.test_dsc('zstack is not running, try to start zstack')
cmd = '%s "/usr/bin/zstack-ctl start"' % ssh_cmd
            process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl start failed')
time.sleep(5)
cmd = '%s "/usr/bin/zstack-ctl status" | grep \'^status\' | awk \'{print $2}\'' % ssh_cmd
(process_result, status_info) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get status failed')
if not 'Running' in status_info:
test_util.test_fail('zstack is not running, start zstack failed')
test_util.test_dsc('check zstack status, zstack is running')
else:
cmd = '%s "/usr/bin/zstack-ctl status" | grep \'^MN status\' | awk \'{print $3}\'' % ssh_cmd
(process_result, mn_status) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get MN status failed')
if not 'Running' in mn_status:
test_util.test_dsc('management node is not running, try to start management node')
cmd = '%s "/usr/bin/zstack-ctl start_node"' % ssh_cmd
            process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl start_node failed')
time.sleep(5)
cmd = '%s "/usr/bin/zstack-ctl status" | grep \'^MN status\' | awk \'{print $3}\'' % ssh_cmd
(process_result, mn_status) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get MN status failed')
if not 'Running' in mn_status:
test_util.test_fail('management node is not running, start management node failed')
test_util.test_dsc('check MN, MN is running')
cmd = '%s "/usr/bin/zstack-ctl status" | grep \'^UI status\' | awk \'{print $3}\'' % ssh_cmd
(process_result, ui_status) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get UI status failed')
if not 'Running' in ui_status:
test_util.test_dsc('UI is not running, try to start UI')
cmd = '%s "/usr/bin/zstack-ctl start_ui"' % ssh_cmd
            process_result = execute_shell_in_process(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl start_ui failed')
time.sleep(5)
            cmd = '%s "/usr/bin/zstack-ctl status" | grep \'^UI status\' | awk \'{print $3}\'' % ssh_cmd
            (process_result, ui_status) = execute_shell_in_process_stdout(cmd, tmp_file)
            if process_result != 0:
                test_util.test_fail('zstack-ctl get UI status failed')
            if not 'Running' in ui_status:
                test_util.test_fail('UI is not running, start UI failed')
test_util.test_dsc('check UI, UI is running')
cmd = '%s "/usr/bin/zstack-ctl status" | grep ^ZSTACK_HOME | awk \'{print $2}\'' % ssh_cmd
(process_result, zstack_home) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl status get ZSTACK_HOME failed')
zstack_home = zstack_home[:-1]
cmd = '%s "[ -d " %s " ] && echo yes || echo no" ' % (ssh_cmd, zstack_home)
(process_result, dir_exist) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('check ZSTACK_HOME failed')
dir_exist = dir_exist[:-1]
if dir_exist == 'no':
test_util.test_fail('there is no ZSTACK_HOME')
cmd = '%s "/usr/bin/zstack-ctl status" | grep ^zstack.properties | awk \'{print $2}\'' % ssh_cmd
(process_result, properties_file) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl status get zstack.properties failed')
properties_file = properties_file[:-1]
cmd = '%s "[ -f " %s " ] && echo yes || echo no" ' % (ssh_cmd, properties_file)
(process_result, file_exist) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('check zstack.properties failed')
file_exist = file_exist[:-1]
if file_exist == 'no':
test_util.test_fail('there is no zstack.properties')
cmd = '%s "/usr/bin/zstack-ctl status" | grep ^log4j2.xml | awk \'{print $2}\'' % ssh_cmd
(process_result, properties_file) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl status get log4j2.xml failed')
properties_file = properties_file[:-1]
cmd = '%s "[ -f " %s " ] && echo yes || echo no" ' % (ssh_cmd, properties_file)
(process_result, file_exist) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('check log4j2.xml failed')
file_exist = file_exist[:-1]
if file_exist == 'no':
test_util.test_fail('there is no log4j2.xml')
cmd = '%s "/usr/bin/zstack-ctl status" | grep ^\'PID file\' | awk \'{print $3}\'' % ssh_cmd
(process_result, properties_file) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl status get PID file failed')
properties_file = properties_file[:-1]
cmd = '%s "[ -f " %s " ] && echo yes || echo no" ' % (ssh_cmd, properties_file)
(process_result, file_exist) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('check PID file failed')
file_exist = file_exist[:-1]
if file_exist == 'no':
test_util.test_fail('there is no PID file')
cmd = '%s "/usr/bin/zstack-ctl status" | grep ^\'log file\' | awk \'{print $3}\'' % ssh_cmd
(process_result, properties_file) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl status get log file failed')
properties_file = properties_file[:-1]
cmd = '%s "[ -f " %s " ] && echo yes || echo no" ' % (ssh_cmd, properties_file)
(process_result, file_exist) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('check log file failed')
file_exist = file_exist[:-1]
if file_exist == 'no':
test_util.test_fail('there is no log file')
def check_zstack_version(ssh_cmd, tmp_file, vm_inv, pkg_version):
cmd = '%s "/usr/bin/zstack-ctl status" | grep ^version | awk \'{print $2}\'' % ssh_cmd
(process_result, version) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get version failed')
version = version[:-1]
test_util.test_dsc("current version number: %s" % version)
if version != pkg_version:
test_util.test_fail('try to install zstack-%s, but current version is zstack-%s' % (pkg_version, version))
def check_zstack_or_mevoco(ssh_cmd, tmp_file, vm_inv, zom):
cmd = '%s "/usr/bin/zstack-ctl status" | grep ^version | awk \'{print $3}\'' % ssh_cmd
(process_result, version) = execute_shell_in_process_stdout(cmd, tmp_file)
if process_result != 0:
test_util.test_fail('zstack-ctl get version failed')
version = version[1:-1]
test_util.test_dsc("current version: %s" % version)
if version != zom:
test_util.test_fail('try to install %s, but current version is %s' % (zom, version))
def update_iso(ssh_cmd, tmp_file, vm_inv, update_file):
scp_file_to_vm(vm_inv, update_file, '/home/update_iso.sh')
cmd = '%s chmod +x /home/update_iso.sh' % ssh_cmd
    (process_result, output) = execute_shell_in_process_stdout(cmd, tmp_file)
cmd = '%s bash /home/update_iso.sh' % ssh_cmd
(process_result, output) = execute_shell_in_process_stdout(cmd, tmp_file)
| 47.735238 | 162 | 0.657236 | 3,653 | 25,061 | 4.198193 | 0.067889 | 0.076291 | 0.057121 | 0.072574 | 0.87174 | 0.859872 | 0.827204 | 0.796296 | 0.769105 | 0.761933 | 0 | 0.012247 | 0.224532 | 25,061 | 524 | 163 | 47.826336 | 0.776886 | 0.006504 | 0 | 0.666667 | 0 | 0.011038 | 0.230213 | 0.021615 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050773 | false | 0.028698 | 0.01766 | 0 | 0.099338 | 0.039735 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f82e239aa363026c168a7090604db14071d59baa | 40 | py | Python | pyorm/__init__.py | TonyFlury/pyorm | 6d811fa32d3ba4c4a013fbb8f627277fa9d20b64 | [
"MIT"
] | null | null | null | pyorm/__init__.py | TonyFlury/pyorm | 6d811fa32d3ba4c4a013fbb8f627277fa9d20b64 | [
"MIT"
] | null | null | null | pyorm/__init__.py | TonyFlury/pyorm | 6d811fa32d3ba4c4a013fbb8f627277fa9d20b64 | [
"MIT"
] | null | null | null | from pyorm.initialise import Initialise
| 20 | 39 | 0.875 | 5 | 40 | 7 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f866b769dadcd8aaf869cac801ef24454677a388 | 33 | py | Python | torchsharp/__init__.py | corenel/torchsharp | 43a585976747222b9c384abad432eb0e826a6f62 | [
"MIT"
] | 6 | 2017-10-30T20:17:41.000Z | 2020-07-22T10:11:49.000Z | torchsharp/__init__.py | corenel/torchsharp | 43a585976747222b9c384abad432eb0e826a6f62 | [
"MIT"
] | null | null | null | torchsharp/__init__.py | corenel/torchsharp | 43a585976747222b9c384abad432eb0e826a6f62 | [
"MIT"
] | null | null | null | from . import data, model, utils
| 16.5 | 32 | 0.727273 | 5 | 33 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 33 | 1 | 33 | 33 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8ba46534d393245ecd1cd21b42680994323b05d | 20 | py | Python | pwnlib/encoders/amd64/__init__.py | zaratec/pwntools | 8793decd1c9b8c822e3db6c27b9cbf6e8cddfeba | [
"MIT"
] | 1 | 2021-01-11T07:11:22.000Z | 2021-01-11T07:11:22.000Z | pwnlib/encoders/amd64/__init__.py | zerocool443/pwntools | 76fe04e3560419380108b398310cc35316b35dfb | [
"MIT"
] | null | null | null | pwnlib/encoders/amd64/__init__.py | zerocool443/pwntools | 76fe04e3560419380108b398310cc35316b35dfb | [
"MIT"
] | 1 | 2021-11-14T01:36:34.000Z | 2021-11-14T01:36:34.000Z | from . import delta
| 10 | 19 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3e759f1929e16d6faae1322dfbdce63ac6593337 | 93 | py | Python | test.py | fstevanus/Web_Scrapper | 6a5c7e4978245035ef899cda674bccc047e25290 | [
"Unlicense"
] | null | null | null | test.py | fstevanus/Web_Scrapper | 6a5c7e4978245035ef899cda674bccc047e25290 | [
"Unlicense"
] | null | null | null | test.py | fstevanus/Web_Scrapper | 6a5c7e4978245035ef899cda674bccc047e25290 | [
"Unlicense"
] | null | null | null | from scrap import getBukalapak
from scrap import getTokopedia
getBukalapak()
getTokopedia()
| 15.5 | 30 | 0.83871 | 10 | 93 | 7.8 | 0.5 | 0.230769 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11828 | 93 | 5 | 31 | 18.6 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e498e23a4a5e3dc8da5405eadd7812b35b6323df | 192 | py | Python | app/controller/snapshot_controller.py | aaronnorrish/PubMedConnections | dc17e141d94afe6d26a9b49b2183c06f3630e561 | [
"CC-BY-4.0"
] | null | null | null | app/controller/snapshot_controller.py | aaronnorrish/PubMedConnections | dc17e141d94afe6d26a9b49b2183c06f3630e561 | [
"CC-BY-4.0"
] | null | null | null | app/controller/snapshot_controller.py | aaronnorrish/PubMedConnections | dc17e141d94afe6d26a9b49b2183c06f3630e561 | [
"CC-BY-4.0"
] | null | null | null |
def query_by_filters(graph_type, filters):
# TODO verify inputs,
# Query Neo4j,
# transform data,
# return results
return "TODO return {} snapshot".format(graph_type)
| 21.333333 | 55 | 0.666667 | 23 | 192 | 5.391304 | 0.695652 | 0.145161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006849 | 0.239583 | 192 | 8 | 56 | 24 | 0.842466 | 0.34375 | 0 | 0 | 0 | 0 | 0.193277 | 0 | 0 | 0 | 0 | 0.125 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
90331e369a6413219b60be1481e794dd7833dfaf | 41 | py | Python | funtest/__init__.py | duboviy/funtest | 60a5b768e55d21efb9953479d9a7171a33a896de | [
"MIT"
] | 6 | 2017-02-12T22:02:32.000Z | 2019-04-22T11:43:21.000Z | funtest/__init__.py | duboviy/funtest | 60a5b768e55d21efb9953479d9a7171a33a896de | [
"MIT"
] | null | null | null | funtest/__init__.py | duboviy/funtest | 60a5b768e55d21efb9953479d9a7171a33a896de | [
"MIT"
] | null | null | null | from .runner import FunctionalTestRunner
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5f5c0c5f5b195e1f86e56d3e2b74329f9c5ea230 | 268 | py | Python | gpflowSlim/neural_kernel_network/__init__.py | jereliu/GPflow-Slim | 5003b60712460be8b981b797b4567372fe77c302 | [
"Apache-2.0"
] | 16 | 2018-07-03T14:30:42.000Z | 2021-09-03T11:12:05.000Z | gpflowSlim/neural_kernel_network/__init__.py | jskDr/GPflow-Slim | 3b6a9eaa4967b7285cbd188b44f670bfda6f12c6 | [
"Apache-2.0"
] | 4 | 2018-09-17T20:34:31.000Z | 2021-02-27T05:17:45.000Z | gpflowSlim/neural_kernel_network/__init__.py | jskDr/GPflow-Slim | 3b6a9eaa4967b7285cbd188b44f670bfda6f12c6 | [
"Apache-2.0"
] | 6 | 2018-07-03T14:07:01.000Z | 2020-06-08T17:53:36.000Z | from .neural_kernel_network import NeuralKernelNetwork
from .neural_kernel_network_wrapper import NKNWrapper
from .neural_kernel_network_v2 import NeuralKernelNetwork as NeuralKernelNetwork_v2
from .neural_kernel_network_wrapper_v2 import NKNWrapper as NKNWrapper_v2
| 44.666667 | 83 | 0.906716 | 34 | 268 | 6.735294 | 0.294118 | 0.174672 | 0.279476 | 0.401747 | 0.262009 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016194 | 0.078358 | 268 | 5 | 84 | 53.6 | 0.910931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5f70812a13bbd21b24b4457f9ca16a1262a16ce4 | 27 | py | Python | slgnn/models/decoder/__init__.py | thomasly/slgnn | caa1e7814498da41ad025b4e62c569fe511848ff | [
"MIT"
] | 2 | 2020-08-31T00:55:31.000Z | 2020-09-01T19:59:30.000Z | slgnn/models/decoder/__init__.py | thomasly/slgnn | caa1e7814498da41ad025b4e62c569fe511848ff | [
"MIT"
] | null | null | null | slgnn/models/decoder/__init__.py | thomasly/slgnn | caa1e7814498da41ad025b4e62c569fe511848ff | [
"MIT"
] | null | null | null | from .model import Decoder
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5f88cfbabb66ee501f447b8377c751b5fda17db3 | 79 | py | Python | karafs/__main__.py | iamvee/karafs | 50a283f040d9359d77186713dc707ac8839285c2 | [
"MIT"
] | 24 | 2021-02-11T07:50:51.000Z | 2022-03-10T11:59:20.000Z | karafs/__main__.py | erfansaberi/karafs | ef477970ef27a892cb3a8012443640560c271832 | [
"MIT"
] | 5 | 2021-02-11T00:58:49.000Z | 2021-02-15T16:40:23.000Z | karafs/__main__.py | erfansaberi/karafs | ef477970ef27a892cb3a8012443640560c271832 | [
"MIT"
] | 6 | 2021-02-11T06:53:52.000Z | 2021-06-02T20:44:17.000Z | """ Run the karafs """
import sys
from karafs import main
main(sys.argv[1:])
| 11.285714 | 23 | 0.670886 | 13 | 79 | 4.076923 | 0.692308 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.177215 | 79 | 6 | 24 | 13.166667 | 0.8 | 0.177215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3971645b802211fdfb596af30f2a3c454fb9afdf | 38 | py | Python | attach/__init__.py | yatharth/attach | 30d9530a18a45e1c1dba4edec42283974520300e | [
"MIT"
] | null | null | null | attach/__init__.py | yatharth/attach | 30d9530a18a45e1c1dba4edec42283974520300e | [
"MIT"
] | null | null | null | attach/__init__.py | yatharth/attach | 30d9530a18a45e1c1dba4edec42283974520300e | [
"MIT"
] | null | null | null | from .attach import attach, Namespace
| 19 | 37 | 0.815789 | 5 | 38 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3987d57f338498ebff00d1cc3084d4ae9aaa7f86 | 37,894 | py | Python | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/58.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/58.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/58.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 3167
passenger_arriving = (
(6, 13, 5, 2, 1, 0, 6, 12, 8, 2, 4, 0), # 0
(4, 6, 4, 3, 4, 0, 6, 11, 2, 4, 1, 0), # 1
(1, 6, 4, 4, 1, 0, 8, 8, 7, 6, 1, 0), # 2
(4, 8, 10, 4, 4, 0, 4, 9, 3, 2, 2, 0), # 3
(3, 6, 5, 5, 0, 0, 9, 11, 3, 10, 2, 0), # 4
(3, 11, 12, 2, 2, 0, 10, 6, 7, 7, 2, 0), # 5
(1, 8, 9, 3, 2, 0, 8, 12, 0, 5, 4, 0), # 6
(2, 11, 5, 11, 1, 0, 5, 8, 5, 6, 3, 0), # 7
(6, 5, 9, 3, 4, 0, 12, 5, 4, 5, 3, 0), # 8
(2, 8, 5, 3, 4, 0, 2, 6, 8, 2, 3, 0), # 9
(4, 12, 4, 5, 1, 0, 4, 9, 3, 4, 2, 0), # 10
(5, 14, 5, 5, 5, 0, 8, 3, 8, 0, 1, 0), # 11
(2, 15, 5, 1, 2, 0, 10, 7, 3, 4, 0, 0), # 12
(6, 5, 3, 3, 1, 0, 4, 6, 7, 4, 4, 0), # 13
(9, 10, 7, 3, 5, 0, 10, 9, 8, 3, 1, 0), # 14
(4, 12, 10, 6, 3, 0, 9, 4, 8, 3, 2, 0), # 15
(5, 10, 8, 6, 4, 0, 3, 10, 4, 8, 5, 0), # 16
(4, 10, 7, 4, 2, 0, 11, 7, 2, 2, 1, 0), # 17
(7, 11, 11, 4, 0, 0, 1, 12, 5, 5, 0, 0), # 18
(3, 11, 6, 2, 0, 0, 6, 9, 6, 4, 2, 0), # 19
(4, 11, 4, 3, 1, 0, 7, 10, 2, 3, 0, 0), # 20
(5, 4, 9, 3, 3, 0, 10, 5, 8, 4, 3, 0), # 21
(1, 7, 6, 2, 1, 0, 10, 5, 1, 4, 0, 0), # 22
(3, 13, 9, 1, 5, 0, 5, 9, 9, 10, 3, 0), # 23
(5, 7, 9, 4, 0, 0, 5, 6, 9, 5, 2, 0), # 24
(9, 10, 8, 4, 0, 0, 8, 8, 3, 2, 1, 0), # 25
(8, 3, 3, 4, 1, 0, 5, 9, 5, 4, 1, 0), # 26
(3, 12, 6, 2, 2, 0, 10, 11, 3, 2, 4, 0), # 27
(8, 9, 9, 4, 1, 0, 2, 9, 2, 8, 6, 0), # 28
(4, 6, 11, 6, 3, 0, 10, 8, 2, 7, 1, 0), # 29
(3, 10, 5, 6, 3, 0, 5, 9, 4, 6, 1, 0), # 30
(1, 8, 7, 3, 3, 0, 5, 10, 5, 4, 1, 0), # 31
(7, 11, 4, 4, 7, 0, 4, 9, 9, 4, 4, 0), # 32
(7, 12, 6, 3, 0, 0, 4, 13, 6, 5, 2, 0), # 33
(2, 15, 8, 6, 2, 0, 7, 10, 3, 3, 2, 0), # 34
(7, 6, 8, 6, 1, 0, 3, 9, 6, 4, 1, 0), # 35
(1, 5, 10, 4, 2, 0, 10, 9, 4, 4, 2, 0), # 36
(3, 11, 7, 5, 1, 0, 6, 4, 8, 11, 0, 0), # 37
(2, 6, 9, 4, 1, 0, 3, 6, 7, 2, 2, 0), # 38
(3, 12, 8, 1, 1, 0, 6, 6, 4, 8, 4, 0), # 39
(2, 9, 12, 6, 2, 0, 7, 10, 5, 3, 3, 0), # 40
(3, 11, 4, 9, 0, 0, 5, 7, 5, 5, 1, 0), # 41
(4, 14, 7, 2, 3, 0, 6, 10, 3, 7, 3, 0), # 42
(4, 5, 9, 5, 2, 0, 5, 4, 6, 4, 1, 0), # 43
(4, 9, 5, 3, 5, 0, 12, 12, 5, 2, 6, 0), # 44
(5, 7, 9, 4, 2, 0, 7, 7, 4, 2, 0, 0), # 45
(7, 8, 10, 3, 1, 0, 6, 10, 3, 5, 6, 0), # 46
(5, 9, 5, 8, 1, 0, 7, 9, 4, 4, 3, 0), # 47
(3, 6, 11, 6, 0, 0, 8, 12, 5, 3, 5, 0), # 48
(3, 7, 6, 5, 4, 0, 2, 7, 4, 0, 5, 0), # 49
(3, 11, 7, 7, 2, 0, 4, 6, 4, 2, 5, 0), # 50
(3, 14, 8, 8, 1, 0, 10, 4, 6, 5, 4, 0), # 51
(5, 8, 11, 4, 5, 0, 4, 8, 8, 5, 5, 0), # 52
(5, 10, 6, 6, 6, 0, 5, 13, 8, 5, 3, 0), # 53
(4, 9, 6, 5, 0, 0, 1, 11, 7, 4, 1, 0), # 54
(3, 12, 11, 3, 2, 0, 8, 6, 2, 3, 0, 0), # 55
(3, 13, 9, 4, 4, 0, 2, 8, 8, 4, 4, 0), # 56
(6, 6, 3, 4, 2, 0, 11, 10, 9, 5, 2, 0), # 57
(5, 9, 7, 5, 1, 0, 7, 7, 5, 6, 2, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(3.7095121817383676, 9.515044981060607, 11.19193043059126, 8.87078804347826, 10.000240384615385, 6.659510869565219), # 0
(3.7443308140669203, 9.620858238197952, 11.252381752534994, 8.920190141908213, 10.075193108974359, 6.657240994867151), # 1
(3.7787518681104277, 9.725101964085297, 11.31139817195087, 8.968504830917876, 10.148564102564103, 6.654901690821256), # 2
(3.8127461259877085, 9.827663671875001, 11.368936576156813, 9.01569089673913, 10.22028605769231, 6.652493274456523), # 3
(3.8462843698175795, 9.928430874719417, 11.424953852470724, 9.061707125603865, 10.290291666666668, 6.6500160628019325), # 4
(3.879337381718857, 10.027291085770905, 11.479406888210512, 9.106512303743962, 10.358513621794872, 6.647470372886473), # 5
(3.9118759438103607, 10.12413181818182, 11.53225257069409, 9.150065217391306, 10.424884615384617, 6.644856521739131), # 6
(3.943870838210907, 10.218840585104518, 11.58344778723936, 9.19232465277778, 10.489337339743592, 6.64217482638889), # 7
(3.975292847039314, 10.311304899691358, 11.632949425164242, 9.233249396135266, 10.551804487179488, 6.639425603864735), # 8
(4.006112752414399, 10.401412275094698, 11.680714371786634, 9.272798233695653, 10.61221875, 6.636609171195653), # 9
(4.03630133645498, 10.489050224466892, 11.72669951442445, 9.310929951690824, 10.670512820512823, 6.633725845410628), # 10
(4.065829381279876, 10.5741062609603, 11.7708617403956, 9.347603336352659, 10.726619391025642, 6.630775943538648), # 11
(4.094667669007903, 10.656467897727273, 11.813157937017996, 9.382777173913043, 10.780471153846154, 6.627759782608695), # 12
(4.122786981757876, 10.736022647920176, 11.85354499160954, 9.416410250603866, 10.832000801282053, 6.624677679649759), # 13
(4.15015810164862, 10.81265802469136, 11.891979791488144, 9.448461352657004, 10.881141025641025, 6.621529951690821), # 14
(4.1767518107989465, 10.886261541193182, 11.928419223971721, 9.478889266304348, 10.92782451923077, 6.618316915760871), # 15
(4.202538891327675, 10.956720710578002, 11.96282017637818, 9.507652777777778, 10.971983974358976, 6.61503888888889), # 16
(4.227490125353625, 11.023923045998176, 11.995139536025421, 9.53471067330918, 11.013552083333336, 6.611696188103866), # 17
(4.25157629499561, 11.087756060606061, 12.025334190231364, 9.560021739130436, 11.052461538461543, 6.608289130434783), # 18
(4.274768182372451, 11.148107267554012, 12.053361026313912, 9.58354476147343, 11.088645032051284, 6.604818032910629), # 19
(4.297036569602966, 11.204864179994388, 12.079176931590974, 9.60523852657005, 11.122035256410259, 6.601283212560387), # 20
(4.318352238805971, 11.257914311079544, 12.102738793380466, 9.625061820652174, 11.152564903846153, 6.597684986413044), # 21
(4.338685972100283, 11.307145173961842, 12.124003499000287, 9.642973429951692, 11.180166666666667, 6.5940236714975855), # 22
(4.358008551604722, 11.352444281793632, 12.142927935768354, 9.658932140700484, 11.204773237179488, 6.590299584842997), # 23
(4.3762907594381035, 11.393699147727272, 12.159468991002571, 9.672896739130437, 11.226317307692307, 6.586513043478261), # 24
(4.393503377719247, 11.430797284915124, 12.173583552020853, 9.684826011473431, 11.244731570512819, 6.582664364432368), # 25
(4.409617188566969, 11.46362620650954, 12.185228506141103, 9.694678743961353, 11.259948717948719, 6.5787538647343), # 26
(4.424602974100088, 11.492073425662877, 12.194360740681233, 9.702413722826089, 11.271901442307694, 6.574781861413045), # 27
(4.438431516437421, 11.516026455527497, 12.200937142959157, 9.707989734299519, 11.280522435897437, 6.570748671497586), # 28
(4.4510735976977855, 11.535372809255753, 12.204914600292774, 9.711365564613528, 11.285744391025641, 6.566654612016909), # 29
(4.4625, 11.55, 12.20625, 9.7125, 11.287500000000001, 6.562500000000001), # 30
(4.47319183983376, 11.56215031960227, 12.205248928140096, 9.712295118464054, 11.286861125886526, 6.556726763701484), # 31
(4.4836528452685425, 11.574140056818184, 12.202274033816424, 9.711684477124184, 11.28495815602837, 6.547834661835751), # 32
(4.493887715792838, 11.585967720170455, 12.197367798913046, 9.710674080882354, 11.281811569148937, 6.535910757121439), # 33
(4.503901150895141, 11.597631818181819, 12.19057270531401, 9.709269934640524, 11.277441843971632, 6.521042112277196), # 34
(4.513697850063939, 11.609130859374998, 12.181931234903383, 9.707478043300654, 11.27186945921986, 6.503315790021656), # 35
(4.523282512787724, 11.62046335227273, 12.171485869565219, 9.705304411764708, 11.265114893617023, 6.482818853073463), # 36
(4.532659838554988, 11.631627805397729, 12.159279091183576, 9.70275504493464, 11.257198625886524, 6.4596383641512585), # 37
(4.5418345268542195, 11.642622727272729, 12.145353381642513, 9.699835947712419, 11.248141134751775, 6.433861385973679), # 38
(4.5508112771739135, 11.653446626420456, 12.129751222826087, 9.696553125000001, 11.23796289893617, 6.40557498125937), # 39
(4.559594789002558, 11.664098011363638, 12.11251509661836, 9.692912581699348, 11.22668439716312, 6.37486621272697), # 40
(4.568189761828645, 11.674575390625, 12.093687484903382, 9.68892032271242, 11.214326108156028, 6.34182214309512), # 41
(4.576600895140665, 11.684877272727276, 12.07331086956522, 9.684582352941177, 11.2009085106383, 6.3065298350824595), # 42
(4.584832888427111, 11.69500216619318, 12.051427732487923, 9.679904677287583, 11.186452083333334, 6.26907635140763), # 43
(4.592890441176471, 11.704948579545455, 12.028080555555556, 9.674893300653595, 11.17097730496454, 6.229548754789272), # 44
(4.600778252877237, 11.714715021306818, 12.003311820652177, 9.669554227941177, 11.15450465425532, 6.188034107946028), # 45
(4.6085010230179035, 11.724300000000003, 11.97716400966184, 9.663893464052288, 11.137054609929079, 6.144619473596536), # 46
(4.616063451086957, 11.733702024147728, 11.9496796044686, 9.65791701388889, 11.118647650709221, 6.099391914459438), # 47
(4.623470236572891, 11.742919602272728, 11.920901086956523, 9.651630882352942, 11.099304255319149, 6.052438493253375), # 48
(4.630726078964194, 11.751951242897727, 11.890870939009663, 9.645041074346407, 11.079044902482272, 6.003846272696985), # 49
(4.6378356777493615, 11.760795454545454, 11.85963164251208, 9.638153594771243, 11.057890070921987, 5.953702315508913), # 50
(4.6448037324168805, 11.769450745738636, 11.827225679347826, 9.630974448529413, 11.035860239361703, 5.902093684407797), # 51
(4.651634942455243, 11.777915625, 11.793695531400965, 9.623509640522876, 11.012975886524824, 5.849107442112278), # 52
(4.658334007352941, 11.786188600852274, 11.759083680555555, 9.615765175653596, 10.989257491134753, 5.794830651340996), # 53
(4.6649056265984665, 11.79426818181818, 11.723432608695653, 9.60774705882353, 10.964725531914894, 5.739350374812594), # 54
(4.671354499680307, 11.802152876420456, 11.686784797705313, 9.599461294934642, 10.939400487588653, 5.682753675245711), # 55
(4.677685326086957, 11.809841193181818, 11.649182729468599, 9.59091388888889, 10.913302836879433, 5.625127615358988), # 56
(4.683902805306906, 11.817331640625003, 11.610668885869565, 9.582110845588236, 10.886453058510638, 5.566559257871065), # 57
(4.690011636828645, 11.824622727272727, 11.57128574879227, 9.573058169934642, 10.858871631205675, 5.507135665500583), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(6, 13, 5, 2, 1, 0, 6, 12, 8, 2, 4, 0), # 0
(10, 19, 9, 5, 5, 0, 12, 23, 10, 6, 5, 0), # 1
(11, 25, 13, 9, 6, 0, 20, 31, 17, 12, 6, 0), # 2
(15, 33, 23, 13, 10, 0, 24, 40, 20, 14, 8, 0), # 3
(18, 39, 28, 18, 10, 0, 33, 51, 23, 24, 10, 0), # 4
(21, 50, 40, 20, 12, 0, 43, 57, 30, 31, 12, 0), # 5
(22, 58, 49, 23, 14, 0, 51, 69, 30, 36, 16, 0), # 6
(24, 69, 54, 34, 15, 0, 56, 77, 35, 42, 19, 0), # 7
(30, 74, 63, 37, 19, 0, 68, 82, 39, 47, 22, 0), # 8
(32, 82, 68, 40, 23, 0, 70, 88, 47, 49, 25, 0), # 9
(36, 94, 72, 45, 24, 0, 74, 97, 50, 53, 27, 0), # 10
(41, 108, 77, 50, 29, 0, 82, 100, 58, 53, 28, 0), # 11
(43, 123, 82, 51, 31, 0, 92, 107, 61, 57, 28, 0), # 12
(49, 128, 85, 54, 32, 0, 96, 113, 68, 61, 32, 0), # 13
(58, 138, 92, 57, 37, 0, 106, 122, 76, 64, 33, 0), # 14
(62, 150, 102, 63, 40, 0, 115, 126, 84, 67, 35, 0), # 15
(67, 160, 110, 69, 44, 0, 118, 136, 88, 75, 40, 0), # 16
(71, 170, 117, 73, 46, 0, 129, 143, 90, 77, 41, 0), # 17
(78, 181, 128, 77, 46, 0, 130, 155, 95, 82, 41, 0), # 18
(81, 192, 134, 79, 46, 0, 136, 164, 101, 86, 43, 0), # 19
(85, 203, 138, 82, 47, 0, 143, 174, 103, 89, 43, 0), # 20
(90, 207, 147, 85, 50, 0, 153, 179, 111, 93, 46, 0), # 21
(91, 214, 153, 87, 51, 0, 163, 184, 112, 97, 46, 0), # 22
(94, 227, 162, 88, 56, 0, 168, 193, 121, 107, 49, 0), # 23
(99, 234, 171, 92, 56, 0, 173, 199, 130, 112, 51, 0), # 24
(108, 244, 179, 96, 56, 0, 181, 207, 133, 114, 52, 0), # 25
(116, 247, 182, 100, 57, 0, 186, 216, 138, 118, 53, 0), # 26
(119, 259, 188, 102, 59, 0, 196, 227, 141, 120, 57, 0), # 27
(127, 268, 197, 106, 60, 0, 198, 236, 143, 128, 63, 0), # 28
(131, 274, 208, 112, 63, 0, 208, 244, 145, 135, 64, 0), # 29
(134, 284, 213, 118, 66, 0, 213, 253, 149, 141, 65, 0), # 30
(135, 292, 220, 121, 69, 0, 218, 263, 154, 145, 66, 0), # 31
(142, 303, 224, 125, 76, 0, 222, 272, 163, 149, 70, 0), # 32
(149, 315, 230, 128, 76, 0, 226, 285, 169, 154, 72, 0), # 33
(151, 330, 238, 134, 78, 0, 233, 295, 172, 157, 74, 0), # 34
(158, 336, 246, 140, 79, 0, 236, 304, 178, 161, 75, 0), # 35
(159, 341, 256, 144, 81, 0, 246, 313, 182, 165, 77, 0), # 36
(162, 352, 263, 149, 82, 0, 252, 317, 190, 176, 77, 0), # 37
(164, 358, 272, 153, 83, 0, 255, 323, 197, 178, 79, 0), # 38
(167, 370, 280, 154, 84, 0, 261, 329, 201, 186, 83, 0), # 39
(169, 379, 292, 160, 86, 0, 268, 339, 206, 189, 86, 0), # 40
(172, 390, 296, 169, 86, 0, 273, 346, 211, 194, 87, 0), # 41
(176, 404, 303, 171, 89, 0, 279, 356, 214, 201, 90, 0), # 42
(180, 409, 312, 176, 91, 0, 284, 360, 220, 205, 91, 0), # 43
(184, 418, 317, 179, 96, 0, 296, 372, 225, 207, 97, 0), # 44
(189, 425, 326, 183, 98, 0, 303, 379, 229, 209, 97, 0), # 45
(196, 433, 336, 186, 99, 0, 309, 389, 232, 214, 103, 0), # 46
(201, 442, 341, 194, 100, 0, 316, 398, 236, 218, 106, 0), # 47
(204, 448, 352, 200, 100, 0, 324, 410, 241, 221, 111, 0), # 48
(207, 455, 358, 205, 104, 0, 326, 417, 245, 221, 116, 0), # 49
(210, 466, 365, 212, 106, 0, 330, 423, 249, 223, 121, 0), # 50
(213, 480, 373, 220, 107, 0, 340, 427, 255, 228, 125, 0), # 51
(218, 488, 384, 224, 112, 0, 344, 435, 263, 233, 130, 0), # 52
(223, 498, 390, 230, 118, 0, 349, 448, 271, 238, 133, 0), # 53
(227, 507, 396, 235, 118, 0, 350, 459, 278, 242, 134, 0), # 54
(230, 519, 407, 238, 120, 0, 358, 465, 280, 245, 134, 0), # 55
(233, 532, 416, 242, 124, 0, 360, 473, 288, 249, 138, 0), # 56
(239, 538, 419, 246, 126, 0, 371, 483, 297, 254, 140, 0), # 57
(244, 547, 426, 251, 127, 0, 378, 490, 302, 260, 142, 0), # 58
(244, 547, 426, 251, 127, 0, 378, 490, 302, 260, 142, 0), # 59
)
passenger_arriving_rate = (
(3.7095121817383676, 7.612035984848484, 6.715158258354756, 3.5483152173913037, 2.000048076923077, 0.0, 6.659510869565219, 8.000192307692307, 5.322472826086956, 4.476772172236504, 1.903008996212121, 0.0), # 0
(3.7443308140669203, 7.696686590558361, 6.751429051520996, 3.5680760567632848, 2.0150386217948717, 0.0, 6.657240994867151, 8.060154487179487, 5.352114085144928, 4.500952701013997, 1.9241716476395903, 0.0), # 1
(3.7787518681104277, 7.780081571268237, 6.786838903170522, 3.58740193236715, 2.0297128205128203, 0.0, 6.654901690821256, 8.118851282051281, 5.381102898550726, 4.524559268780347, 1.9450203928170593, 0.0), # 2
(3.8127461259877085, 7.8621309375, 6.821361945694087, 3.6062763586956517, 2.044057211538462, 0.0, 6.652493274456523, 8.176228846153847, 5.409414538043478, 4.547574630462725, 1.965532734375, 0.0), # 3
(3.8462843698175795, 7.942744699775533, 6.854972311482434, 3.624682850241546, 2.0580583333333333, 0.0, 6.6500160628019325, 8.232233333333333, 5.437024275362319, 4.569981540988289, 1.9856861749438832, 0.0), # 4
(3.879337381718857, 8.021832868616723, 6.887644132926307, 3.6426049214975844, 2.0717027243589743, 0.0, 6.647470372886473, 8.286810897435897, 5.463907382246377, 4.591762755284204, 2.005458217154181, 0.0), # 5
(3.9118759438103607, 8.099305454545455, 6.919351542416455, 3.660026086956522, 2.084976923076923, 0.0, 6.644856521739131, 8.339907692307692, 5.490039130434783, 4.612901028277636, 2.0248263636363637, 0.0), # 6
(3.943870838210907, 8.175072468083613, 6.950068672343615, 3.6769298611111116, 2.0978674679487184, 0.0, 6.64217482638889, 8.391469871794873, 5.515394791666668, 4.633379114895743, 2.043768117020903, 0.0), # 7
(3.975292847039314, 8.249043919753085, 6.979769655098544, 3.693299758454106, 2.1103608974358976, 0.0, 6.639425603864735, 8.44144358974359, 5.5399496376811594, 4.653179770065696, 2.062260979938271, 0.0), # 8
(4.006112752414399, 8.321129820075758, 7.00842862307198, 3.709119293478261, 2.12244375, 0.0, 6.636609171195653, 8.489775, 5.563678940217391, 4.672285748714653, 2.0802824550189394, 0.0), # 9
(4.03630133645498, 8.391240179573513, 7.03601970865467, 3.724371980676329, 2.134102564102564, 0.0, 6.633725845410628, 8.536410256410257, 5.586557971014494, 4.690679805769779, 2.0978100448933783, 0.0), # 10
(4.065829381279876, 8.459285008768239, 7.06251704423736, 3.739041334541063, 2.145323878205128, 0.0, 6.630775943538648, 8.581295512820512, 5.608562001811595, 4.70834469615824, 2.1148212521920597, 0.0), # 11
(4.094667669007903, 8.525174318181818, 7.087894762210797, 3.7531108695652167, 2.156094230769231, 0.0, 6.627759782608695, 8.624376923076923, 5.6296663043478254, 4.725263174807198, 2.1312935795454546, 0.0), # 12
(4.122786981757876, 8.58881811833614, 7.112126994965724, 3.766564100241546, 2.1664001602564102, 0.0, 6.624677679649759, 8.665600641025641, 5.649846150362319, 4.741417996643816, 2.147204529584035, 0.0), # 13
(4.15015810164862, 8.650126419753088, 7.135187874892886, 3.779384541062801, 2.1762282051282047, 0.0, 6.621529951690821, 8.704912820512819, 5.669076811594202, 4.756791916595257, 2.162531604938272, 0.0), # 14
(4.1767518107989465, 8.709009232954545, 7.157051534383032, 3.7915557065217387, 2.1855649038461538, 0.0, 6.618316915760871, 8.742259615384615, 5.6873335597826085, 4.771367689588688, 2.177252308238636, 0.0), # 15
(4.202538891327675, 8.7653765684624, 7.177692105826908, 3.803061111111111, 2.194396794871795, 0.0, 6.61503888888889, 8.77758717948718, 5.7045916666666665, 4.785128070551272, 2.1913441421156, 0.0), # 16
(4.227490125353625, 8.81913843679854, 7.197083721615253, 3.8138842693236716, 2.202710416666667, 0.0, 6.611696188103866, 8.810841666666668, 5.720826403985508, 4.798055814410168, 2.204784609199635, 0.0), # 17
(4.25157629499561, 8.870204848484848, 7.215200514138818, 3.824008695652174, 2.2104923076923084, 0.0, 6.608289130434783, 8.841969230769234, 5.736013043478262, 4.810133676092545, 2.217551212121212, 0.0), # 18
(4.274768182372451, 8.918485814043208, 7.232016615788346, 3.8334179045893717, 2.2177290064102566, 0.0, 6.604818032910629, 8.870916025641026, 5.750126856884058, 4.8213444105255645, 2.229621453510802, 0.0), # 19
(4.297036569602966, 8.96389134399551, 7.247506158954584, 3.8420954106280196, 2.2244070512820517, 0.0, 6.601283212560387, 8.897628205128207, 5.76314311594203, 4.831670772636389, 2.2409728359988774, 0.0), # 20
(4.318352238805971, 9.006331448863634, 7.261643276028279, 3.8500247282608693, 2.2305129807692303, 0.0, 6.597684986413044, 8.922051923076921, 5.775037092391305, 4.841095517352186, 2.2515828622159084, 0.0), # 21
(4.338685972100283, 9.045716139169473, 7.274402099400172, 3.8571893719806765, 2.2360333333333333, 0.0, 6.5940236714975855, 8.944133333333333, 5.785784057971015, 4.849601399600115, 2.2614290347923682, 0.0), # 22
(4.358008551604722, 9.081955425434906, 7.285756761461012, 3.8635728562801934, 2.2409546474358972, 0.0, 6.590299584842997, 8.963818589743589, 5.79535928442029, 4.857171174307341, 2.2704888563587264, 0.0), # 23
(4.3762907594381035, 9.114959318181818, 7.295681394601543, 3.869158695652174, 2.2452634615384612, 0.0, 6.586513043478261, 8.981053846153845, 5.803738043478262, 4.863787596401028, 2.2787398295454544, 0.0), # 24
(4.393503377719247, 9.1446378279321, 7.304150131212511, 3.8739304045893723, 2.2489463141025636, 0.0, 6.582664364432368, 8.995785256410255, 5.810895606884059, 4.869433420808341, 2.286159456983025, 0.0), # 25
(4.409617188566969, 9.17090096520763, 7.311137103684661, 3.8778714975845405, 2.2519897435897436, 0.0, 6.5787538647343, 9.007958974358974, 5.816807246376811, 4.874091402456441, 2.2927252413019077, 0.0), # 26
(4.424602974100088, 9.193658740530301, 7.31661644440874, 3.880965489130435, 2.2543802884615385, 0.0, 6.574781861413045, 9.017521153846154, 5.821448233695653, 4.877744296272493, 2.2984146851325753, 0.0), # 27
(4.438431516437421, 9.212821164421996, 7.320562285775494, 3.8831958937198072, 2.256104487179487, 0.0, 6.570748671497586, 9.024417948717948, 5.824793840579711, 4.8803748571836625, 2.303205291105499, 0.0), # 28
(4.4510735976977855, 9.228298247404602, 7.322948760175664, 3.884546225845411, 2.257148878205128, 0.0, 6.566654612016909, 9.028595512820512, 5.826819338768117, 4.881965840117109, 2.3070745618511506, 0.0), # 29
(4.4625, 9.24, 7.32375, 3.885, 2.2575000000000003, 0.0, 6.562500000000001, 9.030000000000001, 5.8275, 4.8825, 2.31, 0.0), # 30
(4.47319183983376, 9.249720255681815, 7.323149356884057, 3.884918047385621, 2.257372225177305, 0.0, 6.556726763701484, 9.02948890070922, 5.827377071078432, 4.882099571256038, 2.312430063920454, 0.0), # 31
(4.4836528452685425, 9.259312045454546, 7.3213644202898545, 3.884673790849673, 2.2569916312056737, 0.0, 6.547834661835751, 9.027966524822695, 5.82701068627451, 4.880909613526569, 2.3148280113636366, 0.0), # 32
(4.493887715792838, 9.268774176136363, 7.3184206793478275, 3.8842696323529413, 2.2563623138297872, 0.0, 6.535910757121439, 9.025449255319149, 5.826404448529412, 4.878947119565218, 2.3171935440340907, 0.0), # 33
(4.503901150895141, 9.278105454545454, 7.314343623188405, 3.8837079738562093, 2.2554883687943263, 0.0, 6.521042112277196, 9.021953475177305, 5.825561960784314, 4.876229082125604, 2.3195263636363634, 0.0), # 34
(4.513697850063939, 9.287304687499997, 7.3091587409420296, 3.882991217320261, 2.2543738918439717, 0.0, 6.503315790021656, 9.017495567375887, 5.824486825980392, 4.872772493961353, 2.3218261718749993, 0.0), # 35
(4.523282512787724, 9.296370681818182, 7.302891521739131, 3.8821217647058828, 2.253022978723404, 0.0, 6.482818853073463, 9.012091914893617, 5.823182647058824, 4.868594347826087, 2.3240926704545455, 0.0), # 36
(4.532659838554988, 9.305302244318183, 7.295567454710145, 3.881102017973856, 2.2514397251773044, 0.0, 6.4596383641512585, 9.005758900709218, 5.821653026960784, 4.86371163647343, 2.3263255610795457, 0.0), # 37
(4.5418345268542195, 9.314098181818181, 7.287212028985508, 3.8799343790849674, 2.249628226950355, 0.0, 6.433861385973679, 8.99851290780142, 5.819901568627452, 4.858141352657005, 2.3285245454545453, 0.0), # 38
(4.5508112771739135, 9.322757301136363, 7.277850733695652, 3.87862125, 2.247592579787234, 0.0, 6.40557498125937, 8.990370319148935, 5.817931875, 4.8519004891304345, 2.330689325284091, 0.0), # 39
(4.559594789002558, 9.33127840909091, 7.267509057971015, 3.8771650326797387, 2.245336879432624, 0.0, 6.37486621272697, 8.981347517730496, 5.815747549019608, 4.845006038647344, 2.3328196022727274, 0.0), # 40
(4.568189761828645, 9.3396603125, 7.256212490942029, 3.8755681290849675, 2.2428652216312055, 0.0, 6.34182214309512, 8.971460886524822, 5.813352193627452, 4.837474993961353, 2.334915078125, 0.0), # 41
(4.576600895140665, 9.34790181818182, 7.2439865217391315, 3.8738329411764707, 2.2401817021276598, 0.0, 6.3065298350824595, 8.960726808510639, 5.810749411764706, 4.829324347826088, 2.336975454545455, 0.0), # 42
(4.584832888427111, 9.356001732954544, 7.230856639492753, 3.8719618709150327, 2.2372904166666667, 0.0, 6.26907635140763, 8.949161666666667, 5.80794280637255, 4.820571092995169, 2.339000433238636, 0.0), # 43
(4.592890441176471, 9.363958863636363, 7.216848333333333, 3.8699573202614377, 2.2341954609929076, 0.0, 6.229548754789272, 8.93678184397163, 5.804935980392157, 4.811232222222222, 2.3409897159090907, 0.0), # 44
(4.600778252877237, 9.371772017045453, 7.201987092391306, 3.8678216911764705, 2.230900930851064, 0.0, 6.188034107946028, 8.923603723404256, 5.801732536764706, 4.80132472826087, 2.3429430042613633, 0.0), # 45
(4.6085010230179035, 9.379440000000002, 7.186298405797103, 3.8655573856209147, 2.2274109219858156, 0.0, 6.144619473596536, 8.909643687943262, 5.798336078431372, 4.790865603864735, 2.3448600000000006, 0.0), # 46
(4.616063451086957, 9.386961619318182, 7.16980776268116, 3.8631668055555552, 2.223729530141844, 0.0, 6.099391914459438, 8.894918120567375, 5.794750208333333, 4.77987184178744, 2.3467404048295455, 0.0), # 47
(4.623470236572891, 9.394335681818182, 7.152540652173913, 3.8606523529411763, 2.21986085106383, 0.0, 6.052438493253375, 8.87944340425532, 5.790978529411765, 4.7683604347826085, 2.3485839204545456, 0.0), # 48
(4.630726078964194, 9.401560994318181, 7.134522563405797, 3.8580164297385626, 2.2158089804964543, 0.0, 6.003846272696985, 8.863235921985817, 5.787024644607844, 4.7563483756038645, 2.3503902485795454, 0.0), # 49
(4.6378356777493615, 9.408636363636361, 7.115778985507247, 3.8552614379084966, 2.211578014184397, 0.0, 5.953702315508913, 8.846312056737588, 5.782892156862745, 4.743852657004831, 2.3521590909090904, 0.0), # 50
(4.6448037324168805, 9.415560596590907, 7.096335407608696, 3.852389779411765, 2.2071720478723407, 0.0, 5.902093684407797, 8.828688191489363, 5.778584669117648, 4.73089027173913, 2.353890149147727, 0.0), # 51
(4.651634942455243, 9.4223325, 7.0762173188405795, 3.84940385620915, 2.2025951773049646, 0.0, 5.849107442112278, 8.810380709219858, 5.774105784313726, 4.717478212560386, 2.355583125, 0.0), # 52
(4.658334007352941, 9.428950880681818, 7.055450208333333, 3.8463060702614382, 2.1978514982269504, 0.0, 5.794830651340996, 8.791405992907801, 5.769459105392158, 4.703633472222222, 2.3572377201704544, 0.0), # 53
(4.6649056265984665, 9.435414545454544, 7.034059565217391, 3.843098823529412, 2.192945106382979, 0.0, 5.739350374812594, 8.771780425531915, 5.764648235294119, 4.689373043478261, 2.358853636363636, 0.0), # 54
(4.671354499680307, 9.441722301136364, 7.012070878623187, 3.8397845179738566, 2.1878800975177306, 0.0, 5.682753675245711, 8.751520390070922, 5.759676776960785, 4.674713919082125, 2.360430575284091, 0.0), # 55
(4.677685326086957, 9.447872954545453, 6.989509637681159, 3.8363655555555556, 2.1826605673758865, 0.0, 5.625127615358988, 8.730642269503546, 5.754548333333334, 4.65967309178744, 2.361968238636363, 0.0), # 56
(4.683902805306906, 9.453865312500001, 6.966401331521738, 3.832844338235294, 2.1772906117021273, 0.0, 5.566559257871065, 8.70916244680851, 5.749266507352941, 4.644267554347826, 2.3634663281250003, 0.0), # 57
(4.690011636828645, 9.459698181818181, 6.942771449275362, 3.8292232679738563, 2.1717743262411346, 0.0, 5.507135665500583, 8.687097304964539, 5.743834901960785, 4.628514299516908, 2.3649245454545453, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
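Each row of `passenger_allighting_rate` above is a 12-tuple of per-column alighting probabilities (0, 1/6, or 1). A minimal sketch of how one such row might be consumed — the binomial draw and the `onboard` count are illustrative assumptions, not something this file specifies:

```python
import numpy as np

# Hypothetical consumer of one rate row: with `onboard` passengers and
# per-column alighting probability p, draw how many passengers alight.
# The binomial model is an assumption for illustration only.
rng = np.random.default_rng(0)
rate_row = (0, 1/6, 1/6, 1/6, 1/6, 1, 0, 1/6, 1/6, 1/6, 1/6, 1)
onboard = 30
alighting = [int(rng.binomial(onboard, p)) for p in rate_row]
# Columns with p == 0 keep everyone on board; p == 1 empties the vehicle.
```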
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
57, # 1
)
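The two values above can reconstruct the documented random streams with NumPy's `SeedSequence` spawning, per the linked page. A sketch, assuming (the file does not show it) that each stored index selects one spawned child of the parent sequence:

```python
import numpy as np

# Rebuild the parent seed sequence from the stored entropy, spawn enough
# children to cover the recorded indices, and make one generator per child.
entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 57)

parent = np.random.SeedSequence(entropy)
children = parent.spawn(max(child_seed_index) + 1)
rngs = [np.random.default_rng(children[i]) for i in child_seed_index]
```

Because spawning is deterministic for a fixed entropy, repeating this reconstruction yields bit-identical streams.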
| 113.116418 | 212 | 0.729139 | 5,147 | 37,894 | 5.366038 | 0.2279 | 0.312828 | 0.247656 | 0.469242 | 0.328325 | 0.32789 | 0.32789 | 0.32789 | 0.32789 | 0.32789 | 0 | 0.819053 | 0.119122 | 37,894 | 334 | 213 | 113.45509 | 0.008358 | 0.031958 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ffccdf1e42e7cb2979fcbf5ac33bbf016662afdb | 2,365 | py | Python | example/src/tanh.py | SeanMabli/aiinpy | bd332fce454c489e236878c9da91bb86ec6dda14 | [
"MIT"
] | null | null | null | example/src/tanh.py | SeanMabli/aiinpy | bd332fce454c489e236878c9da91bb86ec6dda14 | [
"MIT"
] | null | null | null | example/src/tanh.py | SeanMabli/aiinpy | bd332fce454c489e236878c9da91bb86ec6dda14 | [
"MIT"
] | null | null | null | import numpy as np
class tanh:
    def __repr__(self):
        return 'tanh()'
    # def forwardone(self, input):
    #     return (np.exp(input) - np.exp(-input)) / (np.exp(input) + np.exp(-input))
    # def forwardtwo(self, input):
    #     return (np.exp(2 * input) - 1) / (np.exp(2 * input) + 1)
    # def forwardthree(self, input):
    #     a = np.exp(input)
    #     return (a - 1 / a) / (a + 1 / a)
    def forward(self, input):
        a = np.exp(2 * input)
        return (a - 1) / (a + 1)
    # def backwardone(self, input):
    #     return (4 * np.exp(2 * input)) / np.square(np.exp(2 * input) + 1)
    # def backwardtwo(self, input):
    #     return 4 / np.square(np.exp(input) + np.exp(-input))
    def backward(self, input):
        a = np.exp(2 * input)
        return (4 * a) / np.square(a + 1)
    # def backwardfour(self, input):
    #     a = np.exp(input)
    #     return 4 / np.square(a ** 2 + 1 / a)
# forward
# import timeit
# print(timeit.timeit('(np.exp(2 * np.random.uniform(-1, 1, (100, 100))) - 1) / (np.exp(2 * np.random.uniform(-1, 1, (100, 100))) + 1)', setup='import numpy as np', number=10000))
# print(timeit.timeit('(np.exp(np.random.uniform(-1, 1, (100, 100))) - np.exp(-np.random.uniform(-1, 1, (100, 100)))) / (np.exp(np.random.uniform(-1, 1, (100, 100))) + np.exp(-np.random.uniform(-1, 1, (100, 100))))', setup='import numpy as np', number=10000))
# print(timeit.timeit('a = np.exp(np.random.uniform(-1, 1, (100, 100))); (a - 1 / a) / (a + 1 / a)', setup='import numpy as np', number=10000))
# print(timeit.timeit('a = np.exp(2 * np.random.uniform(-1, 1, (100, 100))); (a - 1) / (a + 1)', setup='import numpy as np', number=10000))
# backward
# import timeit
# print(timeit.timeit('(4 * np.exp(2 * np.random.uniform(-1, 1, (100, 100)))) / np.square(np.exp(2 * np.random.uniform(-1, 1, (100, 100))) + 1)', setup='import numpy as np', number=10000)) # 2.7722288
# print(timeit.timeit('4 / np.square(np.exp(np.random.uniform(-1, 1, (100, 100))) + np.exp(-np.random.uniform(-1, 1, (100, 100))))', setup='import numpy as np', number=10000)) # 2.6440156000000004
# print(timeit.timeit('a = np.exp(2 * np.random.uniform(-1, 1, (100, 100))); (4 * a) / np.square(a + 1)', setup='import numpy as np', number=10000)) # 1.4674546
# print(timeit.timeit('a = np.exp(np.random.uniform(-1, 1, (100, 100))); 4 / np.square(a ** 2 + 1 / a)', setup='import numpy as np', number=10000)) # 1.567536 | 50.319149 | 259 | 0.592389 | 397 | 2,365 | 3.518892 | 0.093199 | 0.100215 | 0.150322 | 0.160344 | 0.857552 | 0.724409 | 0.682892 | 0.610594 | 0.566213 | 0.465999 | 0 | 0.118913 | 0.175053 | 2,365 | 47 | 260 | 50.319149 | 0.59713 | 0.8537 | 0 | 0.2 | 0 | 0 | 0.018987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0.1 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
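The optimized forms benchmarked above can also be sanity-checked against NumPy's built-ins. This standalone sketch mirrors `forward`/`backward` as free functions and verifies they equal `np.tanh` and its derivative 1 - tanh²(x):

```python
import numpy as np

def tanh_forward(x):
    # same algebra as tanh.forward: (e^{2x} - 1) / (e^{2x} + 1)
    a = np.exp(2 * x)
    return (a - 1) / (a + 1)

def tanh_backward(x):
    # same algebra as tanh.backward: 4 e^{2x} / (e^{2x} + 1)^2
    a = np.exp(2 * x)
    return (4 * a) / np.square(a + 1)

x = np.linspace(-3.0, 3.0, 101)
assert np.allclose(tanh_forward(x), np.tanh(x))
assert np.allclose(tanh_backward(x), 1.0 - np.tanh(x) ** 2)
```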
fff19b3fe4761c51ef3bfe334574e33a631d5904 | 38 | py | Python | api/models/cunik/__init__.py | ShengliangD/Cunik-engine | 1951d20629fc3cbbe0047ee04b438bbe91adc44c | [
"MIT"
] | 31 | 2018-05-17T01:54:46.000Z | 2019-08-22T02:55:58.000Z | api/models/cunik/__init__.py | ShengliangD/Cunik-engine | 1951d20629fc3cbbe0047ee04b438bbe91adc44c | [
"MIT"
] | 1 | 2018-07-06T11:33:31.000Z | 2018-07-17T10:08:15.000Z | api/models/cunik/__init__.py | ShengliangD/Cunik-engine | 1951d20629fc3cbbe0047ee04b438bbe91adc44c | [
"MIT"
] | 7 | 2018-06-08T08:35:11.000Z | 2018-07-07T09:16:32.000Z | from .cunik import Cunik, CunikConfig
| 19 | 37 | 0.815789 | 5 | 38 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 38 | 1 | 38 | 38 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
08052c95b481f24ead57c5bda3faac000851809b | 35 | py | Python | lawyertools/it/compenso_avvocati/__init__.py | lawengineeringsystems/lawyertools | b71098aaf893819e890ccc7940a14bdabb7845fa | [
"MIT"
] | 2 | 2021-11-20T18:36:45.000Z | 2021-11-25T16:56:00.000Z | lawyertools/it/compenso_avvocati/__init__.py | lawengineeringsystems/lawyertools | b71098aaf893819e890ccc7940a14bdabb7845fa | [
"MIT"
] | null | null | null | lawyertools/it/compenso_avvocati/__init__.py | lawengineeringsystems/lawyertools | b71098aaf893819e890ccc7940a14bdabb7845fa | [
"MIT"
] | null | null | null | from .main import calcola_compenso
| 17.5 | 34 | 0.857143 | 5 | 35 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
084a8b5e0a7e040414ce29b712e890350d9bde5a | 17,740 | py | Python | sdk/python/pulumi_pagerduty/service.py | stmcallister/pulumi-pagerduty | b2816e19b3243bba646e8e095b475299ad6a7b8f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_pagerduty/service.py | stmcallister/pulumi-pagerduty | b2816e19b3243bba646e8e095b475299ad6a7b8f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_pagerduty/service.py | stmcallister/pulumi-pagerduty | b2816e19b3243bba646e8e095b475299ad6a7b8f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from . import utilities, tables
class Service(pulumi.CustomResource):
    acknowledgement_timeout: pulumi.Output[str]
    """
    Time in seconds that an incident changes to the Triggered State after being Acknowledged. Disabled if set to the `"null"` string.
    """
    alert_creation: pulumi.Output[str]
    """
Must be one of two values. PagerDuty receives events from your monitoring systems and can then create incidents in different ways. Value "create_incidents" is default: events will create an incident that cannot be merged. Value "create_alerts_and_incidents" is the alternative: events will create an alert and then add it to a new incident, these incidents can be merged.
    """
    alert_grouping: pulumi.Output[str]
    """
Defines how alerts on this service will be automatically grouped into incidents. Note that the alert grouping features are available only on certain plans. If not set, each alert will create a separate incident; If value is set to `time`: All alerts within a specified duration will be grouped into the same incident. This duration is set in the `alert_grouping_timeout` setting (described below). Available on Standard, Enterprise, and Event Intelligence plans; If value is set to `intelligent` - Alerts will be intelligently grouped based on a machine learning model that looks at the alert summary, timing, and the history of grouped alerts. Available on Enterprise and Event Intelligence plan.
    """
    alert_grouping_timeout: pulumi.Output[float]
    """
    The duration in minutes within which to automatically group incoming alerts. This setting applies only when `alert_grouping` is set to `time`. To continue grouping alerts until the incident is resolved, set this value to `0`.
    """
    auto_resolve_timeout: pulumi.Output[str]
    """
    Time in seconds that an incident is automatically resolved if left open for that long. Disabled if set to the `"null"` string.
    """
    created_at: pulumi.Output[str]
    description: pulumi.Output[str]
    escalation_policy: pulumi.Output[str]
    """
    The escalation policy used by this service.
    """
    html_url: pulumi.Output[str]
    incident_urgency_rule: pulumi.Output[dict]
    last_incident_timestamp: pulumi.Output[str]
    name: pulumi.Output[str]
    """
    The name of the service.
    """
    scheduled_actions: pulumi.Output[list]
    status: pulumi.Output[str]
    support_hours: pulumi.Output[dict]
    def __init__(__self__, resource_name, opts=None, acknowledgement_timeout=None, alert_creation=None, alert_grouping=None, alert_grouping_timeout=None, auto_resolve_timeout=None, description=None, escalation_policy=None, incident_urgency_rule=None, name=None, scheduled_actions=None, support_hours=None, __props__=None, __name__=None, __opts__=None):
        """
A [service](https://v2.developer.pagerduty.com/v2/page/api-reference#!/Services/get_services) represents something you monitor (like a web service, email service, or database service). It is a container for related incidents that associates them with escalation policies.
## Example Usage
```python
import pulumi
import pulumi_pagerduty as pagerduty
example_user = pagerduty.User("exampleUser",
email="125.greenholt.earline@graham.name",
teams=[pagerduty_team["example"]["id"]])
foo = pagerduty.EscalationPolicy("foo",
num_loops=2,
rules=[{
"escalationDelayInMinutes": 10,
"target": [{
"id": example_user.id,
"type": "user",
}],
}])
example_service = pagerduty.Service("exampleService",
acknowledgement_timeout=600,
alert_creation="create_incidents",
auto_resolve_timeout=14400,
escalation_policy=pagerduty_escalation_policy["example"]["id"])
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] acknowledgement_timeout: Time in seconds that an incident changes to the Triggered State after being Acknowledged. Disabled if set to the `"null"` string.
:param pulumi.Input[str] alert_creation: Must be one of two values. PagerDuty receives events from your monitoring systems and can then create incidents in different ways. Value "create_incidents" is default: events will create an incident that cannot be merged. Value "create_alerts_and_incidents" is the alternative: events will create an alert and then add it to a new incident, these incidents can be merged.
:param pulumi.Input[str] alert_grouping: Defines how alerts on this service will be automatically grouped into incidents. Note that the alert grouping features are available only on certain plans. If not set, each alert will create a separate incident; If value is set to `time`: All alerts within a specified duration will be grouped into the same incident. This duration is set in the `alert_grouping_timeout` setting (described below). Available on Standard, Enterprise, and Event Intelligence plans; If value is set to `intelligent` - Alerts will be intelligently grouped based on a machine learning model that looks at the alert summary, timing, and the history of grouped alerts. Available on Enterprise and Event Intelligence plan.
:param pulumi.Input[float] alert_grouping_timeout: The duration in minutes within which to automatically group incoming alerts. This setting applies only when `alert_grouping` is set to `time`. To continue grouping alerts until the incident is resolved, set this value to `0`.
:param pulumi.Input[str] auto_resolve_timeout: Time in seconds that an incident is automatically resolved if left open for that long. Disabled if set to the `"null"` string.
:param pulumi.Input[str] escalation_policy: The escalation policy used by this service.
:param pulumi.Input[str] name: The name of the service.
The **incident_urgency_rule** object supports the following:
* `duringSupportHours` (`pulumi.Input[dict]`) - Incidents' urgency during support hours.
* `type` (`pulumi.Input[str]`) - The type of scheduled action. Currently, this must be set to `urgency_change`.
* `urgency` (`pulumi.Input[str]`) - The urgency: `low` Notify responders (does not escalate), `high` (follows escalation rules) or `severity_based` Set's the urgency of the incident based on the severity set by the triggering monitoring tool.
* `outsideSupportHours` (`pulumi.Input[dict]`) - Incidents' urgency outside of support hours.
* `type` (`pulumi.Input[str]`) - The type of scheduled action. Currently, this must be set to `urgency_change`.
* `urgency` (`pulumi.Input[str]`) - The urgency: `low` Notify responders (does not escalate), `high` (follows escalation rules) or `severity_based` Set's the urgency of the incident based on the severity set by the triggering monitoring tool.
* `type` (`pulumi.Input[str]`) - The type of scheduled action. Currently, this must be set to `urgency_change`.
* `urgency` (`pulumi.Input[str]`) - The urgency: `low` Notify responders (does not escalate), `high` (follows escalation rules) or `severity_based` Set's the urgency of the incident based on the severity set by the triggering monitoring tool.
The **scheduled_actions** object supports the following:
* `ats` (`pulumi.Input[list]`) - A block representing when the scheduled action will occur.
* `name` (`pulumi.Input[str]`) - Designates either the start or the end of the scheduled action. Can be `support_hours_start` or `support_hours_end`.
* `type` (`pulumi.Input[str]`) - The type of time specification. Currently, this must be set to `named_time`.
* `toUrgency` (`pulumi.Input[str]`) - The urgency to change to: `low` (does not escalate), or `high` (follows escalation rules).
* `type` (`pulumi.Input[str]`) - The type of scheduled action. Currently, this must be set to `urgency_change`.
The **support_hours** object supports the following:
* `daysOfWeeks` (`pulumi.Input[list]`) - Array of days of week as integers. `1` to `7`, `1` being
Monday and `7` being Sunday.
* `end_time` (`pulumi.Input[str]`) - The support hours' ending time of day.
* `start_time` (`pulumi.Input[str]`) - The support hours' starting time of day.
* `time_zone` (`pulumi.Input[str]`) - The time zone for the support hours.
* `type` (`pulumi.Input[str]`) - The type of scheduled action. Currently, this must be set to `urgency_change`.
        """
        if __name__ is not None:
            warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
            resource_name = __name__
        if __opts__ is not None:
            warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
            opts = __opts__
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = dict()
            __props__['acknowledgement_timeout'] = acknowledgement_timeout
            __props__['alert_creation'] = alert_creation
            __props__['alert_grouping'] = alert_grouping
            __props__['alert_grouping_timeout'] = alert_grouping_timeout
            __props__['auto_resolve_timeout'] = auto_resolve_timeout
            __props__['description'] = description
            if escalation_policy is None:
                raise TypeError("Missing required property 'escalation_policy'")
            __props__['escalation_policy'] = escalation_policy
            __props__['incident_urgency_rule'] = incident_urgency_rule
            __props__['name'] = name
            __props__['scheduled_actions'] = scheduled_actions
            __props__['support_hours'] = support_hours
            __props__['created_at'] = None
            __props__['html_url'] = None
            __props__['last_incident_timestamp'] = None
            __props__['status'] = None
        super(Service, __self__).__init__(
            'pagerduty:index/service:Service',
            resource_name,
            __props__,
            opts)
    @staticmethod
    def get(resource_name, id, opts=None, acknowledgement_timeout=None, alert_creation=None, alert_grouping=None, alert_grouping_timeout=None, auto_resolve_timeout=None, created_at=None, description=None, escalation_policy=None, html_url=None, incident_urgency_rule=None, last_incident_timestamp=None, name=None, scheduled_actions=None, status=None, support_hours=None):
        """
Get an existing Service resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] acknowledgement_timeout: Time in seconds that an incident changes to the Triggered State after being Acknowledged. Disabled if set to the `"null"` string.
:param pulumi.Input[str] alert_creation: Must be one of two values. PagerDuty receives events from your monitoring systems and can then create incidents in different ways. Value "create_incidents" is default: events will create an incident that cannot be merged. Value "create_alerts_and_incidents" is the alternative: events will create an alert and then add it to a new incident, these incidents can be merged.
:param pulumi.Input[str] alert_grouping: Defines how alerts on this service will be automatically grouped into incidents. Note that the alert grouping features are available only on certain plans. If not set, each alert will create a separate incident; If value is set to `time`: All alerts within a specified duration will be grouped into the same incident. This duration is set in the `alert_grouping_timeout` setting (described below). Available on Standard, Enterprise, and Event Intelligence plans; If value is set to `intelligent` - Alerts will be intelligently grouped based on a machine learning model that looks at the alert summary, timing, and the history of grouped alerts. Available on Enterprise and Event Intelligence plan.
:param pulumi.Input[float] alert_grouping_timeout: The duration in minutes within which to automatically group incoming alerts. This setting applies only when `alert_grouping` is set to `time`. To continue grouping alerts until the incident is resolved, set this value to `0`.
:param pulumi.Input[str] auto_resolve_timeout: Time in seconds that an incident is automatically resolved if left open for that long. Disabled if set to the `"null"` string.
:param pulumi.Input[str] escalation_policy: The escalation policy used by this service.
:param pulumi.Input[str] name: The name of the service.
The **incident_urgency_rule** object supports the following:
* `type` (`pulumi.Input[str]`) - The type of incident urgency: `constant` or `use_support_hours`.
* `urgency` (`pulumi.Input[str]`) - The urgency: `low` (notifies responders but does not escalate), `high` (follows escalation rules), or `severity_based` (sets the urgency of the incident based on the severity reported by the triggering monitoring tool).
* `duringSupportHours` (`pulumi.Input[dict]`) - Incidents' urgency during support hours.
  * `type` (`pulumi.Input[str]`) - The type of this urgency setting; this is `constant`.
  * `urgency` (`pulumi.Input[str]`) - The urgency: `low`, `high`, or `severity_based`, as above.
* `outsideSupportHours` (`pulumi.Input[dict]`) - Incidents' urgency outside of support hours.
  * `type` (`pulumi.Input[str]`) - The type of this urgency setting; this is `constant`.
  * `urgency` (`pulumi.Input[str]`) - The urgency: `low`, `high`, or `severity_based`, as above.
The **scheduled_actions** object supports the following:
* `ats` (`pulumi.Input[list]`) - A block representing when the scheduled action will occur.
  * `name` (`pulumi.Input[str]`) - Designates either the start or the end of the scheduled action. Can be `support_hours_start` or `support_hours_end`.
  * `type` (`pulumi.Input[str]`) - The type of time specification. Currently, this must be set to `named_time`.
* `toUrgency` (`pulumi.Input[str]`) - The urgency to change to: `low` (does not escalate), or `high` (follows escalation rules).
* `type` (`pulumi.Input[str]`) - The type of scheduled action. Currently, this must be set to `urgency_change`.
The **support_hours** object supports the following:
* `daysOfWeeks` (`pulumi.Input[list]`) - Array of days of week as integers, `1` to `7`, `1` being Monday and `7` being Sunday.
* `end_time` (`pulumi.Input[str]`) - The support hours' ending time of day.
* `start_time` (`pulumi.Input[str]`) - The support hours' starting time of day.
* `time_zone` (`pulumi.Input[str]`) - The time zone for the support hours.
* `type` (`pulumi.Input[str]`) - The type of support hours. Can be `fixed_time_per_day`.
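For illustration, the nested objects above can be sketched as plain dicts. The specific values here (time zone, hours, urgencies) are hypothetical examples, and key casing follows the names listed above:

```python
# Hypothetical values illustrating the nested shapes documented above.
incident_urgency_rule = {
    "type": "use_support_hours",
    "duringSupportHours": {"type": "constant", "urgency": "high"},
    "outsideSupportHours": {"type": "constant", "urgency": "low"},
}
support_hours = {
    "type": "fixed_time_per_day",
    "time_zone": "America/Lima",
    "start_time": "09:00:00",
    "end_time": "17:00:00",
    "daysOfWeeks": [1, 2, 3, 4, 5],  # Monday through Friday
}
scheduled_actions = [{
    "type": "urgency_change",
    "toUrgency": "high",
    "ats": [{"type": "named_time", "name": "support_hours_start"}],
}]
```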
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["acknowledgement_timeout"] = acknowledgement_timeout
__props__["alert_creation"] = alert_creation
__props__["alert_grouping"] = alert_grouping
__props__["alert_grouping_timeout"] = alert_grouping_timeout
__props__["auto_resolve_timeout"] = auto_resolve_timeout
__props__["created_at"] = created_at
__props__["description"] = description
__props__["escalation_policy"] = escalation_policy
__props__["html_url"] = html_url
__props__["incident_urgency_rule"] = incident_urgency_rule
__props__["last_incident_timestamp"] = last_incident_timestamp
__props__["name"] = name
__props__["scheduled_actions"] = scheduled_actions
__props__["status"] = status
__props__["support_hours"] = support_hours
return Service(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
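The translate helpers above fall back to the original property name when the table has no entry. A minimal sketch of that lookup, with a hypothetical table entry (the real tables are generated by the SDK):

```python
# Hypothetical mapping entry; the real table lives in the SDK's tables module.
_CAMEL_TO_SNAKE_CASE_TABLE = {"alertCreation": "alert_creation"}

def translate_output_property(prop):
    # `get(prop) or prop` returns the mapped name when one exists,
    # otherwise the property name unchanged.
    return _CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

print(translate_output_property("alertCreation"))  # alert_creation
print(translate_output_property("status"))         # status
```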
| 74.852321 | 746 | 0.706877 | 2,329 | 17,740 | 5.20395 | 0.133963 | 0.04538 | 0.046205 | 0.036469 | 0.785396 | 0.766502 | 0.744967 | 0.741997 | 0.728878 | 0.719802 | 0 | 0.001994 | 0.208455 | 17,740 | 236 | 747 | 75.169492 | 0.861131 | 0.591432 | 0 | 0.023529 | 1 | 0 | 0.162032 | 0.043696 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047059 | false | 0.011765 | 0.070588 | 0.023529 | 0.341176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f2864a6287615e1d8b6932aa036bae192962ce42 | 269 | py | Python | test/unit/twentytwenty/test_day2.py | Launchpaddy/adventofcode-1 | 1104b981ca2e8f65a0349cfee1d63bd2aa365d28 | [
"MIT"
] | null | null | null | test/unit/twentytwenty/test_day2.py | Launchpaddy/adventofcode-1 | 1104b981ca2e8f65a0349cfee1d63bd2aa365d28 | [
"MIT"
] | null | null | null | test/unit/twentytwenty/test_day2.py | Launchpaddy/adventofcode-1 | 1104b981ca2e8f65a0349cfee1d63bd2aa365d28 | [
"MIT"
] | null | null | null | from adventofcode.twentytwenty.day2 import (
    iterate_password_data,
)


def test_iterate_password_data():
    valid_password_count_1, valid_password_count_2 = iterate_password_data()
    assert valid_password_count_1 == 638
    assert valid_password_count_2 == 699
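For context, `iterate_password_data` presumably applies the two Advent of Code 2020 day-2 password policies over the puzzle input; the line format and rule names below are assumptions based on that puzzle, sketched on its sample lines:

```python
import re

LINE_RE = re.compile(r"(\d+)-(\d+) (\w): (\w+)")

def is_valid_count_policy(line):
    # Policy 1: the letter must occur between lo and hi times.
    lo, hi, letter, password = LINE_RE.match(line).groups()
    return int(lo) <= password.count(letter) <= int(hi)

def is_valid_position_policy(line):
    # Policy 2: exactly one of positions lo and hi (1-indexed) holds the letter.
    lo, hi, letter, password = LINE_RE.match(line).groups()
    return (password[int(lo) - 1] == letter) != (password[int(hi) - 1] == letter)

lines = ["1-3 a: abcde", "1-3 b: cdefg", "2-9 c: ccccccccc"]
print(sum(map(is_valid_count_policy, lines)))     # 2
print(sum(map(is_valid_position_policy, lines)))  # 1
```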
| 26.9 | 76 | 0.799257 | 36 | 269 | 5.444444 | 0.472222 | 0.265306 | 0.367347 | 0.193878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 0.141264 | 269 | 9 | 77 | 29.888889 | 0.800866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.142857 | true | 0.714286 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
f2b391b553bb5662d6009a74a4e762a391642e84 | 136 | py | Python | Hash Table/leetcode205_Isomorphic Strings.py | aurora314156/leetcode | 7fb6096f9af255e46c69d83254b58b1558e082d8 | [
"MIT"
] | null | null | null | Hash Table/leetcode205_Isomorphic Strings.py | aurora314156/leetcode | 7fb6096f9af255e46c69d83254b58b1558e082d8 | [
"MIT"
] | null | null | null | Hash Table/leetcode205_Isomorphic Strings.py | aurora314156/leetcode | 7fb6096f9af255e46c69d83254b58b1558e082d8 | [
"MIT"
] | null | null | null | class Solution:
    def isIsomorphic(self, s: str, t: str) -> bool:
        return [len(set(s)),len(set(t))] == 2 * [len(set(zip(s,t)))] | 45.333333 | 68 | 0.558824 | 23 | 136 | 3.304348 | 0.608696 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009174 | 0.198529 | 136 | 3 | 68 | 45.333333 | 0.688073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
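The one-liner works because `s` and `t` are isomorphic exactly when the number of distinct characters in each string equals the number of distinct `(s, t)` character pairs; spelled out as a standalone function:

```python
def is_isomorphic(s, t):
    # If the distinct pair count matched neither distinct-char count,
    # some character would map to (or receive) two different partners.
    pairs = len(set(zip(s, t)))
    return pairs == len(set(s)) == len(set(t))

print(is_isomorphic("egg", "add"))   # True
print(is_isomorphic("foo", "bar"))   # False
```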
4b71961aae9799a7cae766ebfcf4eb4df8701051 | 44 | py | Python | tools/__init__.py | Anselmoo/NeuralNetwork-Toobox | e8f7d9437506806cc4383b6becaa20abf6e8e46f | [
"MIT"
] | null | null | null | tools/__init__.py | Anselmoo/NeuralNetwork-Toobox | e8f7d9437506806cc4383b6becaa20abf6e8e46f | [
"MIT"
] | null | null | null | tools/__init__.py | Anselmoo/NeuralNetwork-Toobox | e8f7d9437506806cc4383b6becaa20abf6e8e46f | [
"MIT"
] | null | null | null | from .logic_generator import LogicGenerator
| 22 | 43 | 0.886364 | 5 | 44 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4b7c6aa17084496b3639bd66289ee15172a7d9a1 | 84 | py | Python | CodeWars/8 Kyu/Character Frequency.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/Character Frequency.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | CodeWars/8 Kyu/Character Frequency.py | anubhab-code/Competitive-Programming | de28cb7d44044b9e7d8bdb475da61e37c018ac35 | [
"MIT"
] | null | null | null | from collections import Counter
def char_freq(message):
    return Counter(message) | 21 | 31 | 0.797619 | 11 | 84 | 6 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 84 | 4 | 32 | 21 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
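A quick usage example of the `Counter`-based helper above — it maps each character of the message to its frequency:

```python
from collections import Counter

def char_freq(message):
    return Counter(message)

freq = char_freq("hello")
print(freq["l"])            # 2
print(freq.most_common(1))  # [('l', 2)]
```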
4b9d25b0789716d9563e2e71097060f60077e97c | 47 | py | Python | gamestonk_terminal/economy/__init__.py | clairvoyant/GamestonkTerminal | 7b40cfe61b32782e36f5de8a08d075532a08c294 | [
"MIT"
] | 1 | 2021-12-04T13:21:40.000Z | 2021-12-04T13:21:40.000Z | gamestonk_terminal/economy/__init__.py | clairvoyant/GamestonkTerminal | 7b40cfe61b32782e36f5de8a08d075532a08c294 | [
"MIT"
] | null | null | null | gamestonk_terminal/economy/__init__.py | clairvoyant/GamestonkTerminal | 7b40cfe61b32782e36f5de8a08d075532a08c294 | [
"MIT"
] | null | null | null | from . import economy_controller # noqa: F401
| 23.5 | 46 | 0.765957 | 6 | 47 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.170213 | 47 | 1 | 47 | 47 | 0.820513 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
299b801b48096585f3012cf7d8a0ef55742a3a66 | 90 | py | Python | modl/datasets/tests/test_adhd.py | lcharlin/modl | 24605d7f6b1184a56cec2899232c7661b8337a32 | [
"BSD-3-Clause"
] | null | null | null | modl/datasets/tests/test_adhd.py | lcharlin/modl | 24605d7f6b1184a56cec2899232c7661b8337a32 | [
"BSD-3-Clause"
] | null | null | null | modl/datasets/tests/test_adhd.py | lcharlin/modl | 24605d7f6b1184a56cec2899232c7661b8337a32 | [
"BSD-3-Clause"
] | null | null | null | from modl.datasets import fetch_adhd
#
#
# def test_fetch_adhd():
# res = fetch_adhd() | 18 | 36 | 0.7 | 13 | 90 | 4.538462 | 0.692308 | 0.457627 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 90 | 5 | 37 | 18 | 0.797297 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29a575df88a75fca469e5ff0eeadefd6ad367c23 | 3,213 | py | Python | board/migrations/0001_initial.py | kuna/iidxranktable | 7182baaf25b76a8f696e96947d85d4b1d074ba4a | [
"MIT"
] | 6 | 2015-12-25T02:01:37.000Z | 2020-09-02T02:39:41.000Z | board/migrations/0001_initial.py | kuna/iidxranktable | 7182baaf25b76a8f696e96947d85d4b1d074ba4a | [
"MIT"
] | null | null | null | board/migrations/0001_initial.py | kuna/iidxranktable | 7182baaf25b76a8f696e96947d85d4b1d074ba4a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2016-07-09 12:07
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='BannedUser',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('ip', models.CharField(max_length=100)),
],
),
migrations.CreateModel(
name='BannedWord',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('word', models.CharField(max_length=100)),
],
),
migrations.CreateModel(
name='Board',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100)),
('permission', models.IntegerField(default=0)),
],
),
migrations.CreateModel(
name='BoardComment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('time', models.DateTimeField(default=django.utils.timezone.now)),
('text', models.CharField(max_length=1000)),
('writer', models.CharField(max_length=100)),
('tag', models.CharField(default='', max_length=100)),
('ip', models.CharField(max_length=100)),
('attr', models.IntegerField(default=0)),
('password', models.CharField(max_length=100)),
('parent', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='board.BoardComment')),
],
),
migrations.CreateModel(
name='BoardPost',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('time', models.DateTimeField(default=django.utils.timezone.now)),
('title', models.CharField(max_length=100)),
('text', models.CharField(max_length=1000)),
('writer', models.CharField(max_length=100)),
('tag', models.CharField(default='', max_length=100)),
('ip', models.CharField(max_length=100)),
('attr', models.IntegerField(default=0)),
('password', models.CharField(max_length=100)),
('permission', models.IntegerField(default=0)),
('board', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='board.Board')),
],
),
migrations.AddField(
model_name='boardcomment',
name='post',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='board.BoardPost'),
),
]
| 42.276316 | 139 | 0.568316 | 314 | 3,213 | 5.694268 | 0.251592 | 0.11745 | 0.120805 | 0.161074 | 0.738814 | 0.738814 | 0.717002 | 0.717002 | 0.658837 | 0.658837 | 0 | 0.027875 | 0.285403 | 3,213 | 75 | 140 | 42.84 | 0.750871 | 0.020853 | 0 | 0.671642 | 1 | 0 | 0.074769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.029851 | 0.059701 | 0 | 0.119403 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
29a5df22c0a436b3b4b071df4aa27e3d1ab2a553 | 1,599 | py | Python | test/pyaz/billing/invoice/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | null | null | null | test/pyaz/billing/invoice/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | 9 | 2021-09-24T16:37:24.000Z | 2021-12-24T00:39:19.000Z | test/pyaz/billing/invoice/__init__.py | bigdatamoore/py-az-cli | 54383a4ee7cc77556f6183e74e992eec95b28e01 | [
"MIT"
] | null | null | null | import json, subprocess
from ... pyaz_utils import get_cli_name, get_params
def list(period_start_date, period_end_date, account_name=None, profile_name=None):
    params = get_params(locals())
    command = "az billing invoice list " + params
    print(command)
    output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout = output.stdout.decode("utf-8")
    stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)


def show(name, account_name=None, by_subscription=None):
    params = get_params(locals())
    command = "az billing invoice show " + params
    print(command)
    output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout = output.stdout.decode("utf-8")
    stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)


def download(account_name=None, invoice_name=None, download_token=None, download_urls=None):
    params = get_params(locals())
    command = "az billing invoice download " + params
    print(command)
    output = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout = output.stdout.decode("utf-8")
    stderr = output.stderr.decode("utf-8")
    if stdout:
        print(stdout)
        return json.loads(stdout)
    else:
        print(stderr)
        raise Exception(stderr)
| 34.76087 | 96 | 0.675422 | 199 | 1,599 | 5.331658 | 0.236181 | 0.079171 | 0.05655 | 0.053723 | 0.774741 | 0.774741 | 0.774741 | 0.774741 | 0.774741 | 0.63902 | 0 | 0.004762 | 0.212008 | 1,599 | 45 | 97 | 35.533333 | 0.837302 | 0 | 0 | 0.804878 | 0 | 0 | 0.066291 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.04878 | 0 | 0.195122 | 0.219512 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
29f09ae59ffbf2344b6b05d5557ebe86b0c72323 | 35 | py | Python | alvinchow_backend/test/factory/__init__.py | alvinchow86/python-backend-template | 46c07d733d68bc8682afd8510a17bc2aa360c606 | [
"MIT"
] | 6 | 2021-01-07T00:20:49.000Z | 2022-01-13T04:53:12.000Z | alvinchow_backend/api/graphql/types/__init__.py | alvinchow86/python-backend-template | 46c07d733d68bc8682afd8510a17bc2aa360c606 | [
"MIT"
] | 4 | 2021-01-06T22:07:43.000Z | 2021-06-02T01:52:41.000Z | alvinchow_backend/api/graphql/types/__init__.py | alvinchow86/python-backend-template | 46c07d733d68bc8682afd8510a17bc2aa360c606 | [
"MIT"
] | 1 | 2021-11-09T07:46:44.000Z | 2021-11-09T07:46:44.000Z | # flake8: noqa
from .user import *
| 11.666667 | 19 | 0.685714 | 5 | 35 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.2 | 35 | 2 | 20 | 17.5 | 0.821429 | 0.342857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4b15b8d089c46fc0a3050665dcfaf8eac4208a50 | 113 | py | Python | server/app/models/__init__.py | abhijithvijayan/corona-screening-portal | fc2e17926131fdc76ff62f850e1cfce4e05b50dc | [
"MIT"
] | 1 | 2020-03-15T16:54:42.000Z | 2020-03-15T16:54:42.000Z | server/app/models/__init__.py | abhijithvijayan/corona-screening-portal | fc2e17926131fdc76ff62f850e1cfce4e05b50dc | [
"MIT"
] | 5 | 2020-03-15T17:06:51.000Z | 2020-03-15T17:18:13.000Z | server/app/models/__init__.py | abhijithvijayan/corona-screening-portal | fc2e17926131fdc76ff62f850e1cfce4e05b50dc | [
"MIT"
] | null | null | null | """import all models here"""
from app.models.Person import Person
from app.models.Association import Association
| 28.25 | 46 | 0.80531 | 16 | 113 | 5.6875 | 0.5 | 0.153846 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106195 | 113 | 3 | 47 | 37.666667 | 0.90099 | 0.19469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d99fdb6fec0cef5c5d58b189e8684bbca4132736 | 41 | py | Python | web_parsers/extractors/xml/__init__.py | invanalabs/web-parser | dca9c6354317ec7187f46fd270092372b39f63f8 | [
"Apache-2.0"
] | 1 | 2019-10-06T23:11:32.000Z | 2019-10-06T23:11:32.000Z | web_parsers/extractors/xml/__init__.py | crawlerflow/extraction-engine | dca9c6354317ec7187f46fd270092372b39f63f8 | [
"Apache-2.0"
] | 2 | 2020-03-11T09:33:03.000Z | 2020-03-18T21:12:28.000Z | web_parsers/extractors/xml/__init__.py | crawlerflow/extraction-engine | dca9c6354317ec7187f46fd270092372b39f63f8 | [
"Apache-2.0"
] | null | null | null | from .content import CustomDataExtractor
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d9bab50c373a39495ccfd980740392490cc1e835 | 96,317 | py | Python | core/domain/stats_services_test.py | ReshuKumari/oppia | cb89b633275b3d0b2d02e0d22e0c472d8b8da0e1 | [
"Apache-2.0"
] | null | null | null | core/domain/stats_services_test.py | ReshuKumari/oppia | cb89b633275b3d0b2d02e0d22e0c472d8b8da0e1 | [
"Apache-2.0"
] | null | null | null | core/domain/stats_services_test.py | ReshuKumari/oppia | cb89b633275b3d0b2d02e0d22e0c472d8b8da0e1 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2014 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for core.domain.stats_services."""
from __future__ import absolute_import
from __future__ import unicode_literals
import os
from core.domain import event_services
from core.domain import exp_domain
from core.domain import exp_fetchers
from core.domain import exp_services
from core.domain import question_services
from core.domain import stats_domain
from core.domain import stats_services
from core.platform import models
from core.tests import test_utils
import feconf
import python_utils
import utils
(stats_models,) = models.Registry.import_models([models.NAMES.statistics])
class StatisticsServicesTests(test_utils.GenericTestBase):
"""Test the helper functions and methods defined in the stats_services
module.
"""
def setUp(self):
super(StatisticsServicesTests, self).setUp()
self.exp_id = 'exp_id1'
self.exp_version = 1
self.stats_model_id = (
stats_models.ExplorationStatsModel.create(
'exp_id1', 1, 0, 0, 0, 0, 0, 0, {}))
stats_models.ExplorationIssuesModel.create(
self.exp_id, self.exp_version, [])
self.playthrough_id = stats_models.PlaythroughModel.create(
'exp_id1', 1, 'EarlyQuit', {}, [])
def test_get_exploration_stats_with_new_exp_id(self):
exploration_stats = stats_services.get_exploration_stats(
'new_exp_id', 1)
self.assertEqual(exploration_stats.exp_version, 1)
self.assertEqual(exploration_stats.exp_id, 'new_exp_id')
self.assertEqual(exploration_stats.state_stats_mapping, {})
def test_update_stats_method(self):
"""Test the update_stats method."""
exploration_stats = stats_services.get_exploration_stats_by_id(
'exp_id1', 1)
exploration_stats.state_stats_mapping = {
'Home': stats_domain.StateStats.create_default(),
'🙂': stats_domain.StateStats.create_default(),
}
stats_services.save_stats_model(exploration_stats)
# Pass in exploration start event to stats model created in setup
# function.
aggregated_stats = {
'num_starts': 1,
'num_actual_starts': 1,
'num_completions': 1,
'state_stats_mapping': {
'Home': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
'🙂': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
}
}
stats_services.update_stats('exp_id1', 1, aggregated_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
'exp_id1', 1)
self.assertEqual(exploration_stats.num_starts_v2, 1)
self.assertEqual(exploration_stats.num_actual_starts_v2, 1)
self.assertEqual(exploration_stats.num_completions_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'Home'].total_hit_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'Home'].first_hit_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'Home'].total_answers_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'Home'].useful_feedback_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'Home'].num_completions_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'Home'].num_times_solution_viewed_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'🙂'].total_hit_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'🙂'].first_hit_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'🙂'].total_answers_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'🙂'].useful_feedback_count_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'🙂'].num_completions_v2, 1)
self.assertEqual(
exploration_stats.state_stats_mapping[
'🙂'].num_times_solution_viewed_v2, 1)
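The assertions above check that each aggregated counter lands on the corresponding `_v2` field. The merge they observe can be sketched as follows (the `_v2` suffixing is the behavior the test asserts, not Oppia's actual implementation):

```python
def apply_aggregated_state_stats(state_stats, incoming):
    # Add each incoming counter onto the matching *_v2 counter,
    # mirroring what the assertions above observe.
    for key, value in incoming.items():
        v2_key = key + "_v2"
        state_stats[v2_key] = state_stats.get(v2_key, 0) + value
    return state_stats

stats = apply_aggregated_state_stats(
    {}, {"total_hit_count": 1, "num_completions": 1})
print(stats)  # {'total_hit_count_v2': 1, 'num_completions_v2': 1}
```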
def test_update_stats_throws_if_model_is_missing_entirely(self):
"""Test the update_stats method."""
aggregated_stats = {
'num_starts': 1,
'num_actual_starts': 1,
'num_completions': 1,
'state_stats_mapping': {
'Home': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
}
}
with self.assertRaisesRegexp(Exception, 'id="nullid.1" does not exist'):
stats_services.update_stats('nullid', 1, aggregated_stats)
def test_update_stats_throws_if_model_is_missing_state_stats(self):
"""Test the update_stats method."""
exploration_stats = stats_services.get_exploration_stats_by_id(
'exp_id1', 1)
exploration_stats.state_stats_mapping = {
'Home': stats_domain.StateStats.create_default()
}
stats_services.save_stats_model(exploration_stats)
aggregated_stats = {
'num_starts': 1,
'num_actual_starts': 1,
'num_completions': 1,
'state_stats_mapping': {
'Home': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
'Away from Home': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
}
}
with self.assertRaisesRegexp(Exception, 'does not exist'):
stats_services.update_stats('exp_id1', 1, aggregated_stats)
def test_update_stats_returns_if_state_name_is_undefined(self):
"""Tests that the update_stats returns if a state name is undefined."""
aggregated_stats = {
'num_starts': 1,
'num_actual_starts': 1,
'num_completions': 1,
'state_stats_mapping': {
'undefined': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
}
}
exploration_stats = stats_services.get_exploration_stats_by_id(
'exp_id1', 1)
self.assertEqual(exploration_stats.state_stats_mapping, {})
stats_services.update_stats('exp_id1', 1, aggregated_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
'exp_id1', 1)
self.assertEqual(exploration_stats.state_stats_mapping, {})
def test_update_stats_throws_if_model_is_using_unicode_state_name(self):
"""Test the update_stats method."""
exploration_stats = stats_services.get_exploration_stats_by_id(
'exp_id1', 1)
exploration_stats.state_stats_mapping = {
'Home': stats_domain.StateStats.create_default(),
# No stats for '🙂'.
}
stats_services.save_stats_model(exploration_stats)
aggregated_stats = {
'num_starts': 1,
'num_actual_starts': 1,
'num_completions': 1,
'state_stats_mapping': {
'Home': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
'🙂': {
'total_hit_count': 1,
'first_hit_count': 1,
'total_answers_count': 1,
'useful_feedback_count': 1,
'num_times_solution_viewed': 1,
'num_completions': 1
},
}
}
with self.assertRaisesRegexp(Exception, 'does not exist'):
stats_services.update_stats('exp_id1', 1, aggregated_stats)
def test_calls_to_stats_methods(self):
"""Test that calls are being made to the
get_stats_for_new_exp_version and
get_stats_for_new_exploration methods when an exploration is
created or updated.
"""
# Initialize call counters.
stats_for_new_exploration_log = test_utils.CallCounter(
stats_services.get_stats_for_new_exploration)
stats_for_new_exp_version_log = test_utils.CallCounter(
stats_services.get_stats_for_new_exp_version)
# Create exploration object in datastore.
exp_id = 'exp_id'
test_exp_filepath = os.path.join(
feconf.TESTS_DATA_DIR, 'string_classifier_test.yaml')
yaml_content = utils.get_file_contents(test_exp_filepath)
assets_list = []
with self.swap(
stats_services, 'get_stats_for_new_exploration',
stats_for_new_exploration_log):
exp_services.save_new_exploration_from_yaml_and_assets(
feconf.SYSTEM_COMMITTER_ID, yaml_content, exp_id,
assets_list)
# Now, the stats creation for new explorations method will be called
# once and stats creation for new exploration version won't be called.
self.assertEqual(stats_for_new_exploration_log.times_called, 1)
self.assertEqual(stats_for_new_exp_version_log.times_called, 0)
# Update exploration by adding a state.
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state'
})]
with self.swap(
stats_services, 'get_stats_for_new_exp_version',
stats_for_new_exp_version_log):
exp_services.update_exploration(
feconf.SYSTEM_COMMITTER_ID, exp_id, change_list, '')
# Now, the stats creation for new explorations method will be called
# once and stats creation for new exploration version will also be
# called once.
self.assertEqual(stats_for_new_exploration_log.times_called, 1)
self.assertEqual(stats_for_new_exp_version_log.times_called, 1)
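`test_utils.CallCounter` wraps a function and counts invocations; a minimal sketch of such a wrapper (the real implementation lives in `core.tests.test_utils`):

```python
class CallCounter:
    """Callable wrapper that records how many times it was invoked."""

    def __init__(self, func):
        self.func = func
        self.times_called = 0

    def __call__(self, *args, **kwargs):
        # Count the call, then delegate to the wrapped function.
        self.times_called += 1
        return self.func(*args, **kwargs)

counter = CallCounter(len)
counter("abc")
counter("de")
print(counter.times_called)  # 2
```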
def test_get_stats_for_new_exploration(self):
"""Test the get_stats_for_new_exploration method."""
# Create exploration object in datastore.
exp_id = 'exp_id'
test_exp_filepath = os.path.join(
feconf.TESTS_DATA_DIR, 'string_classifier_test.yaml')
yaml_content = utils.get_file_contents(test_exp_filepath)
assets_list = []
exp_services.save_new_exploration_from_yaml_and_assets(
feconf.SYSTEM_COMMITTER_ID, yaml_content, exp_id,
assets_list)
exploration = exp_fetchers.get_exploration_by_id(exp_id)
exploration_stats = stats_services.get_stats_for_new_exploration(
exploration.id, exploration.version, exploration.states)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_id, exp_id)
self.assertEqual(exploration_stats.exp_version, 1)
self.assertEqual(exploration_stats.num_starts_v1, 0)
self.assertEqual(exploration_stats.num_starts_v2, 0)
self.assertEqual(exploration_stats.num_actual_starts_v1, 0)
self.assertEqual(exploration_stats.num_actual_starts_v2, 0)
self.assertEqual(exploration_stats.num_completions_v1, 0)
self.assertEqual(exploration_stats.num_completions_v2, 0)
self.assertEqual(
list(exploration_stats.state_stats_mapping.keys()), ['Home', 'End'])
def test_revert_exploration_creates_stats(self):
"""Test that the revert_exploration method creates stats
for the newest exploration version.
"""
# Create exploration object in datastore.
exp_id = 'exp_id'
test_exp_filepath = os.path.join(
feconf.TESTS_DATA_DIR, 'string_classifier_test.yaml')
yaml_content = utils.get_file_contents(test_exp_filepath)
assets_list = []
exp_services.save_new_exploration_from_yaml_and_assets(
feconf.SYSTEM_COMMITTER_ID, yaml_content, exp_id,
assets_list)
exploration = exp_fetchers.get_exploration_by_id(exp_id)
# Save stats for version 1.
exploration_stats = stats_services.get_exploration_stats_by_id(
exp_id, 1)
exploration_stats.num_starts_v2 = 3
exploration_stats.num_actual_starts_v2 = 2
exploration_stats.num_completions_v2 = 1
stats_services.save_stats_model(exploration_stats)
# Update exploration to next version 2 and its stats.
exp_services.update_exploration(
'committer_id_v2', exploration.id, [], 'Updated')
exploration_stats = stats_services.get_exploration_stats_by_id(
exp_id, 2)
exploration_stats.num_starts_v2 = 4
exploration_stats.num_actual_starts_v2 = 3
exploration_stats.num_completions_v2 = 2
stats_services.save_stats_model(exploration_stats)
# Revert to an older version.
exp_services.revert_exploration(
'committer_id_v3', exp_id, 2, 1)
exploration_stats = stats_services.get_exploration_stats_by_id(
exp_id, 3
)
self.assertIsNotNone(exploration_stats)
self.assertEqual(exploration_stats.num_starts_v2, 3)
self.assertEqual(exploration_stats.num_actual_starts_v2, 2)
self.assertEqual(exploration_stats.num_completions_v2, 1)
def test_get_stats_for_new_exp_version(self):
"""Test the get_stats_for_new_exp_version method."""
# Create exploration object in datastore.
exp_id = 'exp_id'
test_exp_filepath = os.path.join(
feconf.TESTS_DATA_DIR, 'string_classifier_test.yaml')
yaml_content = utils.get_file_contents(test_exp_filepath)
assets_list = []
exp_services.save_new_exploration_from_yaml_and_assets(
feconf.SYSTEM_COMMITTER_ID, yaml_content, exp_id,
assets_list)
exploration = exp_fetchers.get_exploration_by_id(exp_id)
# Test addition of states.
exploration.add_states(['New state', 'New state 2'])
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state',
}), exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state 2'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_id, exp_id)
self.assertEqual(exploration_stats.exp_version, 2)
self.assertEqual(exploration_stats.num_actual_starts_v2, 0)
self.assertEqual(exploration_stats.num_completions_v2, 0)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()), set([
'Home', 'New state 2', 'End', 'New state']))
self.assertEqual(
exploration_stats.state_stats_mapping['New state'].to_dict(),
stats_domain.StateStats.create_default().to_dict())
self.assertEqual(
exploration_stats.state_stats_mapping['New state 2'].to_dict(),
stats_domain.StateStats.create_default().to_dict())
# Test renaming of states.
exploration.rename_state('New state 2', 'Renamed state')
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 2',
'new_state_name': 'Renamed state'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_version, 3)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()), set([
'Home', 'End', 'Renamed state', 'New state']))
# Test deletion of states.
exploration.delete_state('New state')
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'New state'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_version, 4)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()),
set(['Home', 'Renamed state', 'End']))
# Test addition, renaming and deletion of states.
exploration.add_states(['New state 2'])
exploration.rename_state('New state 2', 'Renamed state 2')
exploration.delete_state('Renamed state 2')
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state 2'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 2',
'new_state_name': 'Renamed state 2'
}), exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'Renamed state 2'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_version, 5)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()),
set(['Home', 'End', 'Renamed state']))
# Test addition and multiple renames.
exploration.add_states(['New state 2'])
exploration.rename_state('New state 2', 'New state 3')
exploration.rename_state('New state 3', 'New state 4')
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state 2',
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 2',
'new_state_name': 'New state 3'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 3',
'new_state_name': 'New state 4'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_version, 6)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()),
set(['Home', 'New state 4', 'Renamed state', 'End']))
# Set some values for the stats in the ExplorationStatsModel
# instance.
exploration_stats_model = stats_models.ExplorationStatsModel.get_model(
exploration.id, exploration.version)
exploration_stats_model.num_actual_starts_v2 = 5
exploration_stats_model.num_completions_v2 = 2
exploration_stats_model.state_stats_mapping['New state 4'][
'total_answers_count_v2'] = 12
exploration_stats_model.state_stats_mapping['Home'][
'total_hit_count_v2'] = 8
exploration_stats_model.state_stats_mapping['Renamed state'][
'first_hit_count_v2'] = 2
exploration_stats_model.state_stats_mapping['End'][
'useful_feedback_count_v2'] = 4
exploration_stats_model.update_timestamps()
exploration_stats_model.put()
# Test deletion, addition and renaming of states.
exploration.delete_state('New state 4')
exploration.add_states(['New state'])
exploration.rename_state('New state', 'New state 4')
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'delete_state',
'state_name': 'New state 4'
}), exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state',
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state',
'new_state_name': 'New state 4'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_version, 7)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()),
set(['Home', 'New state 4', 'Renamed state', 'End']))
# Test the values of the stats carried over from the last version.
self.assertEqual(exploration_stats.num_actual_starts_v2, 5)
self.assertEqual(exploration_stats.num_completions_v2, 2)
self.assertEqual(
exploration_stats.state_stats_mapping['Home'].total_hit_count_v2, 8)
self.assertEqual(
exploration_stats.state_stats_mapping[
'Renamed state'].first_hit_count_v2, 2)
self.assertEqual(
exploration_stats.state_stats_mapping[
'End'].useful_feedback_count_v2, 4)
# State 'New state 4' has been deleted and recreated, so it should
# now contain default values for stats instead of the values it
# contained in the last version.
self.assertEqual(
exploration_stats.state_stats_mapping[
'New state 4'].total_answers_count_v2, 0)
# Test reverts.
exploration.version += 1
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
None, 5)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_version, 8)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()),
set(['Home', 'Renamed state', 'End']))
self.assertEqual(exploration_stats.num_actual_starts_v2, 0)
self.assertEqual(exploration_stats.num_completions_v2, 0)
# Test state name swaps.
exploration.add_states(['New state 5', 'New state 6'])
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state 5'
}), exp_domain.ExplorationChange({
'cmd': 'add_state',
'state_name': 'New state 6'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration.rename_state('New state 5', 'New state 7')
exploration.rename_state('New state 6', 'New state 5')
exploration.rename_state('New state 7', 'New state 6')
exploration.version += 1
change_list = [exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 5',
'new_state_name': 'New state 7'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 6',
'new_state_name': 'New state 5'
}), exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'New state 7',
'new_state_name': 'New state 6'
})]
exp_versions_diff = exp_domain.ExplorationVersionsDiff(change_list)
exploration_stats = stats_services.get_stats_for_new_exp_version(
exploration.id, exploration.version, exploration.states,
exp_versions_diff, None)
stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
exploration.id, exploration.version)
self.assertEqual(exploration_stats.exp_version, 10)
self.assertEqual(
set(exploration_stats.state_stats_mapping.keys()),
set(['End', 'Home', 'New state 6', 'New state 5', 'Renamed state']))

def test_get_exploration_stats_from_model(self):
"""Test the get_exploration_stats_from_model method."""
model = stats_models.ExplorationStatsModel.get(self.stats_model_id)
exploration_stats = stats_services.get_exploration_stats_from_model(
model)
self.assertEqual(exploration_stats.exp_id, 'exp_id1')
self.assertEqual(exploration_stats.exp_version, 1)
self.assertEqual(exploration_stats.num_starts_v1, 0)
self.assertEqual(exploration_stats.num_starts_v2, 0)
self.assertEqual(exploration_stats.num_actual_starts_v1, 0)
self.assertEqual(exploration_stats.num_actual_starts_v2, 0)
self.assertEqual(exploration_stats.num_completions_v1, 0)
self.assertEqual(exploration_stats.num_completions_v2, 0)
self.assertEqual(exploration_stats.state_stats_mapping, {})

def test_get_playthrough_from_model(self):
"""Test the get_playthrough_from_model method."""
model = stats_models.PlaythroughModel.get(self.playthrough_id)
playthrough = stats_services.get_playthrough_from_model(model)
self.assertEqual(playthrough.exp_id, 'exp_id1')
self.assertEqual(playthrough.exp_version, 1)
self.assertEqual(playthrough.issue_type, 'EarlyQuit')
self.assertEqual(playthrough.issue_customization_args, {})
self.assertEqual(playthrough.actions, [])

def test_get_exploration_stats_by_id(self):
"""Test the get_exploration_stats_by_id method."""
exploration_stats = stats_services.get_exploration_stats_by_id(
self.exp_id, self.exp_version)
self.assertEqual(exploration_stats.exp_id, 'exp_id1')
self.assertEqual(exploration_stats.exp_version, 1)
self.assertEqual(exploration_stats.num_starts_v1, 0)
self.assertEqual(exploration_stats.num_starts_v2, 0)
self.assertEqual(exploration_stats.num_actual_starts_v1, 0)
self.assertEqual(exploration_stats.num_actual_starts_v2, 0)
self.assertEqual(exploration_stats.num_completions_v1, 0)
self.assertEqual(exploration_stats.num_completions_v2, 0)
self.assertEqual(exploration_stats.state_stats_mapping, {})

def test_create_stats_model(self):
"""Test the create_stats_model method."""
exploration_stats = stats_services.get_exploration_stats_by_id(
self.exp_id, self.exp_version)
exploration_stats.exp_version += 1
model_id = stats_services.create_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
self.exp_id, self.exp_version + 1)
self.assertEqual(exploration_stats.exp_id, 'exp_id1')
self.assertEqual(exploration_stats.exp_version, 2)
self.assertEqual(exploration_stats.num_starts_v1, 0)
self.assertEqual(exploration_stats.num_starts_v2, 0)
self.assertEqual(exploration_stats.num_actual_starts_v1, 0)
self.assertEqual(exploration_stats.num_actual_starts_v2, 0)
self.assertEqual(exploration_stats.num_completions_v1, 0)
self.assertEqual(exploration_stats.num_completions_v2, 0)
self.assertEqual(exploration_stats.state_stats_mapping, {})
# Test the create method with a different state_stats_mapping.
exploration_stats.state_stats_mapping = {
'Home': stats_domain.StateStats.create_default()
}
exploration_stats.exp_version += 1
model_id = stats_services.create_stats_model(exploration_stats)
model = stats_models.ExplorationStatsModel.get(model_id)
self.assertEqual(model.exp_id, 'exp_id1')
self.assertEqual(model.exp_version, 3)
self.assertEqual(exploration_stats.num_starts_v1, 0)
self.assertEqual(exploration_stats.num_starts_v2, 0)
self.assertEqual(exploration_stats.num_actual_starts_v1, 0)
self.assertEqual(exploration_stats.num_actual_starts_v2, 0)
self.assertEqual(exploration_stats.num_completions_v1, 0)
self.assertEqual(exploration_stats.num_completions_v2, 0)
self.assertEqual(
model.state_stats_mapping, {
'Home': {
'total_answers_count_v1': 0,
'total_answers_count_v2': 0,
'useful_feedback_count_v1': 0,
'useful_feedback_count_v2': 0,
'total_hit_count_v1': 0,
'total_hit_count_v2': 0,
'first_hit_count_v1': 0,
'first_hit_count_v2': 0,
'num_times_solution_viewed_v2': 0,
'num_completions_v1': 0,
'num_completions_v2': 0
}
})

def test_save_stats_model(self):
"""Test the save_stats_model method."""
exploration_stats = stats_services.get_exploration_stats_by_id(
self.exp_id, self.exp_version)
exploration_stats.num_starts_v2 += 15
exploration_stats.num_actual_starts_v2 += 5
exploration_stats.num_completions_v2 += 2
stats_services.save_stats_model(exploration_stats)
exploration_stats = stats_services.get_exploration_stats_by_id(
self.exp_id, self.exp_version)
self.assertEqual(exploration_stats.num_starts_v2, 15)
self.assertEqual(exploration_stats.num_actual_starts_v2, 5)
self.assertEqual(exploration_stats.num_completions_v2, 2)

def test_get_exploration_stats_multi(self):
"""Test the get_exploration_stats_multi method."""
stats_models.ExplorationStatsModel.create(
'exp_id2', 2, 10, 0, 0, 0, 0, 0, {})
exp_version_references = [
exp_domain.ExpVersionReference(self.exp_id, self.exp_version),
exp_domain.ExpVersionReference('exp_id2', 2)]
exp_stats_list = stats_services.get_exploration_stats_multi(
exp_version_references)
self.assertEqual(len(exp_stats_list), 2)
self.assertEqual(exp_stats_list[0].exp_id, self.exp_id)
self.assertEqual(exp_stats_list[0].exp_version, self.exp_version)
self.assertEqual(exp_stats_list[1].exp_id, 'exp_id2')
self.assertEqual(exp_stats_list[1].exp_version, 2)

def test_get_multiple_exploration_stats_by_version_with_invalid_exp_id(
self):
exp_stats = stats_services.get_multiple_exploration_stats_by_version(
'invalid_exp_id', [1])
self.assertEqual(exp_stats, [None])

def test_get_exploration_stats_multi_with_invalid_exp_id(self):
exp_version_references = [
exp_domain.ExpVersionReference('exp_id_1', 1),
exp_domain.ExpVersionReference('exp_id_2', 2)]
exploration_stats_models = (
stats_models.ExplorationStatsModel.get_multi_stats_models(
exp_version_references))
self.assertEqual(exploration_stats_models, [None, None])
exp_stats_list = stats_services.get_exploration_stats_multi(
exp_version_references)
self.assertEqual(len(exp_stats_list), 2)
self.assertEqual(exp_stats_list[0].exp_id, 'exp_id_1')
self.assertEqual(exp_stats_list[0].exp_version, 1)
self.assertEqual(exp_stats_list[1].exp_id, 'exp_id_2')
self.assertEqual(exp_stats_list[1].exp_version, 2)


class ExplorationIssuesTests(test_utils.GenericTestBase):
"""Unit tests focused on services related to exploration issues."""
def setUp(self):
super(ExplorationIssuesTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.exp = self.save_new_linear_exp_with_state_names_and_interactions(
'exp_id', self.owner_id,
['A', 'B'], ['TextInput', 'EndExploration'])

def _create_cst_playthrough(self, state_names):
"""Creates a Cyclic State Transitions playthrough and returns its id.
Args:
state_names: list(str). The states of the cycle, where only the
first and last values are the same. Requires at least 2 distinct
values.
Returns:
str. The ID of the new playthrough.
"""
issue_customization_args = {'state_names': {'value': state_names}}
actions = [{
'action_type': 'ExplorationStart',
'action_customization_args': {
'state_name': {'value': state_names[0]},
},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
}]
actions.extend(
{
'action_type': 'AnswerSubmit',
'action_customization_args': {
'state_name': {'value': state_name},
'dest_state_name': {'value': dest_state_name},
'interaction_id': {'value': 'TextInput'},
'submitted_answer': {'value': 'Foo!'},
'feedback': {'value': ''},
'time_spent_in_exp_in_msecs': {'value': 1000},
},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
}
for state_name, dest_state_name in python_utils.ZIP(
state_names[:-1], state_names[1:]))
actions.append({
'action_type': 'ExplorationQuit',
'action_customization_args': {
'state_name': {'value': state_names[-1]},
'time_spent_in_state_in_msecs': {'value': 1000},
},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
})
return stats_models.PlaythroughModel.create(
self.exp.id, self.exp.version, 'CyclicStateTransitions',
issue_customization_args, actions)

def _create_eq_playthrough(self, state_name):
"""Creates an Early Quit playthrough and returns its id.
Args:
state_name: str. The state the early quit occurred from.
Returns:
str. The ID of the new playthrough.
"""
issue_customization_args = {
'state_name': {'value': state_name},
'time_spent_in_exp_in_msecs': {'value': 200},
}
actions = [{
'action_type': 'ExplorationStart',
'action_customization_args': {'state_name': {'value': state_name}},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
}, {
'action_type': 'ExplorationQuit',
'action_customization_args': {
'state_name': {'value': state_name},
'time_spent_in_state_in_msecs': {'value': 1000},
},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
}]
return stats_models.PlaythroughModel.create(
self.exp.id, self.exp.version, 'EarlyQuit',
issue_customization_args, actions)

def _create_mis_playthrough(
self, state_name, num_times_answered_incorrectly):
"""Creates a Multiple Incorrect Submissions playthrough and returns its
id.
Args:
state_name: str. The state the answers were submitted to.
num_times_answered_incorrectly: int. Number of times incorrect
answers were submitted.
Returns:
str. The ID of the new playthrough.
"""
issue_customization_args = {
'state_name': {'value': state_name},
'num_times_answered_incorrectly': {
'value': num_times_answered_incorrectly
},
}
actions = [{
'action_type': 'ExplorationStart',
'action_customization_args': {'state_name': {'value': state_name}},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
}]
actions.extend(
{
'action_type': 'AnswerSubmit',
'action_customization_args': {
'state_name': {'value': state_name},
'dest_state_name': {'value': state_name},
'interaction_id': {'value': 'TextInput'},
'submitted_answer': {'value': 'Foo!'},
'feedback': {'value': ''},
'time_spent_in_exp_in_msecs': {'value': 1000},
},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
}
for _ in python_utils.RANGE(num_times_answered_incorrectly))
actions.append({
'action_type': 'ExplorationQuit',
'action_customization_args': {
'state_name': {'value': state_name},
'time_spent_in_state_in_msecs': {'value': 1000},
},
'schema_version': stats_models.CURRENT_ACTION_SCHEMA_VERSION,
})
return stats_models.PlaythroughModel.create(
self.exp.id, self.exp.version, 'MultipleIncorrectSubmissions',
issue_customization_args, actions)

def _create_cst_exp_issue(self, playthrough_ids, state_names):
"""Returns a new Cyclic State Transitions issue domain object.
Args:
playthrough_ids: list(str). List of playthrough IDs demonstrating a
Cyclic State Transitions issue.
state_names: list(str). The states of the cycle, where only the
first and last values are the same. Requires at least 2 distinct
values.
Returns:
stats_domain.ExplorationIssue. The new issue.
"""
issue_customization_args = {'state_names': {'value': state_names}}
is_valid = True
return stats_domain.ExplorationIssue(
'CyclicStateTransitions', issue_customization_args, playthrough_ids,
stats_models.CURRENT_ISSUE_SCHEMA_VERSION, is_valid)

def _create_eq_exp_issue(self, playthrough_ids, state_name):
"""Returns a new Early Quit issue domain object.
Args:
playthrough_ids: list(str). List of playthrough IDs demonstrating an
Early Quit issue.
state_name: str. The state the early quit occurred from.
Returns:
stats_domain.ExplorationIssue. The new issue.
"""
issue_customization_args = {
'state_name': {'value': state_name},
'time_spent_in_exp_in_msecs': {'value': 200},
}
is_valid = True
return stats_domain.ExplorationIssue(
'EarlyQuit', issue_customization_args, playthrough_ids,
stats_models.CURRENT_ISSUE_SCHEMA_VERSION, is_valid)

def _create_mis_exp_issue(
self, playthrough_ids, state_name, num_times_answered_incorrectly):
"""Returns a new Multiple Incorrect Submissions issue domain object.
Args:
playthrough_ids: list(str). List of playthrough IDs demonstrating a
Multiple Incorrect Submissions issue.
state_name: str. The state the answers were submitted to.
num_times_answered_incorrectly: int. Number of times incorrect
answers were submitted.
Returns:
stats_domain.ExplorationIssue. The new issue.
"""
issue_customization_args = {
'state_name': {'value': state_name},
'num_times_answered_incorrectly': {
'value': num_times_answered_incorrectly
},
}
is_valid = True
return stats_domain.ExplorationIssue(
'MultipleIncorrectSubmissions', issue_customization_args,
playthrough_ids, stats_models.CURRENT_ISSUE_SCHEMA_VERSION,
is_valid)

def test_create_exp_issues_model(self):
exp_issues = stats_domain.ExplorationIssues(
self.exp.id, stats_models.CURRENT_ISSUE_SCHEMA_VERSION, [])
stats_services.create_exp_issues_model(exp_issues)
exp_issues = stats_models.ExplorationIssuesModel.get_model(
self.exp.id, self.exp.version)
self.assertEqual(exp_issues.exp_id, self.exp.id)
self.assertEqual(exp_issues.exp_version, self.exp.version)
self.assertEqual(exp_issues.unresolved_issues, [])

def test_get_exp_issues_creates_new_empty_exp_issues_when_missing(self):
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version))
self.assertEqual(exp_issues.exp_id, self.exp.id)
self.assertEqual(exp_issues.unresolved_issues, [])

def test_delete_playthroughs_multi(self):
playthrough_ids = [
self._create_eq_playthrough('A'),
self._create_cst_playthrough(['A', 'B', 'A']),
self._create_mis_playthrough('A', 3),
]
stats_services.delete_playthroughs_multi(playthrough_ids)
self.assertEqual(
stats_models.PlaythroughModel.get_multi(playthrough_ids),
[None, None, None])

def test_save_exp_issues_model(self):
eq_playthrough_ids = [self._create_eq_playthrough('A')]
cst_playthrough_ids = [self._create_cst_playthrough(['A', 'B', 'A'])]
mis_playthrough_ids = [self._create_mis_playthrough('A', 3)]
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_eq_exp_issue(eq_playthrough_ids, 'A'),
self._create_mis_exp_issue(mis_playthrough_ids, 'A', 3),
self._create_cst_exp_issue(cst_playthrough_ids, ['A', 'B', 'A'])
]))
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version))
self.assertEqual(len(exp_issues.unresolved_issues), 3)
self.assertEqual(
exp_issues.unresolved_issues[0].playthrough_ids,
eq_playthrough_ids)
self.assertEqual(
exp_issues.unresolved_issues[1].playthrough_ids,
mis_playthrough_ids)
self.assertEqual(
exp_issues.unresolved_issues[2].playthrough_ids,
cst_playthrough_ids)

def test_cst_exp_issue_is_invalidated_when_state_is_deleted(self):
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_cst_exp_issue(
[self._create_cst_playthrough(['A', 'B', 'A'])],
['A', 'B', 'A'])
]))
exp_services.update_exploration(self.owner_id, self.exp.id, [
exp_domain.ExplorationChange(
{'cmd': 'delete_state', 'state_name': 'B'})
], 'change')
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 1))
self.assertEqual(len(exp_issues.unresolved_issues), 1)
self.assertFalse(exp_issues.unresolved_issues[0].is_valid)

def test_cst_exp_issue_is_updated_when_state_is_renamed(self):
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_cst_exp_issue(
[self._create_cst_playthrough(['A', 'B', 'A'])],
['A', 'B', 'A'])
]))
exp_services.update_exploration(self.owner_id, self.exp.id, [
exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'A',
'new_state_name': 'Z',
})
], 'change')
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 1))
self.assertEqual(len(exp_issues.unresolved_issues), 1)
exp_issue = exp_issues.unresolved_issues[0]
self.assertTrue(exp_issue.is_valid)
self.assertEqual(
exp_issue.issue_customization_args['state_names']['value'],
['Z', 'B', 'Z'])
self.assertEqual(len(exp_issue.playthrough_ids), 1)
playthrough = stats_models.PlaythroughModel.get_by_id(
exp_issue.playthrough_ids[0])
self.assertEqual(
playthrough.issue_customization_args['state_names']['value'],
['Z', 'B', 'Z'])
self.assertEqual(len(playthrough.actions), 4)
actions = playthrough.actions
self.assertEqual(
actions[0]['action_customization_args']['state_name']['value'],
'Z')
self.assertEqual(
actions[1]['action_customization_args']['state_name']['value'],
'Z')
self.assertEqual(
actions[2]['action_customization_args']['dest_state_name']['value'],
'Z')
self.assertEqual(
actions[3]['action_customization_args']['state_name']['value'],
'Z')

def test_eq_exp_issue_is_invalidated_when_state_is_deleted(self):
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_eq_exp_issue(
[self._create_eq_playthrough('B')], 'B')
]))
exp_services.update_exploration(self.owner_id, self.exp.id, [
exp_domain.ExplorationChange(
{'cmd': 'delete_state', 'state_name': 'B'})
], 'change')
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 1))
self.assertEqual(len(exp_issues.unresolved_issues), 1)
self.assertFalse(exp_issues.unresolved_issues[0].is_valid)

def test_eq_exp_issue_is_updated_when_state_is_renamed(self):
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_eq_exp_issue(
[self._create_eq_playthrough('A')], 'A')
]))
exp_services.update_exploration(self.owner_id, self.exp.id, [
exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'A',
'new_state_name': 'Z',
})
], 'change')
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 1))
self.assertEqual(len(exp_issues.unresolved_issues), 1)
exp_issue = exp_issues.unresolved_issues[0]
self.assertTrue(exp_issue.is_valid)
self.assertEqual(
exp_issue.issue_customization_args['state_name']['value'], 'Z')
self.assertEqual(len(exp_issue.playthrough_ids), 1)
playthrough = stats_models.PlaythroughModel.get_by_id(
exp_issue.playthrough_ids[0])
self.assertEqual(
playthrough.issue_customization_args['state_name']['value'], 'Z')
self.assertEqual(len(playthrough.actions), 2)
actions = playthrough.actions
self.assertEqual(
actions[0]['action_customization_args']['state_name']['value'], 'Z')
self.assertEqual(
actions[1]['action_customization_args']['state_name']['value'], 'Z')

def test_mis_exp_issue_is_invalidated_when_state_is_deleted(self):
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_mis_exp_issue(
[self._create_mis_playthrough('B', 2)], 'B', 2)
]))
exp_services.update_exploration(self.owner_id, self.exp.id, [
exp_domain.ExplorationChange(
{'cmd': 'delete_state', 'state_name': 'B'}),
], 'Delete B')
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 1))
self.assertEqual(len(exp_issues.unresolved_issues), 1)
self.assertFalse(exp_issues.unresolved_issues[0].is_valid)

def test_mis_exp_issue_is_updated_when_state_is_renamed(self):
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_mis_exp_issue(
[self._create_mis_playthrough('A', 2)], 'A', 2)
]))
exp_services.update_exploration(self.owner_id, self.exp.id, [
exp_domain.ExplorationChange({
'cmd': 'rename_state',
'old_state_name': 'A',
'new_state_name': 'Z',
})
], 'change')
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 1))
self.assertEqual(len(exp_issues.unresolved_issues), 1)
exp_issue = exp_issues.unresolved_issues[0]
self.assertTrue(exp_issue.is_valid)
self.assertEqual(
exp_issue.issue_customization_args['state_name']['value'], 'Z')
self.assertEqual(len(exp_issue.playthrough_ids), 1)
playthrough = stats_models.PlaythroughModel.get_by_id(
exp_issue.playthrough_ids[0])
self.assertEqual(
playthrough.issue_customization_args['state_name']['value'], 'Z')
self.assertEqual(len(playthrough.actions), 4)
actions = playthrough.actions
self.assertEqual(
actions[0]['action_customization_args']['state_name']['value'], 'Z')
self.assertEqual(
actions[1]['action_customization_args']['state_name']['value'], 'Z')
self.assertEqual(
actions[2]['action_customization_args']['state_name']['value'], 'Z')
self.assertEqual(
actions[3]['action_customization_args']['state_name']['value'], 'Z')

def test_revert_exploration_recovers_exp_issues(self):
stats_services.save_exp_issues_model(
stats_domain.ExplorationIssues(self.exp.id, self.exp.version, [
self._create_eq_exp_issue(
[self._create_eq_playthrough('B')], 'B'),
self._create_cst_exp_issue(
[self._create_cst_playthrough(['A', 'B', 'A'])],
['A', 'B', 'A']),
self._create_mis_exp_issue(
[self._create_mis_playthrough('B', 3)], 'B', 3),
]))
exp_services.update_exploration(self.owner_id, self.exp.id, [
exp_domain.ExplorationChange(
{'cmd': 'delete_state', 'state_name': 'B'}),
], 'commit')
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 1))
self.assertEqual(len(exp_issues.unresolved_issues), 3)
self.assertFalse(exp_issues.unresolved_issues[0].is_valid)
self.assertFalse(exp_issues.unresolved_issues[1].is_valid)
self.assertFalse(exp_issues.unresolved_issues[2].is_valid)
exp_services.revert_exploration(self.owner_id, self.exp.id, 2, 1)
exp_issues = (
stats_services.get_exp_issues(self.exp.id, self.exp.version + 2))
self.assertEqual(len(exp_issues.unresolved_issues), 3)
self.assertTrue(exp_issues.unresolved_issues[0].is_valid)
self.assertTrue(exp_issues.unresolved_issues[1].is_valid)
self.assertTrue(exp_issues.unresolved_issues[2].is_valid)


class EventLogEntryTests(test_utils.GenericTestBase):
"""Test for the event log creation."""
def test_create_events(self):
"""Basic test that makes sure there are no exceptions thrown."""
event_services.StartExplorationEventHandler.record(
'eid', 2, 'state', 'session', {}, feconf.PLAY_TYPE_NORMAL)
event_services.MaybeLeaveExplorationEventHandler.record(
'eid', 2, 'state', 'session', 27.2, {}, feconf.PLAY_TYPE_NORMAL)


class AnswerEventTests(test_utils.GenericTestBase):
"""Test recording new answer operations through events."""
SESSION_ID = 'SESSION_ID'
TIME_SPENT = 5.0
PARAMS = {}

def test_record_answer(self):
self.save_new_default_exploration('eid', 'fake@user.com')
exp = exp_fetchers.get_exploration_by_id('eid')
first_state_name = exp.init_state_name
second_state_name = 'State 2'
third_state_name = 'State 3'
exp_services.update_exploration('fake@user.com', 'eid', [
exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': first_state_name,
'property_name': exp_domain.STATE_PROPERTY_INTERACTION_ID,
'new_value': 'TextInput',
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': first_state_name,
'property_name':
exp_domain.STATE_PROPERTY_INTERACTION_CUST_ARGS,
'new_value': {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': 'Enter here'
}
},
'rows': {'value': 1}
}
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': first_state_name,
'property_name':
exp_domain.STATE_PROPERTY_NEXT_CONTENT_ID_INDEX,
'new_value': 1
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_ADD_STATE,
'state_name': second_state_name,
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_ADD_STATE,
'state_name': third_state_name,
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': second_state_name,
'property_name': exp_domain.STATE_PROPERTY_INTERACTION_ID,
'new_value': 'TextInput',
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': second_state_name,
'property_name':
exp_domain.STATE_PROPERTY_INTERACTION_CUST_ARGS,
'new_value': {
'placeholder': {
'value': {
'content_id': 'ca_placeholder_0',
'unicode_str': 'Enter here'
}
},
'rows': {'value': 1}
}
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': second_state_name,
'property_name':
exp_domain.STATE_PROPERTY_NEXT_CONTENT_ID_INDEX,
'new_value': 1
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': third_state_name,
'property_name': exp_domain.STATE_PROPERTY_INTERACTION_ID,
'new_value': 'Continue',
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': third_state_name,
'property_name':
exp_domain.STATE_PROPERTY_INTERACTION_CUST_ARGS,
'new_value': {
'buttonText': {
'value': {
'content_id': 'ca_buttonText_1',
'unicode_str': 'Continue'
}
},
}
}), exp_domain.ExplorationChange({
'cmd': exp_domain.CMD_EDIT_STATE_PROPERTY,
'state_name': third_state_name,
'property_name':
exp_domain.STATE_PROPERTY_NEXT_CONTENT_ID_INDEX,
'new_value': 2
})], 'Add new state')
exp = exp_fetchers.get_exploration_by_id('eid')
exp_version = exp.version
for state_name in [first_state_name, second_state_name]:
state_answers = stats_services.get_state_answers(
'eid', exp_version, state_name)
self.assertEqual(state_answers, None)
# Answer is a string.
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, first_state_name, 'TextInput', 0, 0,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid1', self.TIME_SPENT,
self.PARAMS, 'answer1')
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, first_state_name, 'TextInput', 0, 1,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid2', self.TIME_SPENT,
self.PARAMS, 'answer1')
# Answer is a dict.
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, first_state_name, 'TextInput', 1, 0,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid1', self.TIME_SPENT,
self.PARAMS, {'x': 1.0, 'y': 5.0})
# Answer is a number.
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, first_state_name, 'TextInput', 2, 0,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid1', self.TIME_SPENT,
self.PARAMS, 10)
# Answer is a list of dicts.
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, first_state_name, 'TextInput', 3, 0,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid1', self.TIME_SPENT,
self.PARAMS, [{'a': 'some', 'b': 'text'}, {'a': 1.0, 'c': 2.0}])
# Answer is a list.
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, second_state_name, 'TextInput', 2, 0,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid3', self.TIME_SPENT,
self.PARAMS, [2, 4, 8])
# Answer is a unicode string.
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, second_state_name, 'TextInput', 1, 1,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid4', self.TIME_SPENT,
self.PARAMS, self.UNICODE_TEST_STRING)
# Answer is None (such as for Continue).
event_services.AnswerSubmissionEventHandler.record(
'eid', exp_version, third_state_name, 'Continue', 1, 1,
exp_domain.EXPLICIT_CLASSIFICATION, 'sid5', self.TIME_SPENT,
self.PARAMS, None)
expected_submitted_answer_list1 = [{
'answer': 'answer1', 'time_spent_in_sec': 5.0,
'answer_group_index': 0, 'rule_spec_index': 0,
'classification_categorization': 'explicit', 'session_id': 'sid1',
'interaction_id': 'TextInput', 'params': {}
}, {
'answer': 'answer1', 'time_spent_in_sec': 5.0,
'answer_group_index': 0, 'rule_spec_index': 1,
'classification_categorization': 'explicit', 'session_id': 'sid2',
'interaction_id': 'TextInput', 'params': {}
}, {
'answer': {'x': 1.0, 'y': 5.0}, 'time_spent_in_sec': 5.0,
'answer_group_index': 1, 'rule_spec_index': 0,
'classification_categorization': 'explicit', 'session_id': 'sid1',
'interaction_id': 'TextInput', 'params': {}
}, {
'answer': 10, 'time_spent_in_sec': 5.0, 'answer_group_index': 2,
'rule_spec_index': 0, 'classification_categorization': 'explicit',
'session_id': 'sid1', 'interaction_id': 'TextInput', 'params': {}
}, {
'answer': [{'a': 'some', 'b': 'text'}, {'a': 1.0, 'c': 2.0}],
'time_spent_in_sec': 5.0, 'answer_group_index': 3,
'rule_spec_index': 0, 'classification_categorization': 'explicit',
'session_id': 'sid1', 'interaction_id': 'TextInput', 'params': {}
}]
expected_submitted_answer_list2 = [{
'answer': [2, 4, 8], 'time_spent_in_sec': 5.0,
'answer_group_index': 2, 'rule_spec_index': 0,
'classification_categorization': 'explicit', 'session_id': 'sid3',
'interaction_id': 'TextInput', 'params': {}
}, {
'answer': self.UNICODE_TEST_STRING, 'time_spent_in_sec': 5.0,
'answer_group_index': 1, 'rule_spec_index': 1,
'classification_categorization': 'explicit', 'session_id': 'sid4',
'interaction_id': 'TextInput', 'params': {}
}]
expected_submitted_answer_list3 = [{
'answer': None, 'time_spent_in_sec': 5.0, 'answer_group_index': 1,
'rule_spec_index': 1, 'classification_categorization': 'explicit',
'session_id': 'sid5', 'interaction_id': 'Continue', 'params': {}
}]
state_answers = stats_services.get_state_answers(
'eid', exp_version, first_state_name)
self.assertEqual(
state_answers.get_submitted_answer_dict_list(),
expected_submitted_answer_list1)
state_answers = stats_services.get_state_answers(
'eid', exp_version, second_state_name)
self.assertEqual(
state_answers.get_submitted_answer_dict_list(),
expected_submitted_answer_list2)
state_answers = stats_services.get_state_answers(
'eid', exp_version, third_state_name)
self.assertEqual(
state_answers.get_submitted_answer_dict_list(),
expected_submitted_answer_list3)


class RecordAnswerTests(test_utils.GenericTestBase):
    """Tests for functionality related to recording and retrieving answers."""

    EXP_ID = 'exp_id0'

    def setUp(self):
        super(RecordAnswerTests, self).setUp()
        self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
        self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)

        self.exploration = self.save_new_valid_exploration(
            self.EXP_ID, self.owner_id, end_state_name='End')

    def test_record_answer_without_retrieving_it_first(self):
        stats_services.record_answer(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            stats_domain.SubmittedAnswer(
                'first answer', 'TextInput', 0,
                0, exp_domain.EXPLICIT_CLASSIFICATION, {},
                'a_session_id_val', 1.0))

        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(state_answers.get_submitted_answer_dict_list(), [{
            'answer': 'first answer',
            'time_spent_in_sec': 1.0,
            'answer_group_index': 0,
            'rule_spec_index': 0,
            'classification_categorization': 'explicit',
            'session_id': 'a_session_id_val',
            'interaction_id': 'TextInput',
            'params': {}
        }])

    def test_record_and_retrieve_single_answer(self):
        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertIsNone(state_answers)

        stats_services.record_answer(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            stats_domain.SubmittedAnswer(
                'some text', 'TextInput', 0,
                1, exp_domain.EXPLICIT_CLASSIFICATION, {},
                'a_session_id_val', 10.0))

        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(state_answers.exploration_id, 'exp_id0')
        self.assertEqual(state_answers.exploration_version, 1)
        self.assertEqual(
            state_answers.state_name, feconf.DEFAULT_INIT_STATE_NAME)
        self.assertEqual(state_answers.interaction_id, 'TextInput')
        self.assertEqual(state_answers.get_submitted_answer_dict_list(), [{
            'answer': 'some text',
            'time_spent_in_sec': 10.0,
            'answer_group_index': 0,
            'rule_spec_index': 1,
            'classification_categorization': 'explicit',
            'session_id': 'a_session_id_val',
            'interaction_id': 'TextInput',
            'params': {}
        }])

    def test_record_and_retrieve_single_answer_with_preexisting_entry(self):
        stats_services.record_answer(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            stats_domain.SubmittedAnswer(
                'first answer', 'TextInput', 0,
                0, exp_domain.EXPLICIT_CLASSIFICATION, {},
                'a_session_id_val', 1.0))

        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(state_answers.get_submitted_answer_dict_list(), [{
            'answer': 'first answer',
            'time_spent_in_sec': 1.0,
            'answer_group_index': 0,
            'rule_spec_index': 0,
            'classification_categorization': 'explicit',
            'session_id': 'a_session_id_val',
            'interaction_id': 'TextInput',
            'params': {}
        }])

        stats_services.record_answer(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            stats_domain.SubmittedAnswer(
                'some text', 'TextInput', 0,
                1, exp_domain.EXPLICIT_CLASSIFICATION, {},
                'a_session_id_val', 10.0))

        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(state_answers.exploration_id, 'exp_id0')
        self.assertEqual(state_answers.exploration_version, 1)
        self.assertEqual(
            state_answers.state_name, feconf.DEFAULT_INIT_STATE_NAME)
        self.assertEqual(state_answers.interaction_id, 'TextInput')
        self.assertEqual(state_answers.get_submitted_answer_dict_list(), [{
            'answer': 'first answer',
            'time_spent_in_sec': 1.0,
            'answer_group_index': 0,
            'rule_spec_index': 0,
            'classification_categorization': 'explicit',
            'session_id': 'a_session_id_val',
            'interaction_id': 'TextInput',
            'params': {}
        }, {
            'answer': 'some text',
            'time_spent_in_sec': 10.0,
            'answer_group_index': 0,
            'rule_spec_index': 1,
            'classification_categorization': 'explicit',
            'session_id': 'a_session_id_val',
            'interaction_id': 'TextInput',
            'params': {}
        }])

    def test_record_many_answers(self):
        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertIsNone(state_answers)

        submitted_answer_list = [
            stats_domain.SubmittedAnswer(
                'answer a', 'TextInput', 0, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 10.0),
            stats_domain.SubmittedAnswer(
                'answer ccc', 'TextInput', 1, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 3.0),
            stats_domain.SubmittedAnswer(
                'answer bbbbb', 'TextInput', 1, 0,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 7.5),
        ]
        stats_services.record_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            submitted_answer_list)

        # The order of the answers returned depends on the size of the answers.
        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(state_answers.exploration_id, 'exp_id0')
        self.assertEqual(state_answers.exploration_version, 1)
        self.assertEqual(
            state_answers.state_name, feconf.DEFAULT_INIT_STATE_NAME)
        self.assertEqual(state_answers.interaction_id, 'TextInput')
        self.assertEqual(state_answers.get_submitted_answer_dict_list(), [{
            'answer': 'answer a',
            'time_spent_in_sec': 10.0,
            'answer_group_index': 0,
            'rule_spec_index': 1,
            'classification_categorization': 'explicit',
            'session_id': 'session_id_v',
            'interaction_id': 'TextInput',
            'params': {}
        }, {
            'answer': 'answer ccc',
            'time_spent_in_sec': 3.0,
            'answer_group_index': 1,
            'rule_spec_index': 1,
            'classification_categorization': 'explicit',
            'session_id': 'session_id_v',
            'interaction_id': 'TextInput',
            'params': {}
        }, {
            'answer': 'answer bbbbb',
            'time_spent_in_sec': 7.5,
            'answer_group_index': 1,
            'rule_spec_index': 0,
            'classification_categorization': 'explicit',
            'session_id': 'session_id_v',
            'interaction_id': 'TextInput',
            'params': {}
        }])

    def test_record_answers_exceeding_one_shard(self):
        # Use a smaller max answer list size so fewer answers are needed to
        # exceed a shard.
        with self.swap(
            stats_models.StateAnswersModel, '_MAX_ANSWER_LIST_BYTE_SIZE',
            100000):
            state_answers = stats_services.get_state_answers(
                self.EXP_ID, self.exploration.version,
                self.exploration.init_state_name)
            self.assertIsNone(state_answers)

            submitted_answer_list = [
                stats_domain.SubmittedAnswer(
                    'answer a', 'TextInput', 0, 1,
                    exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v',
                    10.0),
                stats_domain.SubmittedAnswer(
                    'answer ccc', 'TextInput', 1, 1,
                    exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v',
                    3.0),
                stats_domain.SubmittedAnswer(
                    'answer bbbbb', 'TextInput', 1, 0,
                    exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v',
                    7.5),
            ]
            stats_services.record_answers(
                self.EXP_ID, self.exploration.version,
                self.exploration.init_state_name, 'TextInput',
                submitted_answer_list * 200)

            # Verify that more than 1 shard was stored. The index shard
            # (shard_id 0) is not included in the shard count.
            master_model = stats_models.StateAnswersModel.get_master_model(
                self.exploration.id, self.exploration.version,
                self.exploration.init_state_name)
            self.assertGreater(master_model.shard_count, 0)

            # The order of the answers returned depends on the size of the
            # answers.
            state_answers = stats_services.get_state_answers(
                self.EXP_ID, self.exploration.version,
                self.exploration.init_state_name)
            self.assertEqual(state_answers.exploration_id, 'exp_id0')
            self.assertEqual(state_answers.exploration_version, 1)
            self.assertEqual(
                state_answers.state_name, feconf.DEFAULT_INIT_STATE_NAME)
            self.assertEqual(state_answers.interaction_id, 'TextInput')
            self.assertEqual(
                len(state_answers.get_submitted_answer_dict_list()), 600)

    def test_record_many_answers_with_preexisting_entry(self):
        stats_services.record_answer(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            stats_domain.SubmittedAnswer(
                '1 answer', 'TextInput', 0,
                0, exp_domain.EXPLICIT_CLASSIFICATION, {},
                'a_session_id_val', 1.0))

        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(state_answers.get_submitted_answer_dict_list(), [{
            'answer': '1 answer',
            'time_spent_in_sec': 1.0,
            'answer_group_index': 0,
            'rule_spec_index': 0,
            'classification_categorization': 'explicit',
            'session_id': 'a_session_id_val',
            'interaction_id': 'TextInput',
            'params': {}
        }])

        submitted_answer_list = [
            stats_domain.SubmittedAnswer(
                'answer aaa', 'TextInput', 0, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 10.0),
            stats_domain.SubmittedAnswer(
                'answer ccccc', 'TextInput', 1, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 3.0),
            stats_domain.SubmittedAnswer(
                'answer bbbbbbb', 'TextInput', 1, 0,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 7.5),
        ]
        stats_services.record_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            submitted_answer_list)

        # The order of the answers returned depends on the size of the answers.
        state_answers = stats_services.get_state_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(state_answers.exploration_id, 'exp_id0')
        self.assertEqual(state_answers.exploration_version, 1)
        self.assertEqual(
            state_answers.state_name, feconf.DEFAULT_INIT_STATE_NAME)
        self.assertEqual(state_answers.interaction_id, 'TextInput')
        self.assertEqual(state_answers.get_submitted_answer_dict_list(), [{
            'answer': '1 answer',
            'time_spent_in_sec': 1.0,
            'answer_group_index': 0,
            'rule_spec_index': 0,
            'classification_categorization': 'explicit',
            'session_id': 'a_session_id_val',
            'interaction_id': 'TextInput',
            'params': {}
        }, {
            'answer': 'answer aaa',
            'time_spent_in_sec': 10.0,
            'answer_group_index': 0,
            'rule_spec_index': 1,
            'classification_categorization': 'explicit',
            'session_id': 'session_id_v',
            'interaction_id': 'TextInput',
            'params': {}
        }, {
            'answer': 'answer ccccc',
            'time_spent_in_sec': 3.0,
            'answer_group_index': 1,
            'rule_spec_index': 1,
            'classification_categorization': 'explicit',
            'session_id': 'session_id_v',
            'interaction_id': 'TextInput',
            'params': {}
        }, {
            'answer': 'answer bbbbbbb',
            'time_spent_in_sec': 7.5,
            'answer_group_index': 1,
            'rule_spec_index': 0,
            'classification_categorization': 'explicit',
            'session_id': 'session_id_v',
            'interaction_id': 'TextInput',
            'params': {}
        }])


class SampleAnswerTests(test_utils.GenericTestBase):
    """Tests for functionality related to retrieving sample answers."""

    EXP_ID = 'exp_id0'

    def setUp(self):
        super(SampleAnswerTests, self).setUp()
        self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
        self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)

        self.exploration = self.save_new_valid_exploration(
            self.EXP_ID, self.owner_id, end_state_name='End')

    def test_at_most_100_answers_returned_even_if_there_are_lots(self):
        submitted_answer_list = [
            stats_domain.SubmittedAnswer(
                'answer a', 'TextInput', 0, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 10.0),
            stats_domain.SubmittedAnswer(
                'answer ccc', 'TextInput', 1, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 3.0),
            stats_domain.SubmittedAnswer(
                'answer bbbbb', 'TextInput', 1, 0,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 7.5),
        ]
        # Record 600 answers.
        stats_services.record_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            submitted_answer_list * 200)

        sample_answers = stats_services.get_sample_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(len(sample_answers), 100)

    def test_exactly_100_answers_returned_if_main_shard_has_100_answers(self):
        submitted_answer_list = [
            stats_domain.SubmittedAnswer(
                'answer a', 'TextInput', 0, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 10.0)
        ]
        # Record 100 answers.
        stats_services.record_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            submitted_answer_list * 100)

        sample_answers = stats_services.get_sample_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(sample_answers, ['answer a'] * 100)

    def test_all_answers_returned_if_main_shard_has_few_answers(self):
        submitted_answer_list = [
            stats_domain.SubmittedAnswer(
                'answer a', 'TextInput', 0, 1,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 10.0),
            stats_domain.SubmittedAnswer(
                'answer bbbbb', 'TextInput', 1, 0,
                exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v', 7.5),
        ]
        # Record 2 answers.
        stats_services.record_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name, 'TextInput',
            submitted_answer_list)

        sample_answers = stats_services.get_sample_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(sample_answers, ['answer a', 'answer bbbbb'])

    def test_only_sample_answers_in_main_shard_returned(self):
        # Use a smaller max answer list size so fewer answers are needed to
        # exceed a shard.
        with self.swap(
            stats_models.StateAnswersModel, '_MAX_ANSWER_LIST_BYTE_SIZE',
            15000):
            state_answers = stats_services.get_state_answers(
                self.EXP_ID, self.exploration.version,
                self.exploration.init_state_name)
            self.assertIsNone(state_answers)

            submitted_answer_list = [
                stats_domain.SubmittedAnswer(
                    'answer ccc', 'TextInput', 1, 1,
                    exp_domain.EXPLICIT_CLASSIFICATION, {}, 'session_id_v',
                    3.0),
            ]
            stats_services.record_answers(
                self.EXP_ID, self.exploration.version,
                self.exploration.init_state_name, 'TextInput',
                submitted_answer_list * 100)

        # Verify more than 1 shard was stored. The index shard (shard_id 0)
        # is not included in the shard count. Since a total of 100 answers were
        # submitted, there must therefore be fewer than 100 answers in the
        # index shard.
        model = stats_models.StateAnswersModel.get('%s:%s:%s:%s' % (
            self.exploration.id,
            python_utils.UNICODE(self.exploration.version),
            self.exploration.init_state_name, '0'))
        self.assertEqual(model.shard_count, 1)

        # Verify that the list of sample answers returned contains fewer than
        # 100 answers, although a total of 100 answers were submitted.
        sample_answers = stats_services.get_sample_answers(
            self.EXP_ID, self.exploration.version,
            self.exploration.init_state_name)
        self.assertLess(len(sample_answers), 100)

    def test_get_sample_answers_with_invalid_exp_id(self):
        sample_answers = stats_services.get_sample_answers(
            'invalid_exp_id', self.exploration.version,
            self.exploration.init_state_name)
        self.assertEqual(sample_answers, [])


class LearnerAnswerDetailsServicesTest(test_utils.GenericTestBase):
    """Test for services related to learner answer details."""

    def setUp(self):
        super(LearnerAnswerDetailsServicesTest, self).setUp()
        self.exp_id = 'exp_id1'
        self.state_name = 'intro'
        self.question_id = 'q_id_1'
        self.interaction_id = 'TextInput'
        self.state_reference_exploration = (
            stats_models.LearnerAnswerDetailsModel.
            get_state_reference_for_exploration(
                self.exp_id, self.state_name))
        self.state_reference_question = (
            stats_models.LearnerAnswerDetailsModel.
            get_state_reference_for_question(
                self.question_id))
        self.learner_answer_details_model_exploration = (
            stats_models.LearnerAnswerDetailsModel.create_model_instance(
                feconf.ENTITY_TYPE_EXPLORATION,
                self.state_reference_exploration, self.interaction_id, [],
                feconf.CURRENT_LEARNER_ANSWER_INFO_SCHEMA_VERSION, 0))
        self.learner_answer_details_model_question = (
            stats_models.LearnerAnswerDetailsModel.create_model_instance(
                feconf.ENTITY_TYPE_QUESTION,
                self.state_reference_question, self.interaction_id, [],
                feconf.CURRENT_LEARNER_ANSWER_INFO_SCHEMA_VERSION, 0))

    def test_get_state_reference_for_exp_raises_error_for_fake_exp_id(self):
        self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
        self.get_user_id_from_email(self.OWNER_EMAIL)
        with self.assertRaisesRegexp(
            Exception, 'Entity .* not found'):
            stats_services.get_state_reference_for_exploration(
                'fake_exp', 'state_name')

    def test_get_state_reference_for_exp_raises_error_for_invalid_state_name(
            self):
        self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
        owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
        exploration = self.save_new_default_exploration(
            self.exp_id, owner_id)
        self.assertEqual(list(exploration.states.keys()), ['Introduction'])
        with self.assertRaisesRegexp(
            utils.InvalidInputException,
            'No state with the given state name was found'):
            stats_services.get_state_reference_for_exploration(
                self.exp_id, 'state_name')

    def test_get_state_reference_for_exp_for_valid_exp_id_and_state_name(self):
        self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
        owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
        exploration = self.save_new_default_exploration(
            self.exp_id, owner_id)
        self.assertEqual(list(exploration.states.keys()), ['Introduction'])
        state_reference = (
            stats_services.get_state_reference_for_exploration(
                self.exp_id, 'Introduction'))
        self.assertEqual(state_reference, 'exp_id1:Introduction')

    def test_get_state_reference_for_question_with_invalid_question_id(self):
        with self.assertRaisesRegexp(
            utils.InvalidInputException,
            'No question with the given question id exists'):
            stats_services.get_state_reference_for_question(
                'fake_question_id')

    def test_get_state_reference_for_question_with_valid_question_id(self):
        self.signup(self.EDITOR_EMAIL, self.EDITOR_USERNAME)
        editor_id = self.get_user_id_from_email(
            self.EDITOR_EMAIL)
        question_id = question_services.get_new_question_id()
        question = self.save_new_question(
            question_id, editor_id,
            self._create_valid_question_data('ABC'), ['skill_1'])
        self.assertNotEqual(question, None)
        state_reference = (
            stats_services.get_state_reference_for_question(question_id))
        self.assertEqual(state_reference, question_id)

    def test_update_learner_answer_details(self):
        answer = 'This is my answer'
        answer_details = 'This is my answer details'
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 0)
        stats_services.record_learner_answer_info(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration,
            self.interaction_id, answer, answer_details)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 1)
        answer = 'My answer'
        answer_details = 'My answer details'
        stats_services.record_learner_answer_info(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration,
            self.interaction_id, answer, answer_details)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 2)

    def test_delete_learner_answer_info(self):
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 0)
        answer = 'This is my answer'
        answer_details = 'This is my answer details'
        stats_services.record_learner_answer_info(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration,
            self.interaction_id, answer, answer_details)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 1)
        learner_answer_info_id = (
            learner_answer_details.learner_answer_info_list[0].id)
        stats_services.delete_learner_answer_info(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration,
            learner_answer_info_id)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 0)

    def test_delete_learner_answer_info_with_invalid_input(self):
        with self.assertRaisesRegexp(
            utils.InvalidInputException,
            'No learner answer details found with the given state reference'):
            stats_services.delete_learner_answer_info(
                feconf.ENTITY_TYPE_EXPLORATION, 'expID:stateName', 'id_1')

    def test_delete_learner_answer_info_with_unknown_learner_answer_info_id(
            self):
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 0)
        answer = 'This is my answer'
        answer_details = 'This is my answer details'
        stats_services.record_learner_answer_info(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration,
            self.interaction_id, answer, answer_details)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 1)
        learner_answer_info_id = 'id_1'
        with self.assertRaisesRegexp(
            Exception, 'Learner answer info with the given id not found'):
            stats_services.delete_learner_answer_info(
                feconf.ENTITY_TYPE_EXPLORATION,
                self.state_reference_exploration, learner_answer_info_id)

    def test_update_state_reference(self):
        new_state_reference = 'exp_id_2:state_name_2'
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertNotEqual(
            learner_answer_details.state_reference, new_state_reference)
        stats_services.update_state_reference(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration,
            new_state_reference)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, new_state_reference)
        self.assertEqual(
            learner_answer_details.state_reference, new_state_reference)

    def test_new_learner_answer_details_is_created(self):
        state_reference = 'exp_id_2:state_name_2'
        interaction_id = 'GraphInput'
        answer = 'Hello World'
        answer_details = 'Hello Programmer'
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, state_reference)
        self.assertEqual(learner_answer_details, None)
        stats_services.record_learner_answer_info(
            feconf.ENTITY_TYPE_EXPLORATION, state_reference,
            interaction_id, answer, answer_details)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, state_reference)
        self.assertNotEqual(learner_answer_details, None)
        self.assertEqual(
            learner_answer_details.state_reference, state_reference)
        self.assertEqual(learner_answer_details.interaction_id, interaction_id)
        self.assertEqual(
            len(learner_answer_details.learner_answer_info_list), 1)

    def test_update_with_invalid_input_raises_exception(self):
        with self.assertRaisesRegexp(
            utils.InvalidInputException,
            'No learner answer details found with the given state reference'):
            stats_services.update_state_reference(
                feconf.ENTITY_TYPE_EXPLORATION, 'expID:stateName',
                'newexp:statename')

    def test_delete_learner_answer_details_for_exploration_state(self):
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertNotEqual(learner_answer_details, None)
        stats_services.delete_learner_answer_details_for_exploration_state(
            self.exp_id, self.state_name)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_EXPLORATION, self.state_reference_exploration)
        self.assertEqual(learner_answer_details, None)

    def test_delete_learner_answer_details_for_question_state(self):
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_QUESTION, self.state_reference_question)
        self.assertNotEqual(learner_answer_details, None)
        stats_services.delete_learner_answer_details_for_question_state(
            self.question_id)
        learner_answer_details = stats_services.get_learner_answer_details(
            feconf.ENTITY_TYPE_QUESTION, self.state_reference_question)
        self.assertEqual(learner_answer_details, None)
# Copyright (c) 2021 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Log parsers test."""
import unittest
from unittest.mock import MagicMock, patch
from neural_compressor.ux.utils.parser import BenchmarkParserFactory, OptimizationParser


class TestTuningParser(unittest.TestCase):
    """TuningParser tests."""

    def test_parsing_empty_file_list(self) -> None:
        """Test parsing with no files."""
        optimization_parser = OptimizationParser([])
        parsed = optimization_parser.process()
        self.assertEqual({}, parsed)

    @patch("builtins.open", create=True)
    def test_parsing_empty_files(self, mocked_open: MagicMock) -> None:
        """Test parsing of files without any lines."""
        mocked_open.return_value.__enter__.return_value = []
        optimization_parser = OptimizationParser(["file.log"])
        parsed = optimization_parser.process()
        self.assertEqual({}, parsed)

    @patch("builtins.open", create=True)
    def test_parsing_simple_file(self, mocked_open: MagicMock) -> None:
        """Test parsing of a single file."""
        mocked_open.return_value.__enter__.return_value = [
            "Foo bar baz",
            "2021-05-27 07:52:50 [INFO] Tune 1 result is: [Accuracy (int8|fp32): 0.1234|0.12344, Duration (seconds) (int8|fp32): 5.6789|5.6789], Best tune result is: None",  # noqa: E501
            "2021-05-27 07:52:50 [INFO] Tune 2 result is: [Accuracy (int8|fp32): 0.99876|0.12344, Duration (seconds) (int8|fp32): 0.5432|5.6789], Best tune result is: [Accuracy: 0.99876, Duration (seconds): 0.5432]",  # noqa: E501
            "2021-05-27 07:52:27 [INFO] FP32 baseline is: [Accuracy: 0.12344, Duration (seconds): 5.6789]",  # noqa: E501
            "2021-05-27 07:52:27 [INFO] Save quantized model to /foo/bar/baz.pb.",
            "a b c d",
        ]
        optimization_parser = OptimizationParser(["file.log"])
        parsed = optimization_parser.process()
        self.assertEqual(
            {
                "acc_input_model": 0.1234,
                "acc_optimized_model": 0.9988,
                "path_optimized_model": "/foo/bar/baz.pb",
            },
            parsed,
        )

    @patch("builtins.open", create=True)
    def test_parsing_file_with_duplicated_lines(self, mocked_open: MagicMock) -> None:
        """Test parsing of a file with duplicated lines."""
        mocked_open.return_value.__enter__.return_value = [
            "Foo bar baz",
            "2021-05-27 07:52:50 [INFO] Tune 1 result is: [Accuracy (int8|fp32): 0.1234|0.12344, Duration (seconds) (int8|fp32): 5.6789|5.6789], Best tune result is: None",  # noqa: E501
            "2021-05-27 07:52:50 [INFO] Tune 2 result is: [Accuracy (int8|fp32): 0.2345|0.12344, Duration (seconds) (int8|fp32): 0.6789|5.6789], Best tune result is: [Accuracy: 0.2345, Duration (seconds): 0.6789]",  # noqa: E501
            "2021-05-27 07:52:27 [INFO] FP32 baseline is: [Accuracy: 0.12344, Duration (seconds): 5.6789]",  # noqa: E501
            "2021-05-27 07:52:27 [INFO] Save quantized model to /a/b/c.pb.",
            "a b c d",
            "2021-05-27 07:52:50 [INFO] Tune 1 result is: [Accuracy (int8|fp32): 0.1234|0.12344, Duration (seconds) (int8|fp32): 5.6789|5.6789], Best tune result is: None",  # noqa: E501
            "2021-05-27 07:52:50 [INFO] Tune 2 result is: [Accuracy (int8|fp32): 0.99876|0.12344, Duration (seconds) (int8|fp32): 0.5432|5.6789], Best tune result is: [Accuracy: 0.99876, Duration (seconds): 0.5432]",  # noqa: E501
            "2021-05-27 07:52:27 [INFO] FP32 baseline is: [Accuracy: 0.12344, Duration (seconds): 5.6789]",  # noqa: E501
            "2021-05-27 07:52:27 [INFO] Save quantized model to /foo/bar/baz.pb.",
        ]
        tuning_parser = OptimizationParser(["file.log"])
        parsed = tuning_parser.process()
        self.assertEqual(
            {
                "acc_input_model": 0.1234,
                "acc_optimized_model": 0.9988,
                "path_optimized_model": "/foo/bar/baz.pb",
            },
            parsed,
        )
class TestBenchmarkParser(unittest.TestCase):
    """BenchmarkParser tests."""

    def test_parsing_empty_file_list(self) -> None:
        """Test parsing with no files."""
        benchmark_parser = BenchmarkParserFactory.get_parser(
            benchmark_mode="performance",
            logs=[],
        )
        parsed = benchmark_parser.process()
        self.assertEqual({}, parsed)

    @patch("builtins.open", create=True)
    def test_parsing_empty_files(self, mocked_open: MagicMock) -> None:
        """Test parsing of files without any lines."""
        mocked_open.return_value.__enter__.return_value = []
        benchmark_parser = BenchmarkParserFactory.get_parser(
            benchmark_mode="performance",
            logs=["file.log"],
        )
        parsed = benchmark_parser.process()
        self.assertEqual({}, parsed)

    @patch("builtins.open", create=True)
    def test_parsing_simple_file(self, mocked_open: MagicMock) -> None:
        """Test parsing of a single file."""
        mocked_open.return_value.__enter__.return_value = [
            "Foo bar baz",
            "performance mode benchmark result:",
            "2021-04-14 05:16:09 [INFO] Accuracy is 0.1234567",
            "2021-04-14 05:16:09 [INFO] Batch size = 1234",
            "2021-04-14 05:16:09 [INFO] Latency: 2.34567 ms",
            "2021-04-14 05:16:09 [INFO] Throughput: 123.45678 images/sec",
            "2021-04-14 05:16:10 [INFO] performance mode benchmark done!",
            "2021-04-14 05:16:10 [INFO]",
            "performance mode benchmark result:",
            "a b c d",
        ]
        benchmark_parser = BenchmarkParserFactory.get_parser(
            benchmark_mode="performance",
            logs=["file.log"],
        )
        parsed = benchmark_parser.process()
        self.assertEqual(
            {
                "throughput": 123.4568,
                "latency": 2.3457,
            },
            parsed,
        )

    @patch("builtins.open", create=True)
    def test_parsing_simple_file_with_many_entries(self, mocked_open: MagicMock) -> None:
        """Test parsing of a file with multiple latency and throughput entries."""
        mocked_open.return_value.__enter__.return_value = [
            "Foo bar baz",
            "performance mode benchmark result:",
            "2021-04-14 05:16:09 [INFO] Accuracy is 0.1234567",
            "2021-04-14 05:16:09 [INFO] Batch size = 1234",
            "2021-04-14 05:16:09 [INFO] Latency: 1.0 ms",
            "2021-04-14 05:16:09 [INFO] Latency: 2.0 ms",
            "2021-04-14 05:16:09 [INFO] Latency: 3.0 ms",
            "2021-04-14 05:16:09 [INFO] Latency: 4.0 ms",
            "2021-04-14 05:16:09 [INFO] Latency: 5.0 ms",
            "2021-04-14 05:16:09 [INFO] Latency: 6.0 ms",
            "2021-04-14 05:16:09 [INFO] Throughput: 1.0 images/sec",
            "2021-04-14 05:16:09 [INFO] Throughput: 2.0 images/sec",
            "2021-04-14 05:16:09 [INFO] Throughput: 3.0 images/sec",
            "2021-04-14 05:16:09 [INFO] Throughput: 4.0 images/sec",
            "2021-04-14 05:16:09 [INFO] Throughput: 5.0 images/sec",
            "2021-04-14 05:16:10 [INFO] performance mode benchmark done!",
            "2021-04-14 05:16:10 [INFO]",
            "performance mode benchmark result:",
            "a b c d",
        ]
        benchmark_parser = BenchmarkParserFactory.get_parser(
            benchmark_mode="performance",
            logs=["file.log"],
        )
        parsed = benchmark_parser.process()
        self.assertEqual(
            {
                "throughput": 15.0,  # throughput entries are summed
                "latency": 3.5,  # latency entries are averaged
            },
            parsed,
        )

if __name__ == "__main__":
    unittest.main()
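The aggregation these tests pin down (throughput summed, latency averaged, both rounded to four decimal places) can be sketched as a standalone helper. This is a hedged sketch, not the real BenchmarkParser; the regexes and function name are assumptions inferred from the log lines in the tests above.

```python
import re

# Patterns inferred from the mocked log lines above (assumptions, not the
# library's actual internals).
LATENCY_RE = re.compile(r"Latency:\s*([\d.]+)\s*ms")
THROUGHPUT_RE = re.compile(r"Throughput:\s*([\d.]+)")


def parse_performance_log(lines):
    """Collect latency/throughput entries and aggregate them as the tests expect."""
    latencies, throughputs = [], []
    for line in lines:
        latency_match = LATENCY_RE.search(line)
        if latency_match:
            latencies.append(float(latency_match.group(1)))
        throughput_match = THROUGHPUT_RE.search(line)
        if throughput_match:
            throughputs.append(float(throughput_match.group(1)))
    result = {}
    if throughputs:
        result["throughput"] = round(sum(throughputs), 4)  # summed
    if latencies:
        result["latency"] = round(sum(latencies) / len(latencies), 4)  # averaged
    return result
```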
#!/usr/bin/env python
# ooiservices/app/uframe/media.py (from asascience-open/ooi-ui-services)
"""
Support for images, utilized for image information and display.
Routes:
[GET] /get_cam_images # Get list of available camera thumbnails/images. (Filter or deprecate?)
[GET] /get_cam_image/<string:image_id>.png' # Get single thumbnail image. (Deprecate)
[GET] /get_image_thumbnail/<string:image_id>.png' # Get single thumbnail image.
Gallery routes
[GET] /get_data_slice/<string:rd>/<string:start>/<string:end>
Development/Test:
/get_first_date/rd
"""
__author__ = 'Edna Donoughe'
import os.path
from flask import (jsonify, current_app)
from ooiservices.app.uframe import uframe as api
from ooiservices.app.main.errors import bad_request
from ooiservices.app.uframe.media_tools import (_get_media_for_rd, validate_year_month, validate_year,
_get_base_url_thumbnails, _get_base_url_raw_data,
get_large_format_index)
from ooiservices.app.uframe.common_tools import (rds_get_sensor_type, rds_get_all_supported_sensor_types,
rds_get_instrument_info_url_root, rds_get_instrument_site_url_root)
from ooiservices.app.uframe.media_tools import (get_large_format_for_rd, get_large_format_index_for_rd,
get_file_list_for_large_format_data, fetch_media_for_rd)
from ooiservices.app.uframe.gallery_tools import (get_range_data, _get_data_day, _get_date_bound)
from ooiservices.app.uframe.config import get_rds_base_url
from ooiservices.app.uframe.vocab import (get_display_name_by_rd, get_vocab_dict_by_rd)
"""
Populate sensor type page...
[GET] /media/get_instrument_list/<string:sensor_type> # Get list of instruments for sensor type.
[GET] /media/get_instrument_information_link/<string:sensor_type> # Get info link for sensor type.
[GET] /media/get_instrument_site_link/<string:rd> # Get instrument site link (for reference designator).
[GET] /media/get_instrument_rds_link/<string:rd> # Get instrument raw data server link.
[GET] /get_base_url_thumbnails # Get thumbnail data store location.
[GET] /get_base_url_raw_data # Get url for raw data server
[GET] /media/get_data_bounds/<string:rd> # Get first and last date of data available for instrument.
[GET] /media/get_first_date/<string:rd> # Get first date of data available for instrument.
[GET] /media/get_last_date/<string:rd> # Get last date of data available for instrument.
[GET] /media/<string:rd>/<string:year>/<string:month> # Get all media for reference designator by year/month.
[GET] /media/<string:rd>/<string:year> # Hold. Due to performance related to response size.
[GET] /media/<string:rd>/date/<string:data_date> # Get media for a specific date. yyyy-mm-dd
[GET] /media/<string:rd>/range/<string:start>/<string:end> # Get media for range start/end (yyyy-mm-dd/yyyy-mm-dd)
Data availability routes (use to populate pull down year, month, day selections for reference designator.
[GET] /media/<string:rd>/da/map
[GET] /media/<string:rd>/da/years
[GET] /media/<string:rd>/da/<string:year>
[GET] /media/<string:rd>/da/<string:year>/<string:month>
"""
# Get thumbnail location on server.
@api.route('/media/get_base_url_thumbnails', methods=['GET'])
def media_get_base_url_thumbnails():
    """ Get location for cam images.
    Request: http://localhost:4000/uframe/media/get_base_url_thumbnails
    Response:
        {
          "base": "ooiservices/cam_images"
        }
    """
    try:
        data = _get_base_url_thumbnails()
        return jsonify({'base': data})
    except Exception as err:
        message = str(err)
        current_app.logger.info(message)
        return bad_request(message)
# Get raw data server root location.
@api.route('/media/get_base_url_raw_data', methods=['GET'])
def media_get_base_url_raw_data():
    """ Get root data directory for raw data server.
    Request: http://localhost:4000/uframe/media/get_base_url_raw_data
    Response:
        {
          "base": "https://rawdata.oceanobservatories.org/files/"
        }
    """
    try:
        data = _get_base_url_raw_data()
        return jsonify({'base': data})
    except Exception as err:
        message = str(err)
        current_app.logger.info(message)
        return bad_request(message)
#- - - - - - - - - - - - - - - - - - - - - - - -
# Get list of instruments by sensor type.
#- - - - - - - - - - - - - - - - - - - - - - - -
@api.route('/media/get_instrument_list/<string:sensor_type>', methods=['GET'])
def media_get_instruments_by_sensor_type(sensor_type):
""" Get list of all instruments in large format index by sensor type.
(Used to populate the instrument selection list for a specific sensor type page.)
For instance. for 'Still Camera (CAMDS) Archive' use: /media/get_instrument_list/CAMDS
sensor_type must be one of rds_valid_sensor_types: (common_tools.py)
['CAMDS', 'CAMHD', 'ZPL', 'HYDBB', 'HYDLF', 'OBS']
Sample Requests:
http://localhost:4000/uframe/media/get_instrument_list/CAMDS # Sprint 10
http://localhost:4000/uframe/media/get_instrument_list/CAMHD # Future
http://localhost:4000/uframe/media/get_instrument_list/ZPL # Available
http://localhost:4000/uframe/media/get_instrument_list/HYDBB # Available
http://localhost:4000/uframe/media/get_instrument_list/HYDLF # Future
http://localhost:4000/uframe/media/get_instrument_list/OBS # Future
Sample Requests with response results included.
http://localhost:4000/uframe/media/get_instrument_list/CAMDS
{
"instruments": [
"CE02SHBP-MJ01C-08-CAMDSB107",
"CE04OSBP-LV01C-06-CAMDSB106",
"RS01SBPS-PC01A-07-CAMDSC102",
"RS01SUM2-MJ01B-05-CAMDSB103",
"RS03AXPS-PC03A-07-CAMDSC302",
"RS03INT1-MJ03C-05-CAMDSB303"
]
}
http://localhost:4000/uframe/media/get_instrument_list/CAMHD
{
"instruments": [
"RS03ASHS-PN03B-06-CAMHDA301"
]
}
http://localhost:4000/uframe/media/get_instrument_list/ZPL
{
"instruments": [
"CE02SHBP-MJ01C-07-ZPLSCB101",
"CE04OSPS-PC01B-05-ZPLSCB102"
]
}
http://localhost:4000/uframe/media/get_instrument_list/HYDBB
{
"instruments": [
"CE02SHBP-LJ01D-11-HYDBBA106",
"CE04OSBP-LJ01C-11-HYDBBA105",
"RS01SBPS-PC01A-08-HYDBBA103",
"RS01SLBS-LJ01A-09-HYDBBA102",
"RS03AXBS-LJ03A-09-HYDBBA302",
"RS03AXPS-PC03A-08-HYDBBA303"
]
}
http://localhost:4000/uframe/media/get_instrument_list/HYDLF
{
"instruments": []
}
http://localhost:4000/uframe/media/get_instrument_list/OBS
{
"instruments": []
}
http://localhost:4000/uframe/media/get_instrument_list/OSMOI
{
"instruments": [
"RS01SUM2-MJ01B-OSMOIA101",
"RS03ASHS-MJ03B-OSMOIA301"
]
}
"""
try:
# Get valid sensor types.
#valid_sensor_types = rds_get_valid_sensor_types()
valid_sensor_types = rds_get_all_supported_sensor_types()
msg_valid_sensors = []
for s in valid_sensor_types:
msg_valid_sensors.append(s.replace('-',''))
if sensor_type not in valid_sensor_types and sensor_type not in msg_valid_sensors:
message = 'Invalid sensor type (%s), not one of valid sensor types: %s' % (sensor_type, msg_valid_sensors)
raise Exception(message)
# Get large format index and retrieve instrument list available.
data = get_large_format_index()
instruments_available = data.keys()
instruments = []
for instrument in instruments_available:
if sensor_type in instrument:
if instrument not in instruments:
instruments.append(instrument)
if instruments:
instruments.sort()
return jsonify({'instruments': instruments})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
#- - - - - - - - - - - - - - - - - - - - - - - -
# Get instrument information link by sensor type.
#- - - - - - - - - - - - - - - - - - - - - - - -
@api.route('/media/get_instrument_information_link/<string:sensor_type>', methods=['GET'])
def media_get_instrument_information_link_by_sensor_type(sensor_type):
""" Get instrument information link by sensor type.
(Used to provided the link for the instrument information display on sensor type page.)
For instance. for 'Still Camera (CAMDS) Archive' use: /media/get_instrument_list/CAMDS
sensor_type must be one of rds_valid_sensor_types: (common_tools.py)
['CAMDS', 'CAMHD', 'ZPL', 'HYDBB', 'HYDLF', 'OBS']
Sample Requests:
http://localhost:4000/uframe/media/get_instrument_information_link/CAMDS # Sprint 10
http://localhost:4000/uframe/media/get_instrument_information_link/CAMHD # Available
http://localhost:4000/uframe/media/get_instrument_information_link/ZPL # Available
http://localhost:4000/uframe/media/get_instrument_information_link/HYDBB # Available
http://localhost:4000/uframe/media/get_instrument_information_link/HYDLF # Available
http://localhost:4000/uframe/media/get_instrument_information_link/OBS # Available
Sample Requests with response results included.
http://localhost:4000/uframe/media/get_instrument_information_link/CAMDS
{
"instrument_information_url": "http://oceanobservatories.org/instruments/camds/"
}
http://localhost:4000/uframe/media/get_instrument_information_link/CAMHD
{
"instrument_information_url": "http://oceanobservatories.org/instruments/camhd/"
}
http://localhost:4000/uframe/media/get_instrument_information_link/ZPL
{
"instrument_information_url": "http://oceanobservatories.org/instruments/zpls/"
}
http://localhost:4000/uframe/media/get_instrument_information_link/HYDBB
{
"instrument_information_url": "http://oceanobservatories.org/instruments/hydbb/"
}
http://localhost:4000/uframe/media/get_instrument_information_link/HYDLF
{
"instrument_information_url": "http://oceanobservatories.org/instruments/hydlf/"
}
http://localhost:4000/uframe/media/get_instrument_information_link/OBS
{
"instrument_information_url": "http://oceanobservatories.org/instrument-class/obsbb/"
}
http://localhost:4000/uframe/media/get_instrument_information_link/OSMOI
{
"instrument_information_url": "http://oceanobservatories.org/instrument-series/osmoia/"
}
"""
try:
# Get valid sensor types.
valid_sensor_types = rds_get_all_supported_sensor_types()
msg_valid_sensors = []
for s in valid_sensor_types:
msg_valid_sensors.append(s.replace('-',''))
if sensor_type not in valid_sensor_types and sensor_type not in msg_valid_sensors:
message = 'Invalid sensor type (%s), not one of valid sensor types: %s' % (sensor_type, valid_sensor_types)
raise Exception(message)
# Get large format index and retrieve instrument list available.
root_url = rds_get_instrument_info_url_root()
suffix = ""
if sensor_type and sensor_type is not None:
if sensor_type == 'ZPL':
suffix = 'zpls'
elif sensor_type == 'OBS':
suffix = 'obsbb'
elif sensor_type == 'OSMOI':
suffix = 'osmoia'
else:
suffix = sensor_type.lower()
url = root_url + suffix
return jsonify({'instrument_information_url': url})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
#- - - - - - - - - - - - - - - - - - - - - - - -
# Get instrument site link by sensor type.
#- - - - - - - - - - - - - - - - - - - - - - - -
@api.route('/media/get_instrument_site_link/<string:rd>', methods=['GET'])
def media_get_instrument_site_link(rd):
""" Get instrument site link from reference designator.
(Used to provided the link for the site display on sensor type page.)
Sample Requests:
http://localhost:4000/uframe/media/get_instrument_site_link/CE02SHBP-MJ01C-08-CAMDSB107 # Sprint 10
http://localhost:4000/uframe/media/get_instrument_site_link/RS03ASHS-PN03B-06-CAMHDA301 # Available
http://localhost:4000/uframe/media/get_instrument_site_link/CE04OSPS-PC01B-05-ZPLSCB102 # Available
http://localhost:4000/uframe/media/get_instrument_site_link/CE04OSBP-LJ01C-11-HYDBBA105 # Available
Sample Requests with response results included.
http://localhost:4000/uframe/media/get_instrument_site_link/CE02SHBP-MJ01C-08-CAMDSB107
{
"instrument_site_url": "http://oceanobservatories.org/site/ce02shbp"
}
http://localhost:4000/uframe/media/get_instrument_site_link/RS03ASHS-PN03B-06-CAMHDA301
{
"instrument_site_url": "http://oceanobservatories.org/site/rs03ashs"
}
http://localhost:4000/uframe/media/get_instrument_site_link/CE04OSPS-PC01B-05-ZPLSCB102
{
"instrument_site_url": "http://oceanobservatories.org/site/ce04osps"
}
http://localhost:4000/uframe/media/get_instrument_site_link/CE04OSBP-LJ01C-11-HYDBBA105
{
"instrument_site_url": "http://oceanobservatories.org/site/ce04osbp"
}
http://localhost:4000/uframe/media/get_instrument_site_link/RS01SUM2-MJ01B-OSMOIA101
{
"instrument_site_url": "http://oceanobservatories.org/site/rs01sum2"
}
"""
try:
# Get site from reference designator.
if '-' not in rd:
message = 'Malformed instrument reference designator provided (%r).' % rd
raise Exception(message)
site, remainder = rd.split('-', 1)
if site and site is not None:
site = site.lower()
else:
message = 'Site is invalid, check reference designator provided: %r' % rd
raise Exception(message)
url_root = rds_get_instrument_site_url_root()
url = url_root + site
return jsonify({'instrument_site_url': url})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
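The site-link derivation above can be condensed into a standalone sketch: take the site portion of the reference designator and append it, lowercased, to the site URL root. The default `url_root` here is an assumption taken from the docstring examples; the real value comes from `rds_get_instrument_site_url_root()`.

```python
def instrument_site_url(rd, url_root="http://oceanobservatories.org/site/"):
    """Hypothetical helper: derive the instrument site link from a reference designator."""
    if '-' not in rd:
        raise ValueError('Malformed instrument reference designator provided (%r).' % rd)
    site, _remainder = rd.split('-', 1)  # site is the portion before the first hyphen
    return url_root + site.lower()
```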
#- - - - - - - - - - - - - - - - - - - - - - - -
# Get instrument rds link by reference designator.
#- - - - - - - - - - - - - - - - - - - - - - - -
@api.route('/media/get_instrument_rds_link/<string:rd>', methods=['GET'])
def media_get_instrument_rds_link(rd):
""" Get instrument raw data server (rds) link for reference designator.
(Used to provided the link for the site display on sensor type page.)
Sample Requests:
http://localhost:4000/uframe/media/get_instrument_rds_link/CE02SHBP-MJ01C-08-CAMDSB107 # Sprint 10
http://localhost:4000/uframe/media/get_instrument_rds_link/RS03ASHS-PN03B-06-CAMHDA301 # Available
http://localhost:4000/uframe/media/get_instrument_rds_link/CE04OSPS-PC01B-05-ZPLSCB102 # Available
http://localhost:4000/uframe/media/get_instrument_rds_link/CE04OSBP-LJ01C-11-HYDBBA105 # Available
Sample Requests with response results included.
http://localhost:4000/uframe/media/get_instrument_rds_link/CE02SHBP-MJ01C-08-CAMDSB107
{
"instrument_rds_url": "https://rawdata.oceanobservatories.org/files/CE02SHBP/MJ01C/08-CAMDSB107"
}
http://localhost:4000/uframe/media/get_instrument_rds_link/RS03ASHS-PN03B-06-CAMHDA301
{
"instrument_rds_url": "https://rawdata.oceanobservatories.org/files/RS03ASHS/PN03B/06-CAMHDA301"
}
http://localhost:4000/uframe/media/get_instrument_rds_link/CE04OSPS-PC01B-05-ZPLSCB102
{
"instrument_rds_url": "https://rawdata.oceanobservatories.org/files/CE04OSPS/PC01B/05-ZPLSCB102"
}
http://localhost:4000/uframe/media/get_instrument_rds_link/CE04OSBP-LJ01C-11-HYDBBA105
{
"instrument_rds_url": "https://rawdata.oceanobservatories.org/files/CE04OSBP/LJ01C/11-HYDBBA105"
}
http://localhost:4000/uframe/media/get_instrument_rds_link/RS01SUM2-MJ01B-OSMOIA101
{
"instrument_rds_url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101"
}
"""
try:
# Quick check reference designator.
if '-' not in rd:
message = 'Malformed instrument reference designator provided (%r).' % rd
raise Exception(message)
# Check availability of data for reference designator.
check_data = get_large_format_index_for_rd(rd)
if not check_data or check_data is None:
message = 'Data not currently available for instrument \'%s\'. ' % rd
raise Exception(message)
# Format and return url.
site, node, sensor = rd.split('-', 2)
suffix = '/'.join([site, node, sensor])
url_root = get_rds_base_url()
url = '/'.join([url_root, suffix])
return jsonify({'instrument_rds_url': url})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
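The URL construction above splits the reference designator into at most three parts so the sensor segment (e.g. "08-CAMDSB107") keeps its internal hyphen. A hedged sketch, with the base URL assumed from the docstring examples rather than taken from `get_rds_base_url()`:

```python
def instrument_rds_url(rd, base="https://rawdata.oceanobservatories.org/files"):
    """Hypothetical helper: build the raw data server link for a reference designator."""
    # maxsplit=2 keeps the sensor part (e.g. "08-CAMDSB107") intact.
    site, node, sensor = rd.split('-', 2)
    return '/'.join([base, site, node, sensor])
```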
#- - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Data functions (all media types).
#- - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Get first_date from large format data.
@api.route('/media/get_first_date/<string:rd>', methods=['GET'])
def media_get_first_date(rd):
    """ Get first date of available data for reference designator.
    Request: http://localhost:4000/uframe/media/get_first_date/CE02SHBP-MJ01C-08-CAMDSB107
    Response:
        {
          "first_date": "2009-12-07"
        }
    """
    try:
        data = get_large_format_index_for_rd(rd)
        first_date = _get_date_bound(data, bound='first')
        return jsonify({'first_date': first_date})
    except Exception as err:
        message = str(err)
        current_app.logger.info(message)
        return bad_request(message)
# Get last_date from large format data.
@api.route('/media/get_last_date/<string:rd>', methods=['GET'])
def media_get_last_date(rd):
    """ Get last date of available data for reference designator.
    Request: http://localhost:4000/uframe/media/get_last_date/CE02SHBP-MJ01C-08-CAMDSB107
    Response:
        {
          "last_date": "2016-10-03"
        }
    """
    try:
        data = get_large_format_index_for_rd(rd)
        last_date = _get_date_bound(data, bound='last')
        return jsonify({'last_date': last_date})
    except Exception as err:
        message = str(err)
        current_app.logger.info(message)
        return bad_request(message)
# Get first and last date from large format data for reference designator.
@api.route('/media/get_data_bounds/<string:rd>', methods=['GET'])
def media_get_data_bounds(rd):
    """ Get first date and last date for reference designator.
    Request: http://localhost:4000/uframe/media/get_data_bounds/CE02SHBP-MJ01C-08-CAMDSB107
    Response:
        {
          "bounds": {
            "first_date": "2009-12-07",
            "last_date": "2016-10-03"
          }
        }
    Request: http://localhost:4000/uframe/media/get_data_bounds/RS01SUM2-MJ01B-OSMOIA101
    Response:
        {
          "bounds": {
            "first_date": "2014-09-02",
            "last_date": "2015-07-14"
          }
        }
    """
    try:
        data = get_large_format_index_for_rd(rd)
        first_date = _get_date_bound(data, bound='first')
        last_date = _get_date_bound(data, bound='last')
        result = {'bounds': {'first_date': first_date, 'last_date': last_date}}
        return jsonify(result)
    except Exception as err:
        message = str(err)
        current_app.logger.info(message)
        return bad_request(message)
# Get data availability map (dict of years, months and days) for reference designator.
@api.route('/media/<string:rd>/da/map')
def media_get_da_map_for_rd(rd):
""" Get all large format file indices for reference designator, year and month.
Request: http://localhost:4000/uframe/media/CE02SHBP-MJ01C-08-CAMDSB107/da/map
Response:
{
"map": {
"2009": {
"12": [
7
]
},
"2010": {
"01": [
1
]
},
"2015": {
"08": [
18
]
},
"2016": {
"01": [
28
],
"02": [
2,
3,
4,
5,
8,
9,
10,
12
],
"05": [
10,
11,
23
],
"06": [
29
],
"07": [
26,
28
],
"08": [
2,
8,
12,
18
],
"09": [
12
],
"10": [
3
]
}
}
}
Request:
http://localhost:4000/uframe/media/CE02SHBP-MJ01C-07-ZPLSCB101/da/map
Response:
{
"map": {
"2014": {
"09": [
24,
25,
26,
27,
28,
29,
30
],
"10": [
1,
2,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31
],
. . .
}
. . .
}
Request: http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/da/map
Response:
{
"map": {
"2015": {
"09": [
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22
],
"10": [
20,
21,
22,
23
],
"11": [
30
],
"12": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16
]
},
"2016": {
"01": [
4,
5,
7,
8,
9,
10,
11,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
25,
26,
28,
29,
30,
31
],
. . .
}
. . .
}
Request: http://localhost:4000/uframe/media/RS01SUM2-MJ01B-OSMOIA101/da/map
Response:
{
"map": {
"2014": {
"09": [
2,
9
]
},
"2015": {
"07": [
14
]
}
}
}
"""
try:
# Get large format index data..
data = get_large_format_index_for_rd(rd)
# Get reference designator data.
years = [str(key) for key in data.keys()]
# Get years in the reference designator data.
result = {}
if years and years is not None:
if years and years is not None:
for year in years:
result[year] = {}
months = [str(key) for key in data[year].keys()]
for month in months:
days = data[year][month]
if days:
_days = []
for day in days:
int_day = int(day)
_days.append(int_day)
_days.sort()
result[year][month] = _days
return jsonify({'map': result}), 200
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
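The year/month/day reshaping done by this route can be condensed into a small pure function. A hedged sketch: the index structure `{year: {month: [day, ...]}}` with string keys and string days is assumed from the route above, and `build_da_map` is a hypothetical name, not part of this module.

```python
def build_da_map(index):
    """Hypothetical helper: normalize a large-format index into the /da/map shape."""
    result = {}
    for year, months in index.items():
        # Days arrive as strings in the index; the map exposes sorted ints.
        result[str(year)] = {
            str(month): sorted(int(day) for day in days)
            for month, days in months.items()
        }
    return result
```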
# Get [list of] years of available data for reference designator.
@api.route('/media/<string:rd>/da/years')
def media_get_da_for_rd_years(rd):
""" Get all large format index file indices for reference designator, year and month.
Get [list of] years of available data for reference designator.
Request: http://localhost:4000/uframe/media/CE02SHBP-MJ01C-08-CAMDSB107/da/years
Response:
{
"years": [
2009,
2010,
2015,
2016
]
}
Request: http://localhost:4000/uframe/media/CE02SHBP-MJ01C-07-ZPLSCB101/da/years
Response:
{
"years": [
2014,
2015,
2016,
2017
]
}
Request: http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/da/years
Response:
{
"years": [
2015,
2016,
2017
]
}
Request: http://localhost:4000/uframe/media/RS01SUM2-MJ01B-OSMOIA101/da/years
Response:
{
"years": [
2014,
2015
]
}
"""
try:
# Get large format index data..
data = get_large_format_index_for_rd(rd)
# Get reference designator data.
years = [str(key) for key in data.keys()]
if years:
years.sort()
# Get years in the reference designator data.
result = []
if years and years is not None:
years.sort()
_years = []
for year in years:
int_year = int(year)
_years.append(int_year)
_years.sort()
result = _years
return jsonify({'years': result}), 200
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
# Get [list of] months of available data for reference designator and year.
@api.route('/media/<string:rd>/da/<string:year>')
def media_get_da_for_rd_year(rd, year):
""" Get all large format file indices for reference designator, year and month.
Get [list of] months of available data for reference designator and year.
Request: http://localhost:4000/uframe/media/CE02SHBP-MJ01C-08-CAMDSB107/da/2015
Response:
{
"months": [
8
]
}
http://localhost:4000/uframe/media/CE02SHBP-MJ01C-07-ZPLSCB101/da/2015
{
"months": [
1,
2,
3,
4,
8,
9,
10,
11,
12
]
}
http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/da/2015
{
"months": [
9,
10,
11,
12
]
}
http://localhost:4000/uframe/media/RS01SUM2-MJ01B-OSMOIA101/da/2014
{
"months": [
9
]
}
"""
try:
_year = validate_year(year)
# Get large format index data..
data = get_large_format_index_for_rd(rd)
# Get reference designator data.
years = [str(key) for key in data.keys()]
# Get years in the reference designator data.
result = []
if years and years is not None:
# If year requested has available data, present months.
if _year in years:
months = data[_year].keys()
_months = []
if months:
for m in months:
_months.append(int(m))
_months.sort()
result = _months
return jsonify({'months': result}), 200
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
# Get [list of] days of available data for reference designator, year and month.
@api.route('/media/<string:rd>/da/<string:year>/<string:month>')
def media_get_da_for_rd_years_months(rd, year, month):
""" Get available for reference designator, year and month.
Get [list of] days of available data for reference designator, year and month.
http://localhost:4000/uframe/media/CE02SHBP-MJ01C-08-CAMDSB107/da/2015/08
{
"days": [
18
]
}
-- Get days for year and month
http://localhost:4000/uframe/media/CE02SHBP-MJ01C-07-ZPLSCB101/da/2015/10
{
"days": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31
]
}
http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/da/2015/10
http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/da/2015/09
http://localhost:4000/uframe/media/RS01SUM2-MJ01B-OSMOIA101/da/2014/09
{
"days": [
2,
9
]
}
"""
result = None
try:
_year, _month = validate_year_month(year, month)
data = get_large_format_index_for_rd(rd)
if _year in data:
if _month in data[_year]:
days = data[_year][_month] # days is a list of strings
# Convert list of str days into list of int days
if days:
_days = []
for d in days:
_days.append(int(d))
_days.sort()
result = _days
return jsonify({'days': result}), 200
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
#===========================================================================
# Development - under construction, review and discussion.
# Get list of available data from raw data server for reference designator.
@api.route('/media/<string:rd>/<string:year>/<string:month>', methods=['GET'])
def get_media_for_rd(rd, year, month):
""" Get list of all data for reference designator, year, month; return list of dicts descending by date.
Request: http://localhost:4000/uframe/media/CE02SHBP-MJ01C-08-CAMDSB107/2015/08
Response:
{
"media": [
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-08-18",
"datetime": "20150818T214937.543Z",
"filename": "CE02SHBP-MJ01C-08-CAMDSB107_20150818T214937,543Z.png",
"reference_designator": "CE02SHBP-MJ01C-08-CAMDSB107",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/CE02SHBP/MJ01C/08-CAMDSB107/2015/08/18/CE02SHBP-MJ01C-08-CAMDSB107_20150818T214937,543Z.png"
}
]
}
General response format:
{
"media": [{}, {}, {}, ...]
}
where each dict contains:
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "link-to-data-file-on-remote-data-server",
"date": "2017-01-16",
"datetime": "20170116T000000.000Z",
"filename": "some-actual-filename-on-remote-data-server.png",
"reference_designator": "CE02SHBP-MJ01C-07-ZPLSCB101",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": "full-url-to-png-image-file-on-remote-data-server"
}
Notes:
1. For ZPL sensors where only a raw data file exists, the url value is null.
2. For HYD sensors where only an mseed data file exists, the url value is null.
Request examples:
1a. http://localhost:4000/uframe/media/CE02SHBP-MJ01C-07-ZPLSCB101/2017/01
1b. http://localhost:4000/uframe/media/CE04OSPS-PC01B-05-ZPLSCB102/2017/01
X 1c. http://localhost:4000/uframe/media/CE04OSPS-PC01B-05-ZPLSCB102/-1/-1 (provides current year and month)
2a. http://localhost:4000/uframe/media/RS01SLBS-LJ01A-09-HYDBBA102/2017/01
2b. http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/2017/01
2c. http://localhost:4000/uframe/media/RS01SLBS-LJ01A-09-HYDBBA102/-1/-1 (provides current year and month)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
3a. http://localhost:4000/uframe/media/RS03ASHS-PN03B-06-CAMHDA301/2017/01 ** On hold.
3b. http://localhost:4000/uframe/media/RS03ASHS-PN03B-06-CAMHDA301/-1/-1 ** On hold.
Sample request/responses:
1a. (png, raw) http://localhost:4000/uframe/media/CE02SHBP-MJ01C-07-ZPLSCB101/2017/01
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/CE02SHBP/MJ01C/07-ZPLSCB101/2017/01/16/CE02SHBP-MJ01C-07-ZPLSCB101_OOI-D20170116-T000000.raw",
"date": "2017-01-16",
"datetime": "20170116T000000.000Z",
"filename": "CE02SHBP-MJ01C-07-ZPLSCB101_OOI-D20170116-T000000.png",
"reference_designator": "CE02SHBP-MJ01C-07-ZPLSCB101",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/CE02SHBP/MJ01C/07-ZPLSCB101/2017/01/16/CE02SHBP-MJ01C-07-ZPLSCB101_OOI-D20170116-T000000.png"
},
. . .
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/CE02SHBP/MJ01C/07-ZPLSCB101/2017/01/13/CE02SHBP-MJ01C-07-ZPLSCB101_OOI-D20170113-T000000.raw",
"date": "2017-01-13",
"datetime": "20170113T000000.000Z",
"filename": "CE02SHBP-MJ01C-07-ZPLSCB101_OOI-D20170113-T000000.png",
"reference_designator": "CE02SHBP-MJ01C-07-ZPLSCB101",
"thumbnail": "<server_data_folder>/images/CE02SHBP-MJ01C-07-ZPLSCB101_OOI-D20170113-T000000_thumbnail.png",
"url": "https://rawdata.oceanobservatories.org/files/CE02SHBP/MJ01C/07-ZPLSCB101/2017/01/13/CE02SHBP-MJ01C-07-ZPLSCB101_OOI-D20170113-T000000.png"
},
. . .
2a. (mseed) http://localhost:4000/uframe/media/RS01SLBS-LJ01A-09-HYDBBA102/2017/01
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SLBS/LJ01A/09-HYDBBA102/2017/01/25/OO-HYVM1--YDH-2017-01-25T00:00:00.000000.mseed",
"date": "2017-01-25",
"datetime": "20170125T000000.000000Z",
"filename": "OO-HYVM1--YDH-2017-01-25T00:00:00.000000.mseed",
"reference_designator": "RS01SLBS-LJ01A-09-HYDBBA102",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": null
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SLBS/LJ01A/09-HYDBBA102/2017/01/25/OO-HYVM1--YDH-2017-01-25T00:05:00.000000.mseed",
"date": "2017-01-25",
"datetime": "20170125T000500.000000Z",
"filename": "OO-HYVM1--YDH-2017-01-25T00:05:00.000000.mseed",
"reference_designator": "RS01SLBS-LJ01A-09-HYDBBA102",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": null
},
. . .
2b. (mseed) http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/2017/01
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2017/01/25/OO-HYVM2--YDH-2017-01-25T00:00:00.000000.mseed",
"date": "2017-01-25",
"datetime": "20170125T000000.000000Z",
"filename": "OO-HYVM2--YDH-2017-01-25T00:00:00.000000.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": null
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2017/01/25/OO-HYVM2--YDH-2017-01-25T00:05:00.000000.mseed",
"date": "2017-01-25",
"datetime": "20170125T000500.000000Z",
"filename": "OO-HYVM2--YDH-2017-01-25T00:05:00.000000.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": null
},
. . .
(mseed and png) Another example: an mseed file with an image file but no thumbnail, followed by an mseed file with neither thumbnail nor image.
http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/2016/01
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2016/01/31/OO-HYVM2--YDH-2016-01-31T01:50:00.000000.mseed",
"date": "2016-01-31",
"datetime": "20160131T015000.000000Z",
"filename": "OO-HYVM2--YDH-2016-01-31T01:50:00.000000.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2016/01/31/OO-HYVM2--YDH-2016-01-31T01:50:00.000000.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2016/01/31/OO-HYVM2--YDH-2016-01-31T01:55:00.000015.mseed",
"date": "2016-01-31",
"datetime": "20160131T015500.000015Z",
"filename": "OO-HYVM2--YDH-2016-01-31T01:55:00.000015.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "<server_data_folder>/images/imageNotFound404.png",
"url": null
},
. . .
4. CAMHD response output goes here. (On hold.)
"""
try:
_year, _month = validate_year_month(year, month)
data = _get_media_for_rd(rd, year=_year, month=_month)
return jsonify({'media': data})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
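The request examples above show that `-1/-1` selects the current year and month. A minimal sketch of what `validate_year_month` is assumed to do, based only on that behavior (the real validator lives elsewhere in this module and may differ):

```python
from datetime import datetime

def validate_year_month_sketch(year, month):
    """Return (year, month) as zero-padded strings; '-1' substitutes the
    current UTC year or month, per the '/-1/-1' request examples."""
    now = datetime.utcnow()
    _year = '%04d' % (now.year if year == '-1' else int(year))
    _month = '%02d' % (now.month if month == '-1' else int(month))
    if not 1 <= int(_month) <= 12:
        raise ValueError('Invalid month: %s' % month)
    return _year, _month

assert validate_year_month_sketch('2017', '1') == ('2017', '01')
```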
'''
# Get media for reference designator for a given year.
@api.route('/media/<string:rd>/year/<string:year>')
def media_get_files_for_rd_year(rd, year):
""" Get all large format file indices for reference designator and year.
http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/da/map
http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/year/2015
{
"media": [
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2015/09/03/OO-HYVM2--YDH.2015-09-03T23:53:17.272250.mseed",
"date": "2015-09-03",
"datetime": "20150903T235317272250Z",
"filename": "OO-HYVM2--YDH.2015-09-03T23:53:17.272250.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": null
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2015/09/03/OO-HYVM2--YDH.2015-09-03T23:55:00.000000.mseed",
"date": "2015-09-03",
"datetime": "20150903T235500000000Z",
"filename": "OO-HYVM2--YDH.2015-09-03T23:55:00.000000.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": null
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2015/09/04/OO-HYVM2--YDH.2015-09-04T00:00:00.000000.mseed",
"date": "2015-09-04",
"datetime": "20150904T000000000000Z",
"filename": "OO-HYVM2--YDH.2015-09-04T00:00:00.000000.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": null
},
. . .
"""
#
# Working here ==========================================================
#
# Need to limit the query based on outstanding data items.
#
# The following data queries will cripple interface - need a point to break on...
#
# http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/da/map
#
# http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/year/2016
#
debug = True
try:
_year = validate_year(year)
# Get large format cache
if debug: print '\n debug -- Entered test_get_media_year: %s for %r' % (rd, _year)
if debug: print '\n debug -- Get get_large_format_for_rd for rd: %s' % rd
data = get_large_format_for_rd(rd)
if _year not in data:
message = 'Invalid year (%s) for %s data set.' % (str(_year), rd)
raise Exception(message)
# Get year data from large format cache.
data_slice = {}
data_slice[_year] = {}
data_slice[_year] = data[_year]
# Determine sensor type using reference designator.
#sensor_type = 'CAMDS'
sensor_type = rds_get_sensor_type(rd)
if debug: print '\n debug -- Get get_file_list_for_large_format_data for sensor_type: %s' % sensor_type
result = get_file_list_for_large_format_data(data_slice, rd, sensor_type)
# for data segment, process for media output.
media = fetch_media_for_rd(result, rd)
return jsonify({'media': media}), 200
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
'''
# Modified.
# Get data for reference designator and specific date (yyyy-mm-dd).
@api.route('/media/<string:rd>/date/<string:data_date>', methods=['GET'])
def media_get_day(rd, data_date):
""" Get data slice for date from rd/da/map; process for sensor type identified in rd.
Get data for reference designator and specific date (yyyy-mm-dd).
Request: http://localhost:4000/uframe/media/CE02SHBP-MJ01C-08-CAMDSB107/date/2015-08-18
Response:
{
"media": [
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-08-18",
"datetime": "20150818T214937.543Z",
"filename": "CE02SHBP-MJ01C-08-CAMDSB107_20150818T214937,543Z.png",
"reference_designator": "CE02SHBP-MJ01C-08-CAMDSB107",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/CE02SHBP/MJ01C/08-CAMDSB107/2015/08/18/CE02SHBP-MJ01C-08-CAMDSB107_20150818T214937,543Z.png"
}
]
}
Other examples....
Request: http://localhost:4000/uframe/media/RS01SBPS-PC01A-08-HYDBBA103/date/2016-12-31
Response:
{
"media": [
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2016/12/31/OO-HYVM2--YDH-2016-12-31T00:00:00.000016.mseed",
"date": "2016-12-31",
"datetime": "20161231T000000.000016Z",
"filename": "OO-HYVM2--YDH-2016-12-31T00:00:00.000016.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": null
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SBPS/PC01A/08-HYDBBA103/2016/12/31/OO-HYVM2--YDH-2016-12-31T00:05:00.000015.mseed",
"date": "2016-12-31",
"datetime": "20161231T000500.000015Z",
"filename": "OO-HYVM2--YDH-2016-12-31T00:05:00.000015.mseed",
"reference_designator": "RS01SBPS-PC01A-08-HYDBBA103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": null
},
. . .
]
}
Request: http://localhost:4000/uframe/media/RS01SUM2-MJ01B-OSMOIA101/date/2014-09-02
{
"media": [
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2014-09-02",
"datetime": "20140902T143401.000Z",
"filename": "OSMOIA101_20140902T143401_UTC_Image_ver_1-00.jpg",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20140902T143401_UTC_Image_ver_1-00.jpg"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2014-09-02",
"datetime": "20140902T144932.000Z",
"filename": "OSMOIA101_20140902T144932_UTC_Image_ver_1-00.jpg",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20140902T144932_UTC_Image_ver_1-00.jpg"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20140902_UTC_Analytical_Methods_ver_1-00.pdf",
"date": "2014-09-02",
"datetime": "20140902T000000.000Z",
"filename": "OSMOIA101_20140902_UTC_Analytical_Methods_ver_1-00.pdf",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": null
}
]
}
"""
import json
debug = False
try:
# Get large format cache
data = get_large_format_for_rd(rd)
if data is None:
message = 'No cache data available for reference designator: %s' % rd
raise Exception(message)
# Get date from large format cache.
data_slice = _get_data_day(data, data_date)
if debug: print '\n debug -- data_slice: %s' % json.dumps(data_slice, indent=4, sort_keys=True)
# Determine sensor type using reference designator.
sensor_type = rds_get_sensor_type(rd)
result = get_file_list_for_large_format_data(data_slice, rd, sensor_type)
if debug: print '\n debug -- result: %s' % json.dumps(result, indent=4, sort_keys=True)
# For data segment, process for media output.
media = fetch_media_for_rd(result, rd)
return jsonify({'media': media})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
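The large format cache is keyed year → month → day. A hedged sketch of the day slice that `_get_data_day` presumably performs (the helper name here and the exact cache shape are assumptions drawn from the `da/map` responses above):

```python
def get_data_day_sketch(data, data_date):
    """Slice a {year: {month: {day: [...]}}} cache down to a single
    yyyy-mm-dd day, preserving the nested shape expected downstream."""
    year, month, day = data_date.split('-')
    files = data.get(year, {}).get(month, {}).get(day)
    if files is None:
        raise ValueError('No data available for date: %s' % data_date)
    return {year: {month: {day: files}}}

cache = {'2015': {'08': {'18': ['CAMDSB107 image entries...']}}}
assert get_data_day_sketch(cache, '2015-08-18') == cache
```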
# Get data slice for reference designator, provide start and end date.
@api.route('/media/<string:rd>/range/<string:start>/<string:end>', methods=['GET'])
def media_get_data_slice_with_dates(rd, start, end):
""" Get data slice, provide start and end date, (using start/end values from rd/da/map).
For reference check the available dates to use for range.
Request:
http://localhost:4000/uframe/media/RS01SUM2-MJ01B-05-CAMDSB103/da/map
Response:
{
"map": {
"2015": {
"07": [
31
],
"08": [
4,
7,
21,
26,
31
],
"09": [
9,
11,
23
],
"10": [
6,
26
],
"11": [
5,
20
],
"12": [
7
]
},
"2016": {
"01": [
5,
7,
21
],
. . .
Request: http://localhost:4000/uframe/media/RS01SUM2-MJ01B-05-CAMDSB103/range/2015-12-07/2016-01-05
Response:
{
"media": [
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T215118.854Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T215118,854Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T215118,854Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T215119.165Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T215119,165Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T215119,165Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T215119.477Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T215119,477Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T215119,477Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220018.909Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220018,909Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220018,909Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220019.220Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220019,220Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220019,220Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220019.532Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220019,532Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220019,532Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220019.843Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220019,843Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220019,843Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220020.154Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220020,154Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220020,154Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220020.466Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220020,466Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220020,466Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220020.777Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220020,777Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220020,777Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220021.089Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220021,089Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220021,089Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220021.400Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220021,400Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220021,400Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220021.712Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220021,712Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220021,712Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220022.023Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220022,023Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220022,023Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T220022.334Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T220022,334Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T220022,334Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223311.078Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223311,078Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223311,078Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223311.389Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223311,389Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223311,389Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223311.700Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223311,700Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223311,700Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223312.012Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,012Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,012Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223312.323Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,323Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,323Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223312.635Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,635Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,635Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223312.946Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,946Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223312,946Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223313.258Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223313,258Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223313,258Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-12-07",
"datetime": "20151207T223313.569Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20151207T223313,569Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2015/12/07/RS01SUM2-MJ01B-05-CAMDSB103_20151207T223313,569Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2016-01-05",
"datetime": "20160105T012025.501Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20160105T012025,501Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2016/01/05/RS01SUM2-MJ01B-05-CAMDSB103_20160105T012025,501Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2016-01-05",
"datetime": "20160105T012025.812Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20160105T012025,812Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2016/01/05/RS01SUM2-MJ01B-05-CAMDSB103_20160105T012025,812Z.png"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2016-01-05",
"datetime": "20160105T012026.124Z",
"filename": "RS01SUM2-MJ01B-05-CAMDSB103_20160105T012026,124Z.png",
"reference_designator": "RS01SUM2-MJ01B-05-CAMDSB103",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/05-CAMDSB103/2016/01/05/RS01SUM2-MJ01B-05-CAMDSB103_20160105T012026,124Z.png"
}
]
}
Request: http://localhost:4000/uframe/media/RS01SUM2-MJ01B-OSMOIA101/range/2014-09-02/2015-07-14
Response:
{
"media": [
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2014-09-02",
"datetime": "20140902T143401.000Z",
"filename": "OSMOIA101_20140902T143401_UTC_Image_ver_1-00.jpg",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20140902T143401_UTC_Image_ver_1-00.jpg"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2014-09-02",
"datetime": "20140902T144932.000Z",
"filename": "OSMOIA101_20140902T144932_UTC_Image_ver_1-00.jpg",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20140902T144932_UTC_Image_ver_1-00.jpg"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20140902_UTC_Analytical_Methods_ver_1-00.pdf",
"date": "2014-09-02",
"datetime": "20140902T000000.000Z",
"filename": "OSMOIA101_20140902_UTC_Analytical_Methods_ver_1-00.pdf",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": null
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2014-09-09",
"datetime": "20140909T034403.000Z",
"filename": "OSMOIA101_20140909T034403_UTC_Image_ver_1-00.jpg",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20140909T034403_UTC_Image_ver_1-00.jpg"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-07-14",
"datetime": "20150714T175226.000Z",
"filename": "OSMOIA101_20150714T175226_UTC_Image_ver_1-00.jpg",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20150714T175226_UTC_Image_ver_1-00.jpg"
},
{
"baseUrl": "/api/uframe/get_cam_image/",
"data_link": null,
"date": "2015-07-14",
"datetime": "20150714T175957.000Z",
"filename": "OSMOIA101_20150714T175957_UTC_Image_ver_1-00.jpg",
"reference_designator": "RS01SUM2-MJ01B-OSMOIA101",
"thumbnail": "ooiservices/cam_images/imageNotFound404.png",
"url": "https://rawdata.oceanobservatories.org/files/RS01SUM2/MJ01B/OSMOIA101/2014_deployment/metadata/OSMOIA101_20150714T175957_UTC_Image_ver_1-00.jpg"
}
]
}
"""
debug = False
import json
try:
# Validate start and end format (yyyy-mm-dd, bwe: '2015-12-07', '2016-01-05')
data = get_large_format_for_rd(rd)
if not data or data is None:
message = 'Data is not defined (empty or null), unable to fulfill get_data_slice request.'
return bad_request(message)
data_slice = get_range_data(data, start, end)
if debug: print '\n debug -- data_slice: ', json.dumps(data_slice, indent=4, sort_keys=True)
# Determine sensor type using reference designator.
sensor_type = rds_get_sensor_type(rd)
result = get_file_list_for_large_format_data(data_slice, rd, sensor_type)
if debug: print '\n debug -- result: ', json.dumps(result, indent=4, sort_keys=True)
# for data segment, process for media output.
media = fetch_media_for_rd(result, rd)
return jsonify({'media': media})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
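A sketch of what the range slice (`get_range_data`) is assumed to do, given a cache keyed year → month → day with zero-padded string keys; the inclusive bounds match the sample request/response above. Names and cache shape are assumptions:

```python
from datetime import date

def get_range_data_sketch(data, start, end):
    """Keep only the days whose yyyy-mm-dd date falls within [start, end]."""
    d0 = date(*map(int, start.split('-')))
    d1 = date(*map(int, end.split('-')))
    result = {}
    for year, months in data.items():
        for month, days in months.items():
            for day, files in days.items():
                if d0 <= date(int(year), int(month), int(day)) <= d1:
                    result.setdefault(year, {}).setdefault(month, {})[day] = files
    return result

cache = {'2015': {'12': {'07': ['a']}}, '2016': {'01': {'05': ['b'], '21': ['c']}}}
sliced = get_range_data_sketch(cache, '2015-12-07', '2016-01-05')
assert sliced == {'2015': {'12': {'07': ['a']}}, '2016': {'01': {'05': ['b']}}}
```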
# Get display name and vocab for a reference designator
@api.route('/media/get_display_name/<string:ref_des>', methods=['GET'])
def media_get_display_name_by_rd(ref_des):
""" Get display name and vocab.
Request: http://localhost:4000/uframe/media/get_display_name
Response:
{
"display_name": "Bio-acoustic Sonar (Coastal)",
"vocab": {}
}
"""
try:
data = get_display_name_by_rd(ref_des)
data2 = get_vocab_dict_by_rd(ref_des)
return jsonify({'display_name': data, 'vocab': data2})
except Exception as err:
message = str(err)
current_app.logger.info(message)
return bad_request(message)
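Throughout these endpoints the reference designator encodes the raw data server folder layout (e.g. CE02SHBP-MJ01C-08-CAMDSB107 maps to files/CE02SHBP/MJ01C/08-CAMDSB107/yyyy/mm/dd/, as in the sample URLs above). A hedged sketch of that mapping with a hypothetical helper name; note that some instruments (e.g. OSMOIA101) use a deployment folder instead of the yyyy/mm/dd layout:

```python
def rd_to_rawdata_path(rd, year, month, day):
    """Map a subsite-node-sensor reference designator onto the raw data
    server folder layout seen in the sample URLs above."""
    # Split on the first two dashes only; the sensor keeps its internal dash.
    subsite, node, sensor = rd.split('-', 2)
    return '/files/%s/%s/%s/%s/%s/%s/' % (subsite, node, sensor, year, month, day)

assert (rd_to_rawdata_path('CE02SHBP-MJ01C-08-CAMDSB107', '2015', '08', '18')
        == '/files/CE02SHBP/MJ01C/08-CAMDSB107/2015/08/18/')
```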
3.7178843780014423]],[[7.440223932588488, 5.507284447000614, 7.112142422517585, 7.290082489284603, 6.667565776404272, 6.616153615296916, 6.952192483410038, 7.0578299009183425, 4.904467888540947, 5.405766127366995, 5.45422660358222, 5.80551297277695, 5.220208758418451, 4.995709023096739, 5.420745742928167, 5.594333505813237, 5.438769825530799, 5.764282391132931, 6.108009844432293, 6.107890162813424, 6.053602006564365, 6.06397631748666, 6.024029067573905, 5.458085009999574, 4.614555748789999, 4.032736906659446], [7.3906286443887135, 7.991079579188856, 8.252425987782685, 6.893799913593788, 6.502288708669505, 7.626491125779521, 6.031564628619804, 6.185731427819835, 2.992382990949007, 3.710555982306273, 4.629293082003641, 5.531376998042845, 5.108815002246216, 4.740507948781813, 5.442512615078566, 5.43508627698701, 5.951199416798501, 5.959239134941684, 6.180223781130297, 6.19265055905187, 6.071781460875696, 6.3149130022880495, 5.906615274404962, 5.6881627427145265, 4.927002406590375, 3.7178843780014423]],[[7.738464222320461, 7.933942249528827, 9.900058859939833, 11.846646034454967, 12.388022078233224, 12.06232972352621, 11.80709233412567, 11.722556454154846, 8.165331550084774, 8.652164978627415, 10.688116496825081, 10.964811161502633, 11.808099902153797, 9.780636718065029, 8.474314231029073, 9.649337254172785, 7.739997316452467, 6.96542518045551, 6.529993842110239, 7.218545388898123, 7.064454584418567, 6.199256790329909, 7.0747800476953016, 6.262023349841849, 8.074060106759564, 9.289609416650682], [8.172093686643784, 8.518496373306489, 10.283356224419025, 11.473797489113773, 11.562611459340674, 10.933087499778676, 11.259953438390886, 11.689752924599063, 8.504859553088254, 10.268723277119532, 10.68327471427069, 11.392547982672177, 10.751418599175885, 9.409022610468313, 9.314754427720345, 9.847147992838288, 7.383993749484238, 6.466602344876636, 6.946482788991601, 7.006352634039627, 6.618748642215983, 6.178693296759837, 6.629169831005441, 6.42608441819477, 
8.055300302991201, 9.008709244880349]],[[4.842686863101372, 4.940602841090154, 4.3673002550098055, 3.915994217998098, 4.956726113516733, 4.707314726712879, 4.520468651127495, 4.311828362218355, 3.46566248715294, 4.064314785822737, 4.224613435525389, 4.627863160119168, 3.967641898171448, 5.112014045793063, 5.375148839857058, 5.2419992282086705, 5.278247576122669, 5.325363801538621, 5.283020033978653, 5.74608772604213, 6.200432762503413, 6.251398522436823, 5.84381890168903, 4.941770061467001, 4.468732349713085, 3.3241014454934694], [5.003383006998139, 3.733428905648437, 5.040599159142632, 4.628043746954538, 4.73972946106653, 4.1321809920748205, 5.116492343411964, 4.81401269826566, 3.7586369440089284, 3.509939138980019, 3.431715425720606, 4.620102567440396, 4.745496139906081, 4.9267288485859675, 5.454524887860332, 5.143098757537712, 5.133592073454244, 5.850910003764851, 5.829751680575409, 5.792329157084775, 6.088631037399145, 6.1757404790946415, 5.773309874203245, 5.344812041056509, 4.567410340565526, 3.384493145315917]],[[5.270029497029496, 6.227368552502352, 4.436201449984646, 5.350288314634418, 4.090889019261895, 4.115107587994037, 5.102193543492857, 5.289363214148227, 4.0088396275376015, 4.034129636906576, 4.036822904838447, 4.518166595802167, 5.302326192627104, 4.611807624979661, 5.089848192514107, 5.2703619310682415, 5.703991335217011, 5.526722593995629, 5.586688556898365, 6.175001952863098, 6.149209504502864, 5.990906240432877, 5.973909172800482, 5.2921022627164405, 4.655214007861512, 3.3872674173008526], [5.461461034544743, 6.119376432036391, 5.302402399416798, 5.972625945787729, 4.732284013735012, 6.614945801932644, 7.179042802009621, 5.185609524223673, 3.6134258438961693, 5.571633003377058, 6.693085348100548, 7.641422295489408, 5.5180498144539225, 4.740896727019091, 5.341266872797977, 5.163176418061547, 5.4311666509021395, 5.677635681483166, 5.954334609722023, 6.067128602937499, 5.750023725975782, 5.467898200018405, 5.730788524229351, 5.463591174791138, 
4.593540498486599, 4.749534984228268]],[[4.770748882351799, 5.998995063021529, 5.127571759798754, 4.801205768481249, 6.094943514347347, 7.381309131168651, 7.425749840334626, 6.90124754235086, 4.630051827776518, 4.667418213647627, 5.928238233110258, 6.77874350172007, 6.629944220306781, 5.35900603420431, 5.354567373294818, 5.251663169413216, 5.006512300760393, 5.607077224876854, 6.055014361828818, 6.325662869471408, 5.9362043731862615, 5.9870087777747925, 5.900552541312807, 5.620153915316936, 4.400051680248553, 4.6415755545356685], [5.939162921877733, 5.90822621495861, 4.175449771304128, 4.697653568647362, 4.7623009989658875, 6.546534313426099, 6.566347467545596, 5.265565960364995, 3.643444086381045, 5.291005784076326, 5.58001056686915, 6.696217823442257, 6.670564986738748, 5.344479845587826, 5.681880227417013, 5.364748146738521, 5.533743844779171, 5.3365144991090965, 5.574795396961233, 6.299042929839956, 5.942923060851159, 6.1438328407368274, 5.884869654165469, 5.327998157902151, 4.651776683075107, 4.574455412920234]],[[6.8655876168121965, 7.932570158495499, 8.793756273470727, 5.953287275621127, 6.876726028416752, 8.155812022765339, 6.99012058983497, 6.743115531747121, 4.838882350336147, 4.543920171914134, 4.9603830095237615, 5.563943743835386, 5.1407183135561985, 4.943764270629893, 5.57680231084732, 5.62827908247078, 5.712048064408209, 5.378924979319954, 6.0476456169222015, 6.255410317013618, 6.26064365963826, 5.846557843560354, 6.10659067276987, 5.516749767694312, 4.78784984817946, 3.9397123563097494], [4.678404821713273, 7.567870187257668, 8.712192998254986, 4.743005660846261, 6.574229908959007, 8.14124693408793, 8.033987508734402, 6.942747991741896, 3.540981193774657, 4.124584283608696, 4.3272519430350584, 4.716876092109876, 5.551487237792182, 4.878443644962924, 5.401531935300791, 5.549939297790428, 5.392157148391174, 5.7371444483264655, 5.987137659206593, 6.217925837599863, 6.209602460659148, 5.397039891864786, 6.1145604116191095, 5.4841713947695165, 
4.6998440044683205, 3.866874787977495]],[[7.440223932588488, 5.507284447000614, 7.112142422517585, 7.290082489284603, 6.667565776404272, 6.616153615296916, 6.952192483410038, 7.0578299009183425, 4.904467888540947, 5.405766127366995, 5.45422660358222, 5.80551297277695, 5.220208758418451, 4.995709023096739, 5.420745742928167, 5.594333505813237, 5.438769825530799, 5.764282391132931, 6.108009844432293, 6.107890162813424, 6.053602006564365, 6.063976317486659, 6.024029067573905, 5.458085009999574, 4.614555748789999, 4.032736906659446], [7.390628644388713, 7.991079579188856, 8.252425987782685, 6.893799913593788, 6.502288708669505, 7.626491125779521, 6.031564628619803, 6.185731427819834, 2.9923829909490065, 3.710555982306273, 4.629293082003641, 5.531376998042845, 5.108815002246216, 4.740507948781813, 5.442512615078566, 5.43508627698701, 5.951199416798501, 5.959239134941683, 6.180223781130298, 6.19265055905187, 6.071781460875697, 6.3149130022880495, 5.906615274404962, 5.688162742714526, 4.927002406590375, 3.7178843780014423]],[[6.357415140318851, 6.330550132305745, 6.526056933726785, 5.861769460722235, 7.327453755368086, 8.228714866461614, 7.523651433656047, 6.129281750878256, 5.251474963891562, 5.344713367326823, 5.040049359443988, 6.032075806439486, 5.136317644109466, 5.0535794468446955, 5.5802457704440664, 5.640194356833367, 5.717196006993193, 5.365619250181815, 5.648058119313039, 6.016632623784903, 6.028655260208149, 6.050393730291751, 5.809546716679328, 5.2564403384291065, 4.756783314491807, 4.053733407685056], [4.034431952353789, 6.49886655670802, 6.572596830861374, 4.4809062418976655, 5.706547694817619, 6.6858653642924954, 7.416970190914389, 6.738756223832026, 4.389010841328765, 5.477070702109522, 5.187359187069882, 5.693477051805135, 5.412778249983459, 4.9819513430042734, 5.27070371899075, 5.627359906607185, 5.414899339406586, 5.616777896816121, 5.63696218591526, 5.624401297113823, 5.994512001231834, 5.780989484905204, 5.689422511173976, 5.147060903061594, 
4.6677322581640395, 3.939792605593581]],[[6.186115069997011, 6.0788664842546885, 4.50676164771139, 6.239247377724524, 6.496634331096491, 6.172198471570733, 7.176953719500172, 7.449595909494369, 5.342618247300031, 4.4545908530855085, 4.805929455723621, 4.620242019692568, 5.003426169844333, 5.116232900871514, 5.4659724874035645, 5.447283099813698, 5.225958660950098, 5.875648941242159, 5.878154449054266, 5.694013674069602, 6.022206763657595, 6.0939515525066, 5.7029656722565125, 5.356741283845734, 4.665376610408856, 3.78463078430121], [6.654841995974846, 5.940322352768354, 5.238882527145087, 5.7464496304566355, 5.498688473099264, 5.848891108565921, 6.09684114591386, 6.434036936798409, 3.4644212670615757, 3.710944905006786, 4.3727663471408444, 5.070440715310921, 4.70496753634368, 5.180158325190011, 4.852645927056211, 5.4994800298276525, 4.937732525046967, 5.540122475938401, 6.172853456212635, 6.170503445735473, 6.124755883223548, 6.2519302618111245, 5.900813043589835, 5.612146510128575, 4.432585475579923, 3.5086569156327507]],[[2.5171697096813213, 6.3922453439651985, 5.1855149700807885, 4.124992736458461, 5.799201635779017, 6.99207686855324, 6.235722977939212, 5.590177349524143, 4.429414009712176, 4.827732602135687, 4.348914437102868, 5.232773340210062, 5.351150366218037, 5.456440093724018, 5.346179658242372, 5.2890181634992075, 4.623135103531591, 5.549461102678569, 5.79860502174181, 6.15994693184003, 6.1276313206562545, 5.951409699332733, 5.98235849079123, 5.369587220834301, 4.410412726774685, 3.801009307382004], [4.439478696651048, 5.177799349694066, 5.974719179100793, 6.428445758009116, 6.9063196804360185, 8.89310464148672, 7.112029594866397, 5.470378453646272, 4.473367021514044, 4.386441857833246, 5.170000494955973, 5.726458483078197, 5.347930667318488, 5.2972495397733885, 5.2520846686081155, 4.913498511020396, 5.268315013358873, 5.625511506857343, 5.917006021375228, 6.549223747897227, 6.6344854404422025, 6.230887343820519, 6.506680922951532, 5.501648985065932, 
4.479511298551335, 3.956193804130613]],[[5.189846833613424, 6.756359254467184, 6.905776991128192, 5.882107292274268, 5.876641468363622, 6.2145654909955805, 5.940530203018855, 5.6761502912453095, 3.6054202315449406, 4.346744970061346, 4.0942454139235895, 4.8329657737895495, 4.790117211986893, 5.034299692148844, 5.6938930304508535, 6.288381864504183, 5.649009669862861, 5.642067099242697, 5.936039713715437, 6.137737009751198, 5.919057941222673, 5.714035709915053, 5.7848673469397225, 5.468195290142958, 5.056556740504238, 3.8689271291004643], [3.3724772438280874, 5.371750884805875, 8.257630003884394, 7.900731382507271, 7.455963509249479, 6.92375563342153, 7.863216804961571, 7.438520705220319, 5.184636633854654, 5.751030507645501, 7.485547590455844, 7.831485879506778, 6.692457781094734, 5.555744994092348, 6.094309056364599, 6.407445057290972, 5.428551952667862, 5.837723895906896, 5.74313223484883, 6.16750807053656, 5.774556626171718, 5.662657353353213, 5.800401712727202, 5.400905559763083, 5.104512614686177, 5.344040975861816]],[[5.189846833613424, 6.756359254467184, 6.905776991128192, 5.882107292274268, 5.876641468363622, 6.2145654909955805, 5.940530203018855, 5.6761502912453095, 3.6054202315449406, 4.346744970061346, 4.0942454139235895, 4.8329657737895495, 4.790117211986893, 5.034299692148844, 5.6938930304508535, 6.288381864504183, 5.649009669862861, 5.642067099242697, 5.936039713715437, 6.137737009751198, 5.919057941222673, 5.714035709915053, 5.7848673469397225, 5.468195290142958, 5.056556740504238, 3.8689271291004643], [3.3724772438280874, 5.371750884805875, 8.257630003884394, 7.900731382507271, 7.455963509249479, 6.92375563342153, 7.863216804961571, 7.438520705220319, 5.184636633854654, 5.751030507645501, 7.485547590455844, 7.831485879506778, 6.692457781094734, 5.555744994092348, 6.094309056364599, 6.407445057290972, 5.428551952667862, 5.837723895906896, 5.74313223484883, 6.16750807053656, 5.774556626171718, 5.662657353353213, 5.800401712727202, 5.400905559763083, 
5.104512614686177, 5.344040975861816]],[[4.334721370453612, 5.9739628059534, 6.265974303823975, 6.345893279959952, 6.858242668640501, 7.026856666514761, 7.860599854930776, 7.398604137394523, 6.359646515120692, 6.222276979656678, 6.686216855356826, 7.155641401740545, 6.788529200295718, 5.593118855801094, 5.309796314008841, 6.158878343487232, 6.263502672066431, 6.217293376565409, 6.366059856332343, 6.279568091473973, 6.677202773619323, 6.643464531218356, 6.409267233655801, 5.755745273436116, 5.100702942129028, 5.335129536526355], [7.160647243090869, 6.567685355595886, 4.90233013471508, 5.682029707946317, 7.205040128773916, 8.215498374381884, 8.840106090989599, 8.471586655971775, 5.022053390116782, 5.992646565418439, 8.12660326664314, 8.228935627098862, 6.907667177412059, 4.958237215894629, 4.855312871835857, 6.415687971974325, 5.714884254887505, 5.688597190368186, 6.1548544347869, 6.03724004042008, 6.119521546752495, 6.037714811805952, 5.898497243165874, 5.581222673588291, 5.014206174139886, 5.663416555752319]],[[5.890302593678864, 5.21180726421165, 5.324254361233675, 6.088991815009525, 4.862425529166954, 6.016044340292808, 5.669676820363695, 6.012179344828563, 4.255161938850746, 4.748776319011942, 5.508596526484098, 4.823869473040359, 5.116974832205998, 5.190577143009528, 5.5930394975011515, 5.579268911954584, 5.764505319460383, 5.982351230964798, 6.282237729201488, 6.350154843322939, 6.491110660693489, 6.404217246944287, 6.181204486368999, 5.810179941078948, 4.940592779678917, 3.720937163397539], [5.1115545780977225, 5.7666225853648445, 4.357947665840798, 7.1717965804560295, 7.8256044702908545, 5.9934626086418215, 6.17916467894829, 6.6368974047075335, 3.6527313759222118, 4.0870637989451, 4.785803184084874, 5.990383548567321, 5.891654623812888, 5.443150737503426, 5.449475141823635, 5.456986925707505, 5.570628897393146, 5.785898794646767, 5.670035568513936, 6.430871020163596, 6.167532766250318, 6.245780332609718, 6.058033550450801, 5.500910062006717, 
4.762774356363048, 4.110029908546244]]
| 4,989.5 | 19,937 | 0.83826 | 2,085 | 19,958 | 8.023981 | 0.320384 | 0.004782 | 0.005081 | 0.009564 | 0.55792 | 0.55003 | 0.55003 | 0.544172 | 0.544172 | 0.534489 | 0 | 0.882831 | 0.051508 | 19,958 | 3 | 19,938 | 6,652.666667 | 0.000951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d9f0f5f011c6462b864cc0d6bf29409daa25863e | 31 | py | Python | __init__.py | gauravpawar8/tensorflow_vgg19_image_classification | 21fdbfb6b0a559c2654345a2811cd98fd800e9ed | [
"MIT"
] | null | null | null | __init__.py | gauravpawar8/tensorflow_vgg19_image_classification | 21fdbfb6b0a559c2654345a2811cd98fd800e9ed | [
"MIT"
] | null | null | null | __init__.py | gauravpawar8/tensorflow_vgg19_image_classification | 21fdbfb6b0a559c2654345a2811cd98fd800e9ed | [
"MIT"
] | 1 | 2018-03-29T13:20:10.000Z | 2018-03-29T13:20:10.000Z | from . import tensorflow_layers | 31 | 31 | 0.870968 | 4 | 31 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d9f4db5150156aa76d5e1771899f14be75c1da14 | 23,491 | py | Python | pybind/nos/v7_1_0/interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/nos/v7_1_0/interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/nos/v7_1_0/interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class link_interval_properties(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-interface - based on the path /interface/fortygigabitethernet/ipv6/interface-ospfv3-conf/link-interval-properties. Each member element of
the container is represented as a class variable - with a specific
YANG type.
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__hello_interval','__dead_interval','__hello_jitter','__retransmit_interval','__transmit_delay',)
_yang_name = 'link-interval-properties'
_rest_name = ''
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__transmit_delay = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
self.__retransmit_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state advertisements.'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
self.__dead_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
self.__hello_jitter = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..50']}), is_leaf=True, yang_name="hello-jitter", rest_name="hello-jitter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Allowed jitter between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='uint32', is_config=True)
self.__hello_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'interface', u'fortygigabitethernet', u'ipv6', u'interface-ospfv3-conf', u'link-interval-properties']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'interface', u'FortyGigabitEthernet', u'ipv6', u'ospf']
def _get_hello_interval(self):
"""
Getter method for hello_interval, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/hello_interval (common-def:time-interval-sec)
YANG Description: Time interval in seconds between HELLO packets
"""
return self.__hello_interval
def _set_hello_interval(self, v, load=False):
"""
Setter method for hello_interval, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/hello_interval (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_hello_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_hello_interval() directly.
YANG Description: Time interval in seconds between HELLO packets
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """hello_interval must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__hello_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_hello_interval(self):
self.__hello_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..65535']}), is_leaf=True, yang_name="hello-interval", rest_name="hello-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
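  # Usage sketch (illustrative comment, not part of the generated module;
  # assumes pyangbind is installed and the module imports under Python 2).
  # The setter wraps the value in YANGDynClass, which enforces the YANG
  # range restriction and raises ValueError on violation:
  #
  #   props = link_interval_properties()
  #   props._set_hello_interval(10)    # accepted: within 1..65535
  #   props._set_hello_interval(0)     # raises ValueError: out of range
  #   props._unset_hello_interval()    # restores the default wrapper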
def _get_dead_interval(self):
"""
Getter method for dead_interval, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/dead_interval (common-def:time-interval-sec)
YANG Description: Indicates the number of seconds that a neighbor router waits for a hello packet from the device before declaring the router down.
"""
return self.__dead_interval
def _set_dead_interval(self, v, load=False):
"""
Setter method for dead_interval, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/dead_interval (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_dead_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_dead_interval() directly.
YANG Description: Indicates the number of seconds that a neighbor router waits for a hello packet from the device before declaring the router down.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """dead_interval must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__dead_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_dead_interval(self):
self.__dead_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'3..65535']}), is_leaf=True, yang_name="dead-interval", rest_name="dead-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Interval after which a neighbor is declared dead'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
def _get_hello_jitter(self):
"""
Getter method for hello_jitter, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/hello_jitter (uint32)
YANG Description: Allowed jitter between HELLO packets
"""
return self.__hello_jitter
def _set_hello_jitter(self, v, load=False):
"""
Setter method for hello_jitter, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/hello_jitter (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_hello_jitter is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_hello_jitter() directly.
YANG Description: Allowed jitter between HELLO packets
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..50']}), is_leaf=True, yang_name="hello-jitter", rest_name="hello-jitter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Allowed jitter between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='uint32', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """hello_jitter must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..50']}), is_leaf=True, yang_name="hello-jitter", rest_name="hello-jitter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Allowed jitter between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='uint32', is_config=True)""",
})
self.__hello_jitter = t
if hasattr(self, '_set'):
self._set()
def _unset_hello_jitter(self):
self.__hello_jitter = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..50']}), is_leaf=True, yang_name="hello-jitter", rest_name="hello-jitter", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Allowed jitter between HELLO packets'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='uint32', is_config=True)
def _get_retransmit_interval(self):
"""
Getter method for retransmit_interval, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/retransmit_interval (common-def:time-interval-sec)
YANG Description: The time between retransmissions of LSAs to adjacent routers for an interface.The value can be from 1-3600 seconds. The default is 5 seconds.
"""
return self.__retransmit_interval
def _set_retransmit_interval(self, v, load=False):
"""
Setter method for retransmit_interval, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/retransmit_interval (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_retransmit_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_retransmit_interval() directly.
YANG Description: The time between retransmissions of LSAs to adjacent routers for an interface.The value can be from 1-3600 seconds. The default is 5 seconds.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state advertisements.'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """retransmit_interval must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state advertisements.'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__retransmit_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_retransmit_interval(self):
self.__retransmit_interval = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'1..3600']}), is_leaf=True, yang_name="retransmit-interval", rest_name="retransmit-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Time between retransmitting lost link state advertisements.'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
def _get_transmit_delay(self):
"""
Getter method for transmit_delay, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/transmit_delay (common-def:time-interval-sec)
YANG Description: The time it takes to transmit Link State Update packets on this interface.The range is 0-3600 seconds. The default is 1 second.
"""
return self.__transmit_delay
def _set_transmit_delay(self, v, load=False):
"""
Setter method for transmit_delay, mapped from YANG variable /interface/fortygigabitethernet/ipv6/interface_ospfv3_conf/link_interval_properties/transmit_delay (common-def:time-interval-sec)
If this variable is read-only (config: false) in the
source YANG file, then _set_transmit_delay is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_transmit_delay() directly.
YANG Description: The time it takes to transmit Link State Update packets on this interface.The range is 0-3600 seconds. The default is 1 second.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """transmit_delay must be of a type compatible with common-def:time-interval-sec""",
'defined-type': "common-def:time-interval-sec",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)""",
})
self.__transmit_delay = t
if hasattr(self, '_set'):
self._set()
def _unset_transmit_delay(self):
self.__transmit_delay = YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': [u'0..3600']}), is_leaf=True, yang_name="transmit-delay", rest_name="transmit-delay", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'cli-full-command': None, u'info': u'Link state transmit delay'}}, namespace='urn:brocade.com:mgmt:brocade-ospfv3', defining_module='brocade-ospfv3', yang_type='common-def:time-interval-sec', is_config=True)
hello_interval = __builtin__.property(_get_hello_interval, _set_hello_interval)
dead_interval = __builtin__.property(_get_dead_interval, _set_dead_interval)
hello_jitter = __builtin__.property(_get_hello_jitter, _set_hello_jitter)
retransmit_interval = __builtin__.property(_get_retransmit_interval, _set_retransmit_interval)
transmit_delay = __builtin__.property(_get_transmit_delay, _set_transmit_delay)
_pyangbind_elements = {'hello_interval': hello_interval, 'dead_interval': dead_interval, 'hello_jitter': hello_jitter, 'retransmit_interval': retransmit_interval, 'transmit_delay': transmit_delay, }
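Every generated setter above follows the same shape: coerce the incoming value into a nested range-restricted integer type, and translate any `TypeError`/`ValueError` into a `ValueError` carrying a structured payload. A minimal standalone sketch of that pattern follows; the names (`RangeRestrictedInt`, `DeadInterval`, `LinkIntervalProperties`) are illustrative stand-ins, not part of the pyangbind API.

```python
# Sketch of the coerce-validate-wrap pattern used by the generated setters.
# Hypothetical names; the real code uses YANGDynClass/RestrictedClassType.

class RangeRestrictedInt(int):
    """An int constrained to an inclusive [low, high] range."""
    low, high = 0, 4294967295  # uint32 envelope, narrowed per leaf

    def __new__(cls, value):
        v = int(value)
        if not (cls.low <= v <= cls.high):
            raise ValueError(f"{v} outside allowed range {cls.low}..{cls.high}")
        return super().__new__(cls, v)


class DeadInterval(RangeRestrictedInt):
    # Mirrors the dead-interval leaf: uint32 narrowed to 3..65535.
    low, high = 3, 65535


class LinkIntervalProperties:
    def __init__(self):
        self.__dead_interval = None

    def _get_dead_interval(self):
        return self.__dead_interval

    def _set_dead_interval(self, v):
        try:
            # Coerce and validate, analogous to YANGDynClass(v, base=...).
            t = DeadInterval(v)
        except (TypeError, ValueError):
            # Re-raise with a structured payload, as the generated code does.
            raise ValueError({
                'error-string': 'dead_interval must be of a type compatible '
                                'with common-def:time-interval-sec',
                'defined-type': 'common-def:time-interval-sec',
            })
        self.__dead_interval = t

    dead_interval = property(_get_dead_interval, _set_dead_interval)
```

Assigning `props.dead_interval = 40` succeeds, while an out-of-range value such as `2` raises the structured `ValueError`, which is the behavior the generated `_set_*` methods implement at scale.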

# ---------------------------------------------------------------------------
# File: tests/integration/test_endpoints.py
# Repo: LimaGuilherme/what-is-your-band-favorite-word-api
# License: MIT
# ---------------------------------------------------------------------------
from unittest import TestCase
from src import initialize
from src.lyrics.repositories import create_repository, MongoRepository, ElasticSearchRepository
from src import configurations as config_module


class TestLyricsResource(TestCase):

    def setUp(self) -> None:
        self.app = initialize.web_app.test_client()
        self.config = config_module.get_config(config_type='full')
        self.repository = create_repository(self.config)

    def tearDown(self) -> None:
        if isinstance(self.repository, MongoRepository):
            self.repository.delete_collection()
        if isinstance(self.repository, ElasticSearchRepository):
            self.repository.delete_index()

    def test_post_should_return_ok_when_lyrics_are_correctly_created(self):
        self.url = '/api/artists/Mc Rodolfinho/lyrics'
        response = self.app.post(self.url)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json, {'result': 'OK'})

    def test_post_should_return_lyrics_not_found(self):
        self.url = '/api/artists/Semper Soma/lyrics'
        response = self.app.post(self.url)
        self.assertEqual(response.status_code, 404)

    def test_post_should_raise_artist_not_found_when_given_an_invalid_artist(self):
        self.url = '/api/artists/Mc Kweiss/lyrics'
        response = self.app.post(self.url)
        self.assertEqual(response.status_code, 404)
        self.assertEqual(response.json, {'result': 'error',
                                         'exception': 'This artist seems invalid, perhaps you misspelled'})

    def test_get_should_return_method_not_allowed(self):
        self.url = '/api/artists/Mc Rodolfinho/lyrics'
        response = self.app.get(self.url)
        self.assertEqual(response.status_code, 405)
        self.assertEqual(response.json, {'error': 'Method Not Allowed', 'result': 'error'})

    def test_delete_should_return_method_not_allowed(self):
        self.url = '/api/artists/Mc Rodolfinho/lyrics'
        response = self.app.delete(self.url)
        self.assertEqual(response.status_code, 405)
        self.assertEqual(response.json, {'error': 'Method Not Allowed', 'result': 'error'})

    def test_put_should_return_method_not_allowed(self):
        self.url = '/api/artists/Mc Rodolfinho/lyrics'
        response = self.app.put(self.url)
        self.assertEqual(response.status_code, 405)
        self.assertEqual(response.json, {'error': 'Method Not Allowed', 'result': 'error'})


class TestTopWordsResource(TestCase):

    def setUp(self) -> None:
        self.app = initialize.web_app.test_client()
        self.config = config_module.get_config(config_type='full')
        self.repository = create_repository(self.config)

    def tearDown(self) -> None:
        if isinstance(self.repository, MongoRepository):
            self.repository.delete_collection()
        if isinstance(self.repository, ElasticSearchRepository):
            self.repository.delete_index()

    def test_get_should_return_top_words(self):
        self.url_to_index = '/api/artists/Mc Rodolfinho/lyrics'
        self.url_to_get = '/api/artists/Mc Rodolfinho/top-words'
        self.app.post(self.url_to_index)
        response = self.app.get(self.url_to_get, query_string={"size": 10})
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(response.json), 10)

    def test_get_should_return_artist_not_found(self):
        self.url = '/api/artists/Mc Kweiss/top-words'
        response = self.app.get(self.url, query_string={"size": 10})
        self.assertEqual(response.status_code, 404)

    def test_get_should_return_lyrics_not_found(self):
        self.url = '/api/artists/Semper Soma/top-words'
        response = self.app.get(self.url, query_string={"size": 10})
        self.assertEqual(response.status_code, 404)

    def test_post_should_return_method_not_allowed(self):
        self.url = '/api/artists/Mc Rodolfinho/top-words'
        response = self.app.post(self.url)
        self.assertEqual(response.status_code, 405)
        self.assertEqual(response.json, {'error': 'Method Not Allowed', 'result': 'error'})

    def test_delete_should_return_method_not_allowed(self):
        self.url = '/api/artists/Mc Rodolfinho/top-words'
        response = self.app.delete(self.url)
        self.assertEqual(response.status_code, 405)
        self.assertEqual(response.json, {'error': 'Method Not Allowed', 'result': 'error'})

    def test_put_should_return_method_not_allowed(self):
        self.url = '/api/artists/Mc Rodolfinho/top-words'
        response = self.app.put(self.url)
        self.assertEqual(response.status_code, 405)
        self.assertEqual(response.json, {'error': 'Method Not Allowed', 'result': 'error'})