hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c823a4c60137f4d6fbd1ad2b23e17ce45df8a297 | 402 | py | Python | django_messages_framework/tests/__init__.py | none-da/zeshare | 6c13cd3bd9d82d89f53d4a8b287fe2c30f1d3779 | [
"BSD-3-Clause"
] | null | null | null | django_messages_framework/tests/__init__.py | none-da/zeshare | 6c13cd3bd9d82d89f53d4a8b287fe2c30f1d3779 | [
"BSD-3-Clause"
] | null | null | null | django_messages_framework/tests/__init__.py | none-da/zeshare | 6c13cd3bd9d82d89f53d4a8b287fe2c30f1d3779 | [
"BSD-3-Clause"
] | 1 | 2021-04-12T11:43:38.000Z | 2021-04-12T11:43:38.000Z | from django_messages_framework.tests.cookie import CookieTest
from django_messages_framework.tests.fallback import FallbackTest
from django_messages_framework.tests.middleware import MiddlewareTest
from django_messages_framework.tests.session import SessionTest
from django_messages_framework.tests.user_messages import \
UserMessagesTest, LegacyFallbackTest
| 57.428571 | 79 | 0.808458 | 42 | 402 | 7.47619 | 0.404762 | 0.159236 | 0.286624 | 0.429936 | 0.509554 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161692 | 402 | 6 | 80 | 67 | 0.931751 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c833dddf91459efd79b586865f20a51c0e56eacf | 30 | py | Python | orgassist/plugins/org/__init__.py | blaa/orgassist | a09727ca1cb63e881b2ea7b96e078aa68f21d0ce | [
"MIT"
] | 43 | 2018-05-30T15:59:51.000Z | 2021-09-18T22:11:37.000Z | orgassist/plugins/org/__init__.py | blaa/orgassist | a09727ca1cb63e881b2ea7b96e078aa68f21d0ce | [
"MIT"
] | 1 | 2018-06-01T22:41:59.000Z | 2018-06-14T18:38:55.000Z | orgassist/plugins/org/__init__.py | blaa/orgassist | a09727ca1cb63e881b2ea7b96e078aa68f21d0ce | [
"MIT"
] | 2 | 2020-02-18T08:54:45.000Z | 2021-02-28T02:56:24.000Z | from .plugin import OrgPlugin
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c0711ab29dd83ed5992762c360fe5f2f27f31ff5 | 135 | py | Python | venv/Lib/site-packages/palettable/colorbrewer/diverging.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/palettable/colorbrewer/diverging.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/palettable/colorbrewer/diverging.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from .colorbrewer import _load_maps_by_type
globals().update(_load_maps_by_type('diverging'))
| 22.5 | 49 | 0.844444 | 19 | 135 | 5.315789 | 0.631579 | 0.158416 | 0.19802 | 0.277228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081481 | 135 | 5 | 50 | 27 | 0.814516 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c07ec508f48eb60f47313ca34b77566856c7e11a | 49 | py | Python | centralogger/__init__.py | reevoremo/centralogger | 823be19b6537159e79bd4c06eefb2d3ddbdbf16e | [
"MIT"
] | null | null | null | centralogger/__init__.py | reevoremo/centralogger | 823be19b6537159e79bd4c06eefb2d3ddbdbf16e | [
"MIT"
] | null | null | null | centralogger/__init__.py | reevoremo/centralogger | 823be19b6537159e79bd4c06eefb2d3ddbdbf16e | [
"MIT"
] | null | null | null | from .telegram_handler import TelegramLogHandler
| 24.5 | 48 | 0.897959 | 5 | 49 | 8.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.955556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c089811e3d9865038083a344d4f5e5232198358f | 32 | py | Python | model/__init__.py | Thuako/LSQ | 35a67424e89505f53b60bff72a465aa8e03f9426 | [
"MIT"
] | 129 | 2020-02-07T16:05:06.000Z | 2022-03-31T08:58:28.000Z | model/__init__.py | Thuako/LSQ | 35a67424e89505f53b60bff72a465aa8e03f9426 | [
"MIT"
] | 17 | 2020-02-20T05:22:02.000Z | 2022-03-31T06:55:29.000Z | model/__init__.py | Thuako/LSQ | 35a67424e89505f53b60bff72a465aa8e03f9426 | [
"MIT"
] | 18 | 2020-02-08T04:32:00.000Z | 2021-12-31T08:27:21.000Z | from .model import create_model
| 16 | 31 | 0.84375 | 5 | 32 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c08a7e5354a40627e486f91e1ff08038e3e52689 | 354 | py | Python | Sources/Compiler/NameMethod.py | Tuluobo/HTTPIDL | 0b4476fe0fe1ae8237c92ca53b1fc8be1f8c2d5d | [
"MIT"
] | null | null | null | Sources/Compiler/NameMethod.py | Tuluobo/HTTPIDL | 0b4476fe0fe1ae8237c92ca53b1fc8be1f8c2d5d | [
"MIT"
] | null | null | null | Sources/Compiler/NameMethod.py | Tuluobo/HTTPIDL | 0b4476fe0fe1ae8237c92ca53b1fc8be1f8c2d5d | [
"MIT"
] | null | null | null | def underline_to_upper_camel_case(str):
return ''.join(map(upper_first_letter, str.split('_')))
def underline_to_lower_camel_case(str):
return lower_first_letter(''.join(map(upper_first_letter, str.split('_'))))
def upper_first_letter(str):
return str[0].upper() + str[1:]
def lower_first_letter(str):
return str[0].lower() + str[1:]
| 27.230769 | 79 | 0.717514 | 55 | 354 | 4.254545 | 0.290909 | 0.235043 | 0.239316 | 0.24359 | 0.495727 | 0.495727 | 0.290598 | 0.290598 | 0 | 0 | 0 | 0.012821 | 0.118644 | 354 | 12 | 80 | 29.5 | 0.737179 | 0 | 0 | 0 | 0 | 0 | 0.005666 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c0933bfa7d287dd03750028f62982b066700126d | 35 | py | Python | branchey.py | HickNamby/cs3240-labdemo | ecc5a8bbc99b0a5c8db6e13ab675167eef0b46b1 | [
"MIT"
] | null | null | null | branchey.py | HickNamby/cs3240-labdemo | ecc5a8bbc99b0a5c8db6e13ab675167eef0b46b1 | [
"MIT"
] | null | null | null | branchey.py | HickNamby/cs3240-labdemo | ecc5a8bbc99b0a5c8db6e13ab675167eef0b46b1 | [
"MIT"
] | null | null | null | print("I am not sure this works")
| 11.666667 | 33 | 0.685714 | 7 | 35 | 3.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 35 | 2 | 34 | 17.5 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c093eebf3fd7e980d867c268ed1cf0b3c468e294 | 1,487 | py | Python | register_service_util.py | swiftops/registration-service | ad335cd37f5976ecd65d8f5c1ff513335f8b5aa0 | [
"Apache-2.0"
] | null | null | null | register_service_util.py | swiftops/registration-service | ad335cd37f5976ecd65d8f5c1ff513335f8b5aa0 | [
"Apache-2.0"
] | null | null | null | register_service_util.py | swiftops/registration-service | ad335cd37f5976ecd65d8f5c1ff513335f8b5aa0 | [
"Apache-2.0"
] | null | null | null | from flask import jsonify
from service_util import register_service_util, update_service_util, delete_service_util,\
get_service_util, service_validation, update_filter_service_util
def add_master_service():
input_json = service_validation()
response = register_service_util(input_json, True)
return jsonify(response)
def update_master_service():
input_json = service_validation()
response = update_service_util(input_json)
return jsonify(response)
def delete_master_service():
input_json = service_validation()
response = delete_service_util(input_json["data"]["keyword"], True)
return jsonify(response)
def get_master_data():
input_json = service_validation()
response = get_service_util(input_json["data"]["keyword"], True)
return jsonify(response)
def register_service():
input_json = service_validation()
response = register_service_util(input_json, False)
return jsonify(response)
def update_service():
input_json = service_validation()
response = update_filter_service_util(input_json)
return jsonify(response)
def delete_service():
input_json = service_validation()
response = delete_service_util(input_json["data"]["keyword"], False)
return jsonify(response)
def get_service_data():
input_json = service_validation()
response = get_service_util(input_json["data"]["keyword"], False)
return jsonify(response)
| 28.056604 | 91 | 0.73033 | 174 | 1,487 | 5.856322 | 0.132184 | 0.141315 | 0.125613 | 0.204122 | 0.820412 | 0.743867 | 0.743867 | 0.633955 | 0.633955 | 0.535819 | 0 | 0 | 0.180901 | 1,487 | 52 | 92 | 28.596154 | 0.836617 | 0 | 0 | 0.457143 | 0 | 0 | 0.030662 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.228571 | false | 0 | 0.057143 | 0 | 0.514286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
c0c4da0d90f5f80265d58ec4006a8d72ae8dbc35 | 13,746 | py | Python | open_spiel/python/examples/evaluation_graph_behavior_probs_competition_based.py | xujing1994/open_spiel | 7663a2717f16ff84c0d6a6bfdf19a9c21b37b765 | [
"Apache-2.0"
] | null | null | null | open_spiel/python/examples/evaluation_graph_behavior_probs_competition_based.py | xujing1994/open_spiel | 7663a2717f16ff84c0d6a6bfdf19a9c21b37b765 | [
"Apache-2.0"
] | null | null | null | open_spiel/python/examples/evaluation_graph_behavior_probs_competition_based.py | xujing1994/open_spiel | 7663a2717f16ff84c0d6a6bfdf19a9c21b37b765 | [
"Apache-2.0"
] | null | null | null | from absl import app
from absl import flags
import numpy as np
from matplotlib.legend_handler import HandlerLine2D
import matplotlib.pyplot as plt
def read_wr(txt_name):
text_file = open(txt_name, "r")
lines = text_file.read().split("\n")
list1 = []
list2 = []
list3 = []
for line in lines[:-1]:
[str1, str2, str3] = line.split(" ")
list1.append(float(str1))
list2.append(float(str2))
list3.append(float(str3))
return list1, list2, list3
def read_exploitability(txt_name):
txt_file = open(txt_name, 'r')
lines = txt_file.read().split('\n')
num_list = []
for str in lines[:-1]:
if str == "NaN":
num_list.append(1)
else:
num_list.append(float(str))
return num_list
def read_loss(txt_name):
txt_file = open(txt_name)
lines = txt_file.read().split('\n')
list1 = []
list2 = []
for line in lines[:-1]:
[str1, str2] = line.split(' ')
if str1 != 'None':
list1.append(float(str1))
else:
list1.append(str1)
if str2 != 'None':
list2.append(float(str2))
else:
list2.append(str2)
for idx, number in enumerate(list1):
if number == 'None':
list1[idx] = list1[idx+1]
    for idx, number in enumerate(list2):
        if number == 'None':
            list2[idx] = list2[idx+1]
return list1, list2
def read_behavior_probs(txt_name):
text_file = open(txt_name, "r")
lines = text_file.read().split("\n")
list1 = []
list2 = []
list3 = []
list4 = []
list5 = []
list6 = []
list7 = []
list8 = []
for line in lines[:-1]:
[str1, str2, str3, str4, str5, str6, str7, str8] = line.split(" ")
list1.append(float(str1))
list2.append(float(str2))
list3.append(float(str3))
list4.append(float(str4))
list5.append(float(str5))
list6.append(float(str6))
list7.append(float(str7))
list8.append(float(str8))
return list1, list2, list3, list4, list5, list6, list7, list8
def main(argv):
kuhn_poker_nfsp_0 = "/home/jxu8/Code_update/open_spiel/evaluation_data/eval_kp_nfsp_0.1_7_27/"
kuhn_poker_nfsp_1 = "/home/jxu8/Code_update/open_spiel/evaluation_data/eval_kp_nfsp_1_7_28/"
ttt_nfsp_0 = "/home/jxu8/Code_update/open_spiel/evaluation_data/eval_ttt_nfsp_0.1_7_26/"
ttt_nfsp_1 = "/home/jxu8/Code_update/open_spiel/evaluation_data/eval_ttt_nfsp_1_7_29/"
kuhn_poker_psro = "/home/jxu8/Code/open_spiel/evaluation_data/eval_kuhn_poker_psro_7_2/"
bp_jk_cb = []
bp_jq_cb = []
bp_kj_cb = []
bp_kq_cb = []
bp_qj_cb = []
bp_qk_cb = []
bp_jk_cb.append(read_behavior_probs(kuhn_poker_nfsp_0 + 'behavior_probs/eta_0/competition_based/JK.txt'))
bp_jq_cb.append(read_behavior_probs(kuhn_poker_nfsp_0 + 'behavior_probs/eta_0/competition_based/JQ.txt'))
bp_kj_cb.append(read_behavior_probs(kuhn_poker_nfsp_0 + 'behavior_probs/eta_0/competition_based/KJ.txt'))
bp_kq_cb.append(read_behavior_probs(kuhn_poker_nfsp_0 + 'behavior_probs/eta_0/competition_based/KQ.txt'))
bp_qj_cb.append(read_behavior_probs(kuhn_poker_nfsp_0 + 'behavior_probs/eta_0/competition_based/QJ.txt'))
bp_qk_cb.append(read_behavior_probs(kuhn_poker_nfsp_0 + 'behavior_probs/eta_0/competition_based/QK.txt'))
bp_jk_cb.append(read_behavior_probs(kuhn_poker_nfsp_1 + 'behavior_probs/eta_0/competition_based/JK.txt'))
bp_jq_cb.append(read_behavior_probs(kuhn_poker_nfsp_1 + 'behavior_probs/eta_0/competition_based/JQ.txt'))
bp_kj_cb.append(read_behavior_probs(kuhn_poker_nfsp_1 + 'behavior_probs/eta_0/competition_based/KJ.txt'))
bp_kq_cb.append(read_behavior_probs(kuhn_poker_nfsp_1 + 'behavior_probs/eta_0/competition_based/KQ.txt'))
bp_qj_cb.append(read_behavior_probs(kuhn_poker_nfsp_1 + 'behavior_probs/eta_0/competition_based/QJ.txt'))
bp_qk_cb.append(read_behavior_probs(kuhn_poker_nfsp_1 + 'behavior_probs/eta_0/competition_based/QK.txt'))
#plt alpha in kuhn_poker_nfsp_0.1(eta 0)
tmp_list = [bp_jk_cb[0], bp_jq_cb[0], bp_kj_cb[0], bp_kq_cb[0], bp_qj_cb[0], bp_qk_cb[0]]
alpha_1 = [1 - tmp_list[0][0][i] for i in range(len(tmp_list[0][0]))]
alpha_2 = [1 - tmp_list[1][0][i] for i in range(len(tmp_list[1][0]))]
alpha_3 = [(1/3) * (1 - tmp_list[2][0][i]) for i in range(len(tmp_list[2][0]))]
alpha_4 = [(1/3) * (1 - tmp_list[3][0][i]) for i in range(len(tmp_list[2][0]))]
alpha_5 = [tmp_list[4][7][i] - 1/3 for i in range(len(tmp_list[4][7]))]
alpha_6 = [tmp_list[5][7][i] - 1/3 for i in range(len(tmp_list[5][7]))]
ax2 = plt.figure(figsize=(10, 5))
#ax2.set_title("JK (kuhn_poker_nfsp_0.1, eta0.1 in evaluation)")
#plt.ylim(0, 0.35)
line1, = plt.plot(alpha_1, "b-", label="JK & JQ")
#line2, = plt.plot(alpha_2, "b*", label="2")
line3, = plt.plot(alpha_3, "g-", label="KJ & KQ")
#line4, = plt.plot(alpha_4, "g*", label="4")
line5, = plt.plot(alpha_5, "y-", label="QJ & QK")
#line6, = plt.plot(alpha_6, "y*", label="6")
#plt.legend(handles=[line1, line2, line3, line4, line5, line6], loc='upper right')
plt.legend(handles=[line1, line3, line5], loc='upper right')
plt.ylabel('alpha')
plt.xlabel('episode(*1e4)')
plt.show()
#plt alpha in kuhn_poker_nfsp_1(eta 0)
tmp_list = [bp_jk_cb[1], bp_jq_cb[1], bp_kj_cb[1], bp_kq_cb[1], bp_qj_cb[1], bp_qk_cb[1]]
alpha_1 = [1 - tmp_list[0][0][i] for i in range(len(tmp_list[0][0]))]
alpha_2 = [1 - tmp_list[1][0][i] for i in range(len(tmp_list[1][0]))]
alpha_3 = [(1/3) * (1 - tmp_list[2][0][i]) for i in range(len(tmp_list[2][0]))]
alpha_4 = [(1/3) * (1 - tmp_list[3][0][i]) for i in range(len(tmp_list[2][0]))]
alpha_5 = [tmp_list[4][7][i] - 1/3 for i in range(len(tmp_list[4][7]))]
alpha_6 = [tmp_list[5][7][i] - 1/3 for i in range(len(tmp_list[5][7]))]
ax2 = plt.figure(figsize=(10, 5))
#ax2.set_title("JK (kuhn_poker_nfsp_0.1, eta0.1 in evaluation)")
#plt.ylim(0, 0.35)
line1, = plt.plot(alpha_1, "b-", label="JK & JQ")
#line2, = plt.plot(alpha_2, "b*", label="2")
line3, = plt.plot(alpha_3, "g-", label="KJ & KQ")
#line4, = plt.plot(alpha_4, "g*", label="4")
line5, = plt.plot(alpha_5, "y-", label="QJ & QK")
#line6, = plt.plot(alpha_6, "y*", label="6")
#plt.legend(handles=[line1, line2, line3, line4, line5, line6], loc='upper right')
plt.legend(handles=[line1, line3, line5], loc='upper right')
plt.ylabel('alpha')
plt.xlabel('episode(*1e4)')
plt.show()
plt.figure(figsize=(10, 10))
ax2 = plt.subplot(311)
ax2.set_title("JK (kuhn_poker_nfsp_0.1, eta0 in evaluation)")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_jk_cb[0][0], "b", label="1")
line2, = plt.plot(bp_jk_cb[0][2], "r", label="2")
line3, = plt.plot(bp_jk_cb[0][4], "g", label="3")
line4, = plt.plot(bp_jk_cb[0][6], "y", label="4")
plt.axhline(y=2/3,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(312)
ax2.set_title("JQ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_jq_cb[0][0], "b", label="1")
line2, = plt.plot(bp_jq_cb[0][2], "r", label="2")
line3, = plt.plot(bp_jq_cb[0][4], "g", label="3")
line4, = plt.plot(bp_jq_cb[0][6], "y", label="4")
plt.axhline(y=2/3,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(313)
ax2.set_title("KJ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_kj_cb[0][0], "b", label="1")
line2, = plt.plot(bp_kj_cb[0][2], "r", label="2")
line3, = plt.plot(bp_kj_cb[0][4], "g", label="3")
line4, = plt.plot(bp_kj_cb[0][6], "y", label="4")
plt.axhline(y=0,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
plt.show()
plt.figure(figsize=(10, 10))
ax2 = plt.subplot(311)
ax2.set_title("KQ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_kq_cb[0][0], "b", label="1")
line2, = plt.plot(bp_kq_cb[0][2], "r", label="2")
line3, = plt.plot(bp_kq_cb[0][4], "g", label="3")
line4, = plt.plot(bp_kq_cb[0][6], "y", label="4")
plt.axhline(y=0,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(312)
ax2.set_title("QJ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_qj_cb[0][0], "b", label="1")
line2, = plt.plot(bp_qj_cb[0][2], "r", label="2")
line3, = plt.plot(bp_qj_cb[0][4], "g", label="3")
line4, = plt.plot(bp_qj_cb[0][6], "y", label="4")
plt.axhline(y=1/3,ls=":",c="yellow")
plt.axhline(y=2/3,ls=":",c="yellow")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(313)
ax2.set_title("QK")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_qk_cb[0][0], "b", label="1")
line2, = plt.plot(bp_qk_cb[0][2], "r", label="2")
line3, = plt.plot(bp_qk_cb[0][4], "g", label="3")
line4, = plt.plot(bp_qk_cb[0][6], "y", label="4")
plt.axhline(y=1/3,ls=":",c="yellow")
plt.axhline(y=2/3,ls=":",c="yellow")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
plt.show()
# plt bp for kuhn_poker_nfsp_0.1, eta1 in evaluation
plt.figure(figsize=(10, 10))
ax2 = plt.subplot(311)
ax2.set_title("JK (kuhn_poker_nfsp_1, eta0 in evaluation)")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_jk_cb[1][0], "b", label="1")
line2, = plt.plot(bp_jk_cb[1][2], "r", label="2")
line3, = plt.plot(bp_jk_cb[1][4], "g", label="3")
line4, = plt.plot(bp_jk_cb[1][6], "y", label="4")
plt.axhline(y=2/3,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(312)
ax2.set_title("JQ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_jq_cb[1][0], "b", label="1")
line2, = plt.plot(bp_jq_cb[1][2], "r", label="2")
line3, = plt.plot(bp_jq_cb[1][4], "g", label="3")
line4, = plt.plot(bp_jq_cb[1][6], "y", label="4")
plt.axhline(y=2/3,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(313)
ax2.set_title("KJ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_kj_cb[1][0], "b", label="1")
line2, = plt.plot(bp_kj_cb[1][2], "r", label="2")
line3, = plt.plot(bp_kj_cb[1][4], "g", label="3")
line4, = plt.plot(bp_kj_cb[1][6], "y", label="4")
plt.axhline(y=0,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
plt.show()
plt.figure(figsize=(10, 10))
ax2 = plt.subplot(311)
ax2.set_title("KQ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_kq_cb[1][0], "b", label="1")
line2, = plt.plot(bp_kq_cb[1][2], "r", label="2")
line3, = plt.plot(bp_kq_cb[1][4], "g", label="3")
line4, = plt.plot(bp_kq_cb[1][6], "y", label="4")
plt.axhline(y=0,ls=":",c="blue")
plt.axhline(y=1,ls=":",c="blue")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(312)
ax2.set_title("QJ")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_qj_cb[1][0], "b", label="1")
line2, = plt.plot(bp_qj_cb[1][2], "r", label="2")
line3, = plt.plot(bp_qj_cb[1][4], "g", label="3")
line4, = plt.plot(bp_qj_cb[1][6], "y", label="4")
plt.axhline(y=1/3,ls=":",c="yellow")
plt.axhline(y=2/3,ls=":",c="yellow")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
ax2 = plt.subplot(313)
ax2.set_title("QK")
#plt.ylim(0, 0.35)
y_ticks = np.arange(0, 1.1, 0.1)
line1, = plt.plot(bp_qk_cb[1][0], "b", label="1")
line2, = plt.plot(bp_qk_cb[1][2], "r", label="2")
line3, = plt.plot(bp_qk_cb[1][4], "g", label="3")
line4, = plt.plot(bp_qk_cb[1][6], "y", label="4")
plt.axhline(y=1/3,ls=":",c="yellow")
plt.axhline(y=2/3,ls=":",c="yellow")
plt.yticks(y_ticks)
plt.legend(handles=[line1, line2, line3, line4], loc='upper right')
plt.ylabel('behavior_probs')
plt.xlabel('episode(*1e4)')
plt.show()
if __name__ == "__main__":
app.run(main) | 39.959302 | 109 | 0.608177 | 2,401 | 13,746 | 3.291129 | 0.062057 | 0.053151 | 0.05467 | 0.042521 | 0.887117 | 0.881676 | 0.869274 | 0.860035 | 0.839155 | 0.839155 | 0 | 0.071873 | 0.181144 | 13,746 | 344 | 110 | 39.959302 | 0.630153 | 0.066274 | 0 | 0.591065 | 0 | 0 | 0.143727 | 0.071434 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017182 | false | 0 | 0.017182 | 0 | 0.04811 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
23f8b606f19d4b24f6560dc302060c0e253b6a50 | 38 | py | Python | pkg/pkg/match/__init__.py | Restok/networks-course | c1c959b1a73b6bb301a4273bd9c1bb4c0a2fa4ff | [
"MIT"
] | 8 | 2022-01-03T23:54:30.000Z | 2022-03-18T11:04:18.000Z | pkg/pkg/match/__init__.py | Restok/networks-course | c1c959b1a73b6bb301a4273bd9c1bb4c0a2fa4ff | [
"MIT"
] | 17 | 2021-03-03T14:48:54.000Z | 2021-09-08T15:52:50.000Z | pkg/pkg/match/__init__.py | Restok/networks-course | c1c959b1a73b6bb301a4273bd9c1bb4c0a2fa4ff | [
"MIT"
] | 16 | 2022-01-04T17:54:57.000Z | 2022-03-29T00:34:14.000Z | from .qap import quadratic_assignment
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
23f9faf4a7cbbadbc96207288f57d955ea06a437 | 14,975 | py | Python | src/hyde/dataset/ground_network/ws/lib_ws_variables.py | c-hydro/hyde | 3a3ff92d442077ce353b071d5afe726fc5465201 | [
"MIT"
] | null | null | null | src/hyde/dataset/ground_network/ws/lib_ws_variables.py | c-hydro/hyde | 3a3ff92d442077ce353b071d5afe726fc5465201 | [
"MIT"
] | 18 | 2020-04-07T16:34:59.000Z | 2021-07-02T07:32:39.000Z | src/hyde/dataset/ground_network/ws/lib_ws_variables.py | c-hydro/fp-hyde | b0728397522aceebec3e7ff115aff160a10efede | [
"MIT"
] | null | null | null | """
Library Features:
Name: lib_ws_variables
Author(s): Fabio Delogu (fabio.delogu@cimafoundation.org)
Date: '20201102'
Version: '2.0.0'
"""
#######################################################################################
# Library
import logging
import numpy as np
from src.hyde.algorithm.geo.ground_network.lib_ws_geo import find_geo_index, deg_2_km
from src.hyde.algorithm.analysis.ground_network.lib_ws_analysis_interpolation_point import interp_point2grid
from src.hyde.algorithm.analysis.ground_network.lib_ws_analysis_regression_stepwisefit import stepwisefit
from src.hyde.dataset.ground_network.ws.lib_ws_ancillary_snow import compute_kernel
# Debug
import matplotlib.pylab as plt
logging.getLogger('matplotlib').setLevel(logging.WARNING)
#######################################################################################
# -------------------------------------------------------------------------------------
# Method to compute rain map
def compute_rain(var_data, var_geo_x, var_geo_y,
ref_geo_x, ref_geo_y, ref_geo_z, ref_epsg='4326', ref_no_data=-9999.0,
var_units='mm', var_missing_value=-9999.0, var_fill_value=-9999.0,
fx_nodata=-9999.0, fx_interp_name='idw',
fx_interp_radius_x=None, fx_interp_radius_y=None,
fx_cpu=1):
if var_units is None:
logging.warning(' ===> Rain variable unit is undefined; set to [mm]')
var_units = 'mm'
if var_units != 'mm':
logging.warning(' ===> Rain variable units in wrong format; expected in [mm], passed in [' +
var_units + ']')
if var_data.ndim > 1:
logging.error(' ===> Rain variable dimensions are not allowed')
raise IOError('Dimension must be equal to 1')
if ref_geo_x.ndim == 1 and ref_geo_y.ndim == 1:
grid_geo_x, grid_geo_y = np.meshgrid(ref_geo_x, ref_geo_y)
elif ref_geo_x.ndim == 2 and ref_geo_y.ndim == 2:
grid_geo_x = ref_geo_x
grid_geo_y = ref_geo_y
else:
logging.error(' ===> Reference dimensions in bad format')
raise IOError('Data format not allowed')
# Interpolate point(s) data to grid
grid_data = interp_point2grid(var_data, var_geo_x, var_geo_y, grid_geo_x, grid_geo_y, epsg_code=ref_epsg,
interp_no_data=fx_nodata, interp_method=fx_interp_name,
interp_radius_x=fx_interp_radius_x,
interp_radius_y=fx_interp_radius_y, n_cpu=fx_cpu)
# Filter data nan and over domain
grid_data[np.isnan(grid_data)] = var_missing_value
grid_data[np.isnan(ref_geo_z)] = var_fill_value
grid_data[ref_geo_z == ref_no_data] = np.nan
return grid_data
# -------------------------------------------------------------------------------------
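All six `compute_*` functions in this module end with the same three masking assignments. A minimal standalone sketch of that shared step (the `mask_grid` name and its defaults are illustrative, not part of the module):

```python
import numpy as np

def mask_grid(grid_data, ref_geo_z, ref_no_data=-9999.0,
              missing_value=-9999.0, fill_value=-9999.0):
    """Post-interpolation masking shared by the compute_* methods."""
    grid_data = np.array(grid_data, dtype=float)
    grid_data[np.isnan(grid_data)] = missing_value   # interpolation gaps
    grid_data[np.isnan(ref_geo_z)] = fill_value      # undefined reference cells
    grid_data[ref_geo_z == ref_no_data] = np.nan     # cells outside the domain
    return grid_data
```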
# -------------------------------------------------------------------------------------
# Method to compute air temperature map
def compute_air_temperature(var_data, var_geo_x, var_geo_y, var_geo_z,
ref_geo_x, ref_geo_y, ref_geo_z, ref_epsg='4326', ref_no_data=-9999.0,
var_units='C', var_missing_value=-9999.0, var_fill_value=-9999.0,
fx_nodata=-9999.0, fx_interp_name='idw',
fx_interp_radius_x=None, fx_interp_radius_y=None,
fx_cpu=1):
if var_units is None:
logging.warning(' ===> Air temperature variable unit is undefined; set to [C]')
var_units = 'C'
if var_units != 'C':
logging.warning(' ===> Air temperature variable units in wrong format; expected in [C], passed in [' +
var_units + ']')
if var_data.ndim > 1:
logging.error(' ===> Air temperature variable dimensions are not allowed')
raise IOError('Dimension must be equal to 1')
if ref_geo_x.ndim == 1 and ref_geo_y.ndim == 1:
grid_geo_x, grid_geo_y = np.meshgrid(ref_geo_x, ref_geo_y)
elif ref_geo_x.ndim == 2 and ref_geo_y.ndim == 2:
grid_geo_x = ref_geo_x
grid_geo_y = ref_geo_y
else:
logging.error(' ===> Reference dimensions in bad format')
raise IOError('Data format not allowed')
# Sort altitude(s)
var_index_sort = np.argsort(var_geo_z)
# Extract sorting value(s) from finite arrays
var_geo_x_sort = var_geo_x[var_index_sort]
var_geo_y_sort = var_geo_y[var_index_sort]
var_geo_z_sort = var_geo_z[var_index_sort]
var_data_sort = var_data[var_index_sort]
# Polyfit parameters and value(s) (--> linear regression)
var_poly_parameters = np.polyfit(var_geo_z_sort, var_data_sort, 1)
var_poly_values = np.polyval(var_poly_parameters, var_geo_z_sort)
# Define residual for point value(s)
var_data_res = var_data_sort - var_poly_values
# Interpolate point(s) data to grid
grid_data_res = interp_point2grid(var_data_res, var_geo_x_sort, var_geo_y_sort, grid_geo_x, grid_geo_y,
epsg_code=ref_epsg,
interp_no_data=fx_nodata, interp_method=fx_interp_name,
interp_radius_x=fx_interp_radius_x,
interp_radius_y=fx_interp_radius_y, n_cpu=fx_cpu)
# Interpolate polynomial parameters on z map
grid_poly_z = np.polyval(var_poly_parameters, ref_geo_z)
# Calculate temperature (using z regression and idw method(s))
grid_data = grid_poly_z + grid_data_res
# Filter data nan and over domain
grid_data[np.isnan(grid_data)] = var_missing_value
grid_data[np.isnan(ref_geo_z)] = var_fill_value
grid_data[ref_geo_z == ref_no_data] = np.nan
# Debug
# plt.figure()
# plt.imshow(grid_data)
# plt.colorbar()
# plt.clim([-10, 30])
# plt.show()
return grid_data
# -------------------------------------------------------------------------------------
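The detrending scheme used by `compute_air_temperature` can be illustrated with a toy elevation/temperature example (the station values below are made up for illustration): fit a linear lapse rate against station altitude, keep the residuals for spatial interpolation, and evaluate the fitted trend on the grid elevations.

```python
import numpy as np

# Synthetic stations following roughly a -6.5 C/km lapse rate
station_z = np.array([0.0, 500.0, 1000.0, 1500.0])
station_t = np.array([20.0, 16.8, 13.4, 10.1])

poly = np.polyfit(station_z, station_t, 1)         # [slope, intercept]
residuals = station_t - np.polyval(poly, station_z)

grid_z = np.array([[250.0, 750.0], [1250.0, 1750.0]])
grid_trend = np.polyval(poly, grid_z)              # lapse-rate component on the grid
# In compute_air_temperature the residuals are then spread over the grid with
# interp_point2grid (e.g. idw) and added back to grid_trend.
```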
# -------------------------------------------------------------------------------------
# Method to compute wind speed map
def compute_wind_speed(var_data, var_geo_x, var_geo_y,
ref_geo_x, ref_geo_y, ref_geo_z, ref_epsg='4326', ref_no_data=-9999.0,
var_units='m s-1', var_missing_value=-9999.0, var_fill_value=-9999.0,
fx_nodata=-9999.0, fx_interp_name='idw',
fx_interp_radius_x=None, fx_interp_radius_y=None,
fx_cpu=1):
if var_units is None:
logging.warning(' ===> Wind speed variable unit is undefined; set to [m s-1]')
var_units = 'm s-1'
if var_units != 'm s-1':
logging.warning(' ===> Wind speed variable units in wrong format; expected in [m s-1], passed in [' +
var_units + ']')
if var_data.ndim > 1:
logging.error(' ===> Wind speed variable dimensions are not allowed')
raise IOError('Dimension must be equal to 1')
if ref_geo_x.ndim == 1 and ref_geo_y.ndim == 1:
grid_geo_x, grid_geo_y = np.meshgrid(ref_geo_x, ref_geo_y)
elif ref_geo_x.ndim == 2 and ref_geo_y.ndim == 2:
grid_geo_x = ref_geo_x
grid_geo_y = ref_geo_y
else:
logging.error(' ===> Reference dimensions in bad format')
raise IOError('Data format not allowed')
# Interpolate point(s) data to grid
grid_data = interp_point2grid(var_data, var_geo_x, var_geo_y, grid_geo_x, grid_geo_y, epsg_code=ref_epsg,
interp_no_data=fx_nodata, interp_method=fx_interp_name,
interp_radius_x=fx_interp_radius_x,
interp_radius_y=fx_interp_radius_y, n_cpu=fx_cpu)
# Filter data nan and over domain
grid_data[np.isnan(grid_data)] = var_missing_value
grid_data[np.isnan(ref_geo_z)] = var_fill_value
grid_data[ref_geo_z == ref_no_data] = np.nan
# Debug
# plt.figure()
# plt.imshow(grid_data)
# plt.colorbar()
# plt.clim([0, 10])
# plt.show()
return grid_data
# -------------------------------------------------------------------------------------
# -------------------------------------------------------------------------------------
# Method to compute incoming radiation map
def compute_incoming_radiation(var_data, var_geo_x, var_geo_y,
ref_geo_x, ref_geo_y, ref_geo_z, ref_epsg='4326', ref_no_data=-9999.0,
var_units='W m-2', var_missing_value=-9999.0, var_fill_value=-9999.0,
fx_nodata=-9999.0, fx_interp_name='idw',
fx_interp_radius_x=None, fx_interp_radius_y=None,
fx_cpu=1):
if var_units is None:
logging.warning(' ===> Incoming radiation variable unit is undefined; set to [W m-2]')
var_units = 'W m-2'
if var_units != 'W m-2':
logging.warning(' ===> Incoming radiation variable units in wrong format; expected in [W m-2], passed in [' +
var_units + ']')
if var_data.ndim > 1:
logging.error(' ===> Incoming radiation variable dimensions are not allowed')
raise IOError('Dimension must be equal to 1')
if ref_geo_x.ndim == 1 and ref_geo_y.ndim == 1:
grid_geo_x, grid_geo_y = np.meshgrid(ref_geo_x, ref_geo_y)
elif ref_geo_x.ndim == 2 and ref_geo_y.ndim == 2:
grid_geo_x = ref_geo_x
grid_geo_y = ref_geo_y
else:
logging.error(' ===> Reference dimensions in bad format')
raise IOError('Data format not allowed')
# Interpolate point(s) data to grid
grid_data = interp_point2grid(var_data, var_geo_x, var_geo_y, grid_geo_x, grid_geo_y, epsg_code=ref_epsg,
interp_no_data=fx_nodata, interp_method=fx_interp_name,
interp_radius_x=fx_interp_radius_x,
interp_radius_y=fx_interp_radius_y, n_cpu=fx_cpu)
# Filter data nan and over domain
grid_data[np.isnan(grid_data)] = var_missing_value
grid_data[np.isnan(ref_geo_z)] = var_fill_value
grid_data[ref_geo_z == ref_no_data] = np.nan
# Debug
# plt.figure()
# plt.imshow(grid_data)
# plt.colorbar()
# plt.clim([-50, 1200])
# plt.show()
return grid_data
# -------------------------------------------------------------------------------------
# -------------------------------------------------------------------------------------
# Method to compute relative humidity map
def compute_relative_humidity(var_data, var_geo_x, var_geo_y,
ref_geo_x, ref_geo_y, ref_geo_z, ref_epsg='4326', ref_no_data=-9999.0,
var_units='%', var_missing_value=-9999.0, var_fill_value=-9999.0,
fx_nodata=-9999.0, fx_interp_name='idw',
fx_interp_radius_x=None, fx_interp_radius_y=None,
fx_cpu=1):
if var_units is None:
logging.warning(' ===> Relative humidity variable unit is undefined; set to [%]')
var_units = '%'
if var_units != '%':
logging.warning(' ===> Relative humidity variable units in wrong format; expected in [%], passed in [' +
var_units + ']')
if var_data.ndim > 1:
logging.error(' ===> Relative humidity variable dimensions are not allowed')
raise IOError('Dimension must be equal to 1')
if ref_geo_x.ndim == 1 and ref_geo_y.ndim == 1:
grid_geo_x, grid_geo_y = np.meshgrid(ref_geo_x, ref_geo_y)
elif ref_geo_x.ndim == 2 and ref_geo_y.ndim == 2:
grid_geo_x = ref_geo_x
grid_geo_y = ref_geo_y
else:
logging.error(' ===> Reference dimensions in bad format')
raise IOError('Data format not allowed')
# Interpolate point(s) data to grid
grid_data = interp_point2grid(var_data, var_geo_x, var_geo_y, grid_geo_x, grid_geo_y, epsg_code=ref_epsg,
interp_no_data=fx_nodata, interp_method=fx_interp_name,
interp_radius_x=fx_interp_radius_x,
interp_radius_y=fx_interp_radius_y, n_cpu=fx_cpu)
# Filter data nan and over domain
grid_data[np.isnan(grid_data)] = var_missing_value
grid_data[np.isnan(ref_geo_z)] = var_fill_value
grid_data[ref_geo_z == ref_no_data] = np.nan
# Debug
# plt.figure()
# plt.imshow(grid_data)
# plt.colorbar()
# plt.clim([0, 100])
# plt.show()
return grid_data
# -------------------------------------------------------------------------------------
# -------------------------------------------------------------------------------------
# Method to compute air pressure map
def compute_air_pressure(var_data, var_geo_x, var_geo_y,
ref_geo_x, ref_geo_y, ref_geo_z, ref_epsg='4326', ref_no_data=-9999.0,
var_units='hPa', var_missing_value=-9999.0, var_fill_value=-9999.0,
fx_nodata=-9999.0, fx_interp_name='idw',
fx_interp_radius_x=None, fx_interp_radius_y=None,
fx_cpu=1):
if var_units is None:
logging.warning(' ===> Air pressure variable unit is undefined; set to [hPa]')
var_units = 'hPa'
if var_units != 'hPa':
logging.warning(' ===> Air pressure variable units in wrong format; expected in [hPa], passed in [' +
var_units + ']')
if var_data.ndim > 1:
logging.error(' ===> Air pressure variable dimensions are not allowed')
raise IOError('Dimension must be equal to 1')
if ref_geo_x.ndim == 1 and ref_geo_y.ndim == 1:
grid_geo_x, grid_geo_y = np.meshgrid(ref_geo_x, ref_geo_y)
elif ref_geo_x.ndim == 2 and ref_geo_y.ndim == 2:
grid_geo_x = ref_geo_x
grid_geo_y = ref_geo_y
else:
logging.error(' ===> Reference dimensions in bad format')
raise IOError('Data format not allowed')
# Interpolate point(s) data to grid
grid_data = interp_point2grid(var_data, var_geo_x, var_geo_y, grid_geo_x, grid_geo_y, epsg_code=ref_epsg,
interp_no_data=fx_nodata, interp_method=fx_interp_name,
interp_radius_x=fx_interp_radius_x,
interp_radius_y=fx_interp_radius_y, n_cpu=fx_cpu)
# Filter data nan and over domain
grid_data[np.isnan(grid_data)] = var_missing_value
grid_data[np.isnan(ref_geo_z)] = var_fill_value
grid_data[ref_geo_z == ref_no_data] = np.nan
# Debug
# plt.figure()
# plt.imshow(grid_data)
# plt.colorbar()
# plt.clim([-10, 30])
# plt.show()
return grid_data
# -------------------------------------------------------------------------------------
| 42.908309 | 117 | 0.574691 | 2,056 | 14,975 | 3.829767 | 0.07928 | 0.060198 | 0.02667 | 0.02286 | 0.829947 | 0.783211 | 0.757557 | 0.725679 | 0.702946 | 0.702946 | 0 | 0.021307 | 0.257229 | 14,975 | 348 | 118 | 43.031609 | 0.686595 | 0.167479 | 0 | 0.692683 | 0 | 0 | 0.149709 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029268 | false | 0.029268 | 0.034146 | 0 | 0.092683 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f1a5ce8bdc8b607a8a268814b0f25927445fc020 | 75 | py | Python | tests/test_version.py | HazardDede/dictmentor | 9670c180b08c4bc957e90436701123653c17fd97 | [
"MIT"
] | null | null | null | tests/test_version.py | HazardDede/dictmentor | 9670c180b08c4bc957e90436701123653c17fd97 | [
"MIT"
] | null | null | null | tests/test_version.py | HazardDede/dictmentor | 9670c180b08c4bc957e90436701123653c17fd97 | [
"MIT"
] | null | null | null | import dictmentor
def test_version_for_smoke():
dictmentor.version()
| 12.5 | 29 | 0.773333 | 9 | 75 | 6.111111 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146667 | 75 | 5 | 30 | 15 | 0.859375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f1c820480801118becbd2de9b4fa54a5cb41f960 | 10,411 | py | Python | jmetal/core/test/test_quality_indicator.py | 12yuens2/jMetalPy | 6f54940cb205df831f5498e2eac2520b331ee4fd | [
"MIT"
] | 335 | 2017-03-16T19:44:50.000Z | 2022-03-30T08:50:46.000Z | jmetal/core/test/test_quality_indicator.py | 12yuens2/jMetalPy | 6f54940cb205df831f5498e2eac2520b331ee4fd | [
"MIT"
] | 85 | 2017-05-16T06:40:51.000Z | 2022-02-05T23:43:49.000Z | jmetal/core/test/test_quality_indicator.py | 12yuens2/jMetalPy | 6f54940cb205df831f5498e2eac2520b331ee4fd | [
"MIT"
] | 130 | 2017-02-08T01:19:15.000Z | 2022-03-25T08:32:08.000Z | import unittest
from os.path import dirname, join
from pathlib import Path
import numpy as np
from jmetal.core.quality_indicator import GenerationalDistance, InvertedGenerationalDistance, EpsilonIndicator, \
HyperVolume
class GenerationalDistanceTestCases(unittest.TestCase):
""" Class including unit tests for class GenerationalDistance
"""
def test_should_constructor_create_a_non_null_object(self) -> None:
indicator = GenerationalDistance([])
self.assertIsNotNone(indicator)
def test_get_name_return_the_right_value(self):
self.assertEqual("Generational Distance", GenerationalDistance([]).get_name())
def test_get_short_name_return_the_right_value(self):
self.assertEqual("GD", GenerationalDistance([]).get_short_name())
def test_case1(self):
"""
Case 1. Reference front: [[1.0, 1.0]], front: [[1.0, 1.0]]
Expected result: the distance to the nearest point of the reference front is 0.0
:return:
"""
indicator = GenerationalDistance(np.array([[1.0, 1.0]]))
front = np.array([[1.0, 1.0]])
result = indicator.compute(front)
self.assertEqual(0.0, result)
def test_case2(self):
"""
Case 2. Reference front: [[1.0, 1.0], [2.0, 2.0], front: [[1.0, 1.0]]
Expected result: the distance to the nearest point of the reference front is 0.0
:return:
"""
indicator = GenerationalDistance(np.array([[1.0, 1.0], [2.0, 2.0]]))
front = np.array([[1.0, 1.0]])
result = indicator.compute(front)
self.assertEqual(0.0, result)
def test_case3(self):
"""
Case 3. Reference front: [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]], front: [[1.0, 1.0, 1.0]]
Expected result: the distance to the nearest point of the reference front is 0.0. Example with three objectives
:return:
"""
indicator = GenerationalDistance(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]))
front = np.array([[1.0, 1.0, 1.0]])
result = indicator.compute(front)
self.assertEqual(0.0, result)
def test_case4(self):
"""
Case 4. reference front: [[1.0, 1.0], [2.0, 2.0]], front: [[1.5, 1.5]]
Expected result: the distance to the nearest point of the reference front is the euclidean distance to any of the
points of the reference front
:return:
"""
indicator = GenerationalDistance(np.array([[1.0, 1.0], [2.0, 2.0]]))
front = np.array([[1.5, 1.5]])
result = indicator.compute(front)
self.assertEqual(np.sqrt(pow(1.0 - 1.5, 2) + pow(1.0 - 1.5, 2)), result)
self.assertEqual(np.sqrt(pow(2.0 - 1.5, 2) + pow(2.0 - 1.5, 2)), result)
def test_case5(self):
"""
Case 5. reference front: [[1.0, 1.0], [2.1, 2.1]], front: [[1.5, 1.5]]
Expected result: the distance to the nearest point of the reference front is the euclidean distance
to the nearest point of the reference front ([1.0, 1.0])
:return:
"""
indicator = GenerationalDistance(np.array([[1.0, 1.0], [2.1, 2.1]]))
front = np.array([[1.5, 1.5]])
result = indicator.compute(front)
# Nearest reference point is [1.0, 1.0]; [2.1, 2.1] is farther from [1.5, 1.5]
self.assertEqual(np.sqrt(pow(1.0 - 1.5, 2) + pow(1.0 - 1.5, 2)), result)
def test_case6(self):
"""
Case 6. reference front: [[1.0, 1.0], [2.1, 2.1]], front: [[1.5, 1.5], [2.2, 2.2]]
Expected result: the distance to the nearest point of the reference front is the average of the sum of each point
of the front to the nearest point of the reference front
:return:
"""
indicator = GenerationalDistance(np.array([[1.0, 1.0], [2.1, 2.1]]))
front = np.array([[1.5, 1.5], [2.2, 2.2]])
result = indicator.compute(front)
distance_of_first_point = np.sqrt(pow(1.0 - 1.5, 2) + pow(1.0 - 1.5, 2))
distance_of_second_point = np.sqrt(pow(2.1 - 2.2, 2) + pow(2.1 - 2.2, 2))
self.assertEqual((distance_of_first_point + distance_of_second_point) / 2.0, result)
def test_case7(self):
"""
Case 7. reference front: [[1.0, 1.0], [2.1, 2.1]], front: [[1.5, 1.5], [2.2, 2.2], [1.9, 1.9]]
Expected result: the distance to the nearest point of the reference front is the sum of each point of the front to the
nearest point of the reference front
:return:
"""
indicator = GenerationalDistance(np.array([[1.0, 1.0], [2.1, 2.1]]))
front = np.array([[1.5, 1.5], [2.2, 2.2], [1.9, 1.9]])
result = indicator.compute(front)
distance_of_first_point = np.sqrt(pow(1.0 - 1.5, 2) + pow(1.0 - 1.5, 2))
distance_of_second_point = np.sqrt(pow(2.1 - 2.2, 2) + pow(2.1 - 2.2, 2))
distance_of_third_point = np.sqrt(pow(2.1 - 1.9, 2) + pow(2.1 - 1.9, 2))
self.assertEqual((distance_of_first_point + distance_of_second_point + distance_of_third_point) / 3.0, result)
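For reference, the metric exercised by the cases above can be written in a few lines of numpy (a hedged sketch, not the jMetalPy implementation): the mean Euclidean distance from each front point to its nearest reference-front point.

```python
import numpy as np

def generational_distance(front, reference):
    # pairwise distances: front point i vs reference point j
    dists = np.sqrt(((front[:, None, :] - reference[None, :, :]) ** 2).sum(axis=-1))
    return dists.min(axis=1).mean()
```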
class InvertedGenerationalDistanceTestCases(unittest.TestCase):
""" Class including unit tests for class InvertedGenerationalDistance
"""
def test_should_constructor_create_a_non_null_object(self) -> None:
indicator = InvertedGenerationalDistance([])
self.assertIsNotNone(indicator)
def test_get_name_return_the_right_value(self):
self.assertEqual("Inverted Generational Distance", InvertedGenerationalDistance([]).get_name())
def test_get_short_name_return_the_right_value(self):
self.assertEqual("IGD", InvertedGenerationalDistance([]).get_short_name())
def test_case1(self):
"""
Case 1. Reference front: [[1.0, 1.0]], front: [[1.0, 1.0]]
Expected result = 0.0
Comment: simplest case
:return:
"""
indicator = InvertedGenerationalDistance(np.array([[1.0, 1.0]]))
front = np.array([[1.0, 1.0]])
result = indicator.compute(front)
self.assertEqual(0.0, result)
def test_case2(self):
"""
Case 2. Reference front: [[1.0, 1.0], [2.0, 2.0], front: [[1.0, 1.0]]
Expected result: average of the sum of the distances of the points of the reference front to the front
:return:
"""
indicator = InvertedGenerationalDistance(np.array([[1.0, 1.0], [2.0, 2.0]]))
front = np.array([[1.0, 1.0]])
result = indicator.compute(front)
distance_of_first_point = np.sqrt(pow(1.0 - 1.0, 2) + pow(1.0 - 1.0, 2))
distance_of_second_point = np.sqrt(pow(2.0 - 1.0, 2) + pow(2.0 - 1.0, 2))
self.assertEqual((distance_of_first_point + distance_of_second_point) / 2.0, result)
def test_case3(self):
"""
Case 3. Reference front: [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]], front: [[1.0, 1.0, 1.0]]
Expected result: average of the sum of the distances of the points of the reference front to the front.
Example with three objectives
:return:
"""
indicator = InvertedGenerationalDistance(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]))
front = np.array([[1.0, 1.0, 1.0]])
result = indicator.compute(front)
distance_of_first_point = np.sqrt(pow(1.0 - 1.0, 2) + pow(1.0 - 1.0, 2) + pow(1.0 - 1.0, 2))
distance_of_second_point = np.sqrt(pow(2.0 - 1.0, 2) + pow(2.0 - 1.0, 2) + pow(2.0 - 1.0, 2))
self.assertEqual((distance_of_first_point + distance_of_second_point) / 2.0, result)
def test_case4(self):
"""
Case 4. reference front: [[1.0, 1.0], [2.1, 2.1]], front: [[1.5, 1.5], [2.2, 2.2]]
Expected result: average of the sum of the distances of the points of the reference front to the front.
Example with three objectives
:return:
"""
indicator = InvertedGenerationalDistance(np.array([[1.0, 1.0], [2.1, 2.1]]))
front = np.array([[1.5, 1.5], [2.2, 2.2]])
result = indicator.compute(front)
distance_of_first_point = np.sqrt(pow(1.0 - 1.5, 2) + pow(1.0 - 1.5, 2))
distance_of_second_point = np.sqrt(pow(2.1 - 2.2, 2) + pow(2.1 - 2.2, 2))
self.assertEqual((distance_of_first_point + distance_of_second_point) / 2.0, result)
def test_case5(self):
"""
Case 5. reference front: [[1.0, 1.0], [2.1, 2.1]], front: [[1.5, 1.5], [2.2, 2.2], [1.9, 1.9]]
Expected result: average of the sum of the distances of the points of the reference front to the front.
Example with three objectives
:return:
"""
indicator = InvertedGenerationalDistance(np.array([[1.0, 1.0], [2.0, 2.0]]))
front = np.array([[1.5, 1.5], [2.2, 2.2], [1.9, 1.9]])
result = indicator.compute(front)
distance_of_first_point = np.sqrt(pow(1.0 - 1.5, 2) + pow(1.0 - 1.5, 2))
distance_of_second_point = np.sqrt(pow(2.0 - 1.9, 2) + pow(2.0 - 1.9, 2))
self.assertEqual((distance_of_first_point + distance_of_second_point) / 2.0, result)
class EpsilonIndicatorTestCases(unittest.TestCase):
""" Class including unit tests for class EpsilonIndicator
"""
def test_should_constructor_create_a_non_null_object(self) -> None:
indicator = EpsilonIndicator(np.array([[1.0, 1.0], [2.0, 2.0]]))
self.assertIsNotNone(indicator)
class HyperVolumeTestCases(unittest.TestCase):
def setUp(self):
self.file_path = dirname(join(dirname(__file__)))
def test_should_hypervolume_return_5_0(self):
reference_point = [2, 2, 2]
front = np.array([[1, 0, 1], [0, 1, 0]])
hv = HyperVolume(reference_point)
value = hv.compute(front)
self.assertEqual(5.0, value)
def test_should_hypervolume_return_the_correct_value_when_applied_to_the_ZDT1_reference_front(self):
filename = 'jmetal/core/test/ZDT1.pf'
front = []
if Path(filename).is_file():
with open(filename) as file:
for line in file:
vector = [float(x) for x in line.split()]
front.append(vector)
else:
self.fail('Reference front file not found: ' + filename)
reference_point = [1, 1]
hv = HyperVolume(reference_point)
value = hv.compute(np.array(front))
self.assertAlmostEqual(0.666, value, delta=0.001)
if __name__ == '__main__':
unittest.main()
| 37.44964 | 126 | 0.601095 | 1,581 | 10,411 | 3.844402 | 0.079696 | 0.037512 | 0.032083 | 0.03488 | 0.815893 | 0.804541 | 0.794669 | 0.779368 | 0.75617 | 0.734781 | 0 | 0.078109 | 0.246182 | 10,411 | 277 | 127 | 37.584838 | 0.696356 | 0.258573 | 0 | 0.579365 | 0 | 0 | 0.012974 | 0.003348 | 0 | 0 | 0 | 0 | 0.18254 | 1 | 0.174603 | false | 0 | 0.039683 | 0 | 0.246032 | 0.007937 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7b12cc3c06a6db3d6f780ffd3d555454b8a165a4 | 27 | py | Python | devilry/devilry_cradmin/devilry_multiselect2/__init__.py | aless80/devilry-django | 416c262e75170d5662542f15e2d7fecf5ab84730 | [
"BSD-3-Clause"
] | 29 | 2015-01-18T22:56:23.000Z | 2020-11-10T21:28:27.000Z | devilry/devilry_cradmin/devilry_multiselect2/__init__.py | aless80/devilry-django | 416c262e75170d5662542f15e2d7fecf5ab84730 | [
"BSD-3-Clause"
] | 786 | 2015-01-06T16:10:18.000Z | 2022-03-16T11:10:50.000Z | devilry/devilry_cradmin/devilry_multiselect2/__init__.py | aless80/devilry-django | 416c262e75170d5662542f15e2d7fecf5ab84730 | [
"BSD-3-Clause"
] | 15 | 2015-04-06T06:18:43.000Z | 2021-02-24T12:28:30.000Z | from . import user # noqa
| 13.5 | 26 | 0.666667 | 4 | 27 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 27 | 1 | 27 | 27 | 0.9 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b2ea04745cc3bc79b0fd4ed389982038a845237 | 34 | py | Python | src/__init__.py | syakoo/azfunc-extensions | 4512c2a835399203a23689310ecea0e7605255b1 | [
"MIT"
] | null | null | null | src/__init__.py | syakoo/azfunc-extensions | 4512c2a835399203a23689310ecea0e7605255b1 | [
"MIT"
] | null | null | null | src/__init__.py | syakoo/azfunc-extensions | 4512c2a835399203a23689310ecea0e7605255b1 | [
"MIT"
] | null | null | null | from .doc_dc import dc2doc, doc2dc | 34 | 34 | 0.823529 | 6 | 34 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.117647 | 34 | 1 | 34 | 34 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9e5a219bb1ace659a25bd5b7ed008ac300a2404c | 49 | py | Python | envs/gym-target/gym_target/envs/__init__.py | bcaramiaux/humane-methods | d0ecfea8e348721e91dd36cf2a17e7868efd48ae | [
"MIT"
] | null | null | null | envs/gym-target/gym_target/envs/__init__.py | bcaramiaux/humane-methods | d0ecfea8e348721e91dd36cf2a17e7868efd48ae | [
"MIT"
] | null | null | null | envs/gym-target/gym_target/envs/__init__.py | bcaramiaux/humane-methods | d0ecfea8e348721e91dd36cf2a17e7868efd48ae | [
"MIT"
] | 1 | 2020-06-02T10:57:54.000Z | 2020-06-02T10:57:54.000Z | from gym_target.envs.target_env import TargetEnv
| 24.5 | 48 | 0.877551 | 8 | 49 | 5.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 1 | 49 | 49 | 0.911111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b4ccefbf39b83c8791025ccf4ab70fac9fc2f17 | 26 | py | Python | dolphindb_numpy/compat/__init__.py | jiajiaxu123/Orca | e86189e70c1d0387816bb98b8047a6232fbda9df | [
"Apache-2.0"
] | 20 | 2019-12-02T11:49:12.000Z | 2021-12-24T19:34:32.000Z | dolphindb_numpy/compat/__init__.py | jiajiaxu123/Orca | e86189e70c1d0387816bb98b8047a6232fbda9df | [
"Apache-2.0"
] | null | null | null | dolphindb_numpy/compat/__init__.py | jiajiaxu123/Orca | e86189e70c1d0387816bb98b8047a6232fbda9df | [
"Apache-2.0"
] | 5 | 2019-12-02T12:16:22.000Z | 2021-10-22T02:27:47.000Z | from numpy.compat import * | 26 | 26 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b92cdbb6ab0b02525957c227f219ea6c79d4700 | 208 | py | Python | src/atomate2/vasp/schemas/calc_types/__init__.py | Zhuoying/atomate2 | 4501c8ff2a72243dee51afb17d93ecff426b3e8c | [
"BSD-3-Clause-LBNL"
] | 14 | 2021-09-24T05:18:02.000Z | 2022-03-31T23:12:47.000Z | src/atomate2/vasp/schemas/calc_types/__init__.py | Zhuoying/atomate2 | 4501c8ff2a72243dee51afb17d93ecff426b3e8c | [
"BSD-3-Clause-LBNL"
] | 83 | 2021-11-02T17:19:57.000Z | 2022-03-31T17:27:00.000Z | src/atomate2/vasp/schemas/calc_types/__init__.py | Zhuoying/atomate2 | 4501c8ff2a72243dee51afb17d93ecff426b3e8c | [
"BSD-3-Clause-LBNL"
] | 11 | 2021-11-19T09:50:45.000Z | 2022-03-31T05:56:39.000Z | """Module defining vasp calculation types."""
from atomate2.vasp.schemas.calc_types.enums import CalcType, RunType, TaskType
from atomate2.vasp.schemas.calc_types.utils import calc_type, run_type, task_type
| 41.6 | 81 | 0.822115 | 30 | 208 | 5.533333 | 0.6 | 0.144578 | 0.192771 | 0.277108 | 0.385542 | 0.385542 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.086538 | 208 | 4 | 82 | 52 | 0.863158 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c872d17991912b9049464c5a8dbef8981cde04da | 9,596 | py | Python | simulated_fqi/mountaincar_experiments.py | bee-hive/nested-policy-rl | 56b0be37ed814265cb3ef26ea0a1a62b5cd7f05c | [
"MIT"
] | 1 | 2022-01-28T16:52:40.000Z | 2022-01-28T16:52:40.000Z | simulated_fqi/mountaincar_experiments.py | bee-hive/nested-policy-rl | 56b0be37ed814265cb3ef26ea0a1a62b5cd7f05c | [
"MIT"
] | null | null | null | simulated_fqi/mountaincar_experiments.py | bee-hive/nested-policy-rl | 56b0be37ed814265cb3ef26ea0a1a62b5cd7f05c | [
"MIT"
] | null | null | null | import configargparse
import torch
import torch.optim as optim
import sys
sys.path.append('../')
from environments import MountainCarEnv, Continuous_MountainCarEnv
from models.agents import NFQAgent
from models.networks import NFQNetwork, ContrastiveNFQNetwork
from util import get_logger, close_logger, load_models, make_reproducible, save_models
import matplotlib.pyplot as plt
import numpy as np
import itertools
import seaborn as sns
import tqdm
# def generate_data(init_experience=400, dataset='train'):
# env_bg = Continuous_MountainCarEnv(group=0)
# env_fg = Continuous_MountainCarEnv(group=1)
# bg_rollouts = []
# fg_rollouts = []
# if init_experience > 0:
# for _ in range(init_experience):
# rollout_bg, episode_cost = env_bg.generate_rollout(
# None, render=False, group=0, dataset=dataset
# )
# rollout_fg, episode_cost = env_fg.generate_rollout(
# None, render=False, group=1, dataset=dataset
# )
# bg_rollouts.extend(rollout_bg)
# fg_rollouts.extend(rollout_fg)
# bg_rollouts.extend(fg_rollouts)
# all_rollouts = bg_rollouts.copy()
# return all_rollouts, env_bg, env_fg
#
# is_contrastive=False
# epoch = 100
# evaluations = 10
# verbose=True
# print("Generating Data")
# train_rollouts, train_env_bg, train_env_fg = generate_data(dataset='train')
# test_rollouts, eval_env_bg, eval_env_fg = generate_data(dataset='test')
#
# nfq_net = ContrastiveNFQNetwork(
# state_dim=train_env_bg.state_dim, is_contrastive=is_contrastive
# )
# optimizer = optim.Adam(nfq_net.parameters(), lr=1e-1)
#
# nfq_agent = NFQAgent(nfq_net, optimizer)
#
# # NFQ Main loop
# bg_success_queue = [0] * 3
# fg_success_queue = [0] * 3
# epochs_fg = 0
# eval_fg = 0
# for k, epoch in enumerate(tqdm.tqdm(range(epoch + 1))):
# state_action_b, target_q_values, groups = nfq_agent.generate_pattern_set(
# train_rollouts
# )
# X = state_action_b
# train_groups = groups
#
# if not nfq_net.freeze_shared:
# loss = nfq_agent.train((state_action_b, target_q_values, groups))
#
# eval_episode_length_fg, eval_success_fg, eval_episode_cost_fg = 0, 0, 0
# if nfq_net.freeze_shared:
# eval_fg += 1
#
# if eval_fg > 50:
# loss = nfq_agent.train((state_action_b, target_q_values, groups))
#
# (eval_episode_length_bg, eval_success_bg, eval_episode_cost_bg) = nfq_agent.evaluate_car(eval_env_bg, render=False)
# (eval_episode_length_fg,eval_success_fg, eval_episode_cost_fg) = nfq_agent.evaluate_car(eval_env_fg, render=False)
#
# bg_success_queue = bg_success_queue[1:]
# bg_success_queue.append(1 if eval_success_bg else 0)
#
# fg_success_queue = fg_success_queue[1:]
# fg_success_queue.append(1 if eval_success_fg else 0)
#
# printed_bg = False
# printed_fg = False
#
# if sum(bg_success_queue) == 3 and not nfq_net.freeze_shared == True:
# if epochs_fg == 0:
# epochs_fg = epoch
# printed_bg = True
# nfq_net.freeze_shared = True
# if verbose:
# print("FREEZING SHARED")
# if is_contrastive:
# for param in nfq_net.layers_shared.parameters():
# param.requires_grad = False
# for param in nfq_net.layers_last_shared.parameters():
# param.requires_grad = False
# for param in nfq_net.layers_fg.parameters():
# param.requires_grad = True
# for param in nfq_net.layers_last_fg.parameters():
# param.requires_grad = True
# else:
# for param in nfq_net.layers_fg.parameters():
# param.requires_grad = False
# for param in nfq_net.layers_last_fg.parameters():
# param.requires_grad = False
#
# optimizer = optim.Adam(
# itertools.chain(
# nfq_net.layers_fg.parameters(),
# nfq_net.layers_last_fg.parameters(),
# ),
# lr=1e-1,
# )
# nfq_agent._optimizer = optimizer
#
#
# if sum(fg_success_queue) == 3:
# printed_fg = True
# break
#
# eval_env_bg.step_number = 0
# eval_env_fg.step_number = 0
#
# eval_env_bg.max_steps = 1000
# eval_env_fg.max_steps = 1000
#
# performance_fg = []
# performance_bg = []
# num_steps_bg = []
# num_steps_fg = []
# total = 0
def generate_data(init_experience=50, bg_only=False, continuous=False, agent=None):
    if continuous:
        env_bg = Continuous_MountainCarEnv(group=0)
        env_fg = Continuous_MountainCarEnv(group=1)
    else:
        env_bg = MountainCarEnv(group=0)
        env_fg = MountainCarEnv(group=1)
    bg_rollouts = []
    fg_rollouts = []
    if init_experience > 0:
        for _ in range(init_experience):
            rollout_bg, episode_cost = env_bg.generate_rollout(
                agent, render=False, group=0
            )
            bg_rollouts.extend(rollout_bg)
            if not bg_only:
                rollout_fg, episode_cost = env_fg.generate_rollout(
                    agent, render=False, group=1
                )
                fg_rollouts.extend(rollout_fg)
    bg_rollouts.extend(fg_rollouts)
    all_rollouts = bg_rollouts.copy()
    return all_rollouts, env_bg, env_fg
train_rollouts, train_env_bg, train_env_fg = generate_data(bg_only=True, continuous=False)
test_rollouts, eval_env_bg, eval_env_fg = generate_data(bg_only=True, continuous=False)
is_contrastive = False
epochs = 100
evaluations = 1
nfq_net = ContrastiveNFQNetwork(
state_dim=train_env_bg.state_dim, is_contrastive=is_contrastive, deep=True
)
optimizer = optim.Adam(nfq_net.parameters(), lr=1e-1)
nfq_agent = NFQAgent(nfq_net, optimizer)
# NFQ Main loop
bg_success_queue = [0] * 3
fg_success_queue = [0] * 3
epochs_fg = 0
eval_fg = 0
train_rewards = [r[2] for r in train_rollouts]
test_rewards = [r[2] for r in test_rollouts]
print("Average Train Reward: " + str(np.average(train_rewards)) + " Average Test Reward: " + str(np.average(test_rewards)))
print("Epochs: " + str(epochs))
for k, ep in enumerate(tqdm.tqdm(range(epochs + 1))):
    state_action_b, target_q_values, groups = nfq_agent.generate_pattern_set(train_rollouts)
    if not nfq_net.freeze_shared:
        loss = nfq_agent.train((state_action_b, target_q_values, groups))
    eval_episode_length_fg, eval_success_fg, eval_episode_cost_fg = 0, 0, 0
    if nfq_net.freeze_shared:
        eval_fg += 1
        if eval_fg > 50:
            loss = nfq_agent.train((state_action_b, target_q_values, groups))
    (eval_episode_length_bg, eval_success_bg, eval_episode_cost_bg) = nfq_agent.evaluate_car(eval_env_bg, render=False)
    # (eval_episode_length_fg, eval_success_fg, eval_episode_cost_fg) = nfq_agent.evaluate_car(eval_env_fg, render=False)
    bg_success_queue = bg_success_queue[1:]
    bg_success_queue.append(1 if eval_success_bg else 0)
    # fg_success_queue = fg_success_queue[1:]
    # fg_success_queue.append(1 if eval_success_fg else 0)
    if sum(bg_success_queue) == 3 and not nfq_net.freeze_shared:
        if epochs_fg == 0:
            epochs_fg = ep
        nfq_net.freeze_shared = True
        print("FREEZING SHARED")
        if is_contrastive:
            # Freeze the shared layers and unfreeze the foreground-specific layers.
            for param in nfq_net.layers_shared.parameters():
                param.requires_grad = False
            for param in nfq_net.layers_last_shared.parameters():
                param.requires_grad = False
            for param in nfq_net.layers_fg.parameters():
                param.requires_grad = True
            for param in nfq_net.layers_last_fg.parameters():
                param.requires_grad = True
        else:
            for param in nfq_net.layers_fg.parameters():
                param.requires_grad = False
            for param in nfq_net.layers_last_fg.parameters():
                param.requires_grad = False
        optimizer = optim.Adam(
            itertools.chain(
                nfq_net.layers_fg.parameters(),
                nfq_net.layers_last_fg.parameters(),
            ),
            lr=1e-1,
        )
        nfq_agent._optimizer = optimizer
        break
    if sum(fg_success_queue) == 3:
        break
    train_rollouts, train_env_bg, train_env_fg = generate_data(bg_only=True, continuous=False, agent=nfq_agent)
    test_rollouts, eval_env_bg, eval_env_fg = generate_data(bg_only=True, continuous=False, agent=nfq_agent)
    train_rewards = [r[2] for r in train_rollouts]
    test_rewards = [r[2] for r in test_rollouts]
    print("Average Train Reward: " + str(np.average(train_rewards)) + " Average Test Reward: " + str(np.average(test_rewards)))
    if ep % 10 == 0:
        for it in range(evaluations):
            (
                eval_episode_length_bg,
                eval_success_bg,
                eval_episode_cost_bg,
            ) = nfq_agent.evaluate_car(eval_env_bg, render=True)
            print(eval_episode_length_bg, eval_success_bg, eval_episode_cost_bg)
train_env_bg.close()
eval_env_bg.close() | 36.625954 | 127 | 0.664339 | 1,290 | 9,596 | 4.604651 | 0.109302 | 0.030303 | 0.032323 | 0.026263 | 0.904714 | 0.871886 | 0.847306 | 0.847306 | 0.833838 | 0.833838 | 0 | 0.013221 | 0.243331 | 9,596 | 262 | 128 | 36.625954 | 0.804848 | 0.416945 | 0 | 0.396825 | 0 | 0 | 0.02137 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007937 | false | 0 | 0.206349 | 0 | 0.222222 | 0.039683 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c8b36ea0ad22cc16e0e702ccc9fb16e600d55591 | 3,847 | py | Python | chebpy/sph/fbmc.py | Hadrien-Montanelli/chebpy | c22f1f13b42b3c80f2e34be6e7136ef2d0277971 | [
"MIT"
] | 1 | 2020-12-02T10:17:26.000Z | 2020-12-02T10:17:26.000Z | chebpy/sph/fbmc.py | Hadrien-Montanelli/chebpy | c22f1f13b42b3c80f2e34be6e7136ef2d0277971 | [
"MIT"
] | null | null | null | chebpy/sph/fbmc.py | Hadrien-Montanelli/chebpy | c22f1f13b42b3c80f2e34be6e7136ef2d0277971 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Dec 15 13:51:33 2020
@author: montanelli
"""
# Standard imports:
import numpy as np
from scipy.linalg import toeplitz
def fbmc(F):
    """Enforce the BMC-I symmetry conditions for the DFS coefficients F."""
    # Get the dimension:
    n = len(F)
    Fbmc = F.copy()

    # %% Step 1: enforce f_{j,k} = -f_{-j,k} for odd k.

    # Extract odd modes in k and all modes in j:
    idx_k = 2*np.arange(int(n/2)) + 1
    idx_j = np.arange(n)
    idx_k, idx_j = np.meshgrid(idx_k, idx_j)
    Fodd = F[idx_j, idx_k]

    # Matrices:
    I = np.eye(int(n/2)+1, n)
    col = np.zeros(int(n/2)+1)
    col[1] = 1
    row = np.zeros(n)
    J = toeplitz(col, row)
    A = I + np.fliplr(J)
    A[-1, int(n/2)] = 1

    # Minimum Frobenius-norm solution:
    C = A.T @ np.linalg.inv(A @ A.T) @ (A @ Fodd)
    Fbmc[idx_j, idx_k] = F[idx_j, idx_k] - C

    # %% Step 2: enforce f_{j,k} = f_{-j,k} for even k.

    # Extract even modes in k and all modes in j, and enforce pole condition:
    idx_k = 2*np.arange(int(n/2))
    idx_j = np.arange(n)
    idx_k, idx_j = np.meshgrid(idx_k, idx_j)
    Feven = F[idx_j, idx_k]

    # Matrices:
    I = np.eye(int(n/2), n)
    col = np.zeros(int(n/2))
    col[1] = 1
    row = np.zeros(n)
    J = toeplitz(col, row)
    A = I - np.fliplr(J)
    A[0, :] = 1
    P = np.zeros([1, n])
    P[0, :] = (-1)**np.arange(n)
    A = np.concatenate((P, A), axis=0)

    # Minimum Frobenius-norm solution:
    C = A.T @ np.linalg.inv(A @ A.T) @ (A @ Feven)
    Fbmc[idx_j, idx_k] = F[idx_j, idx_k] - C

    # %% Step 3: enforce Re(f_{j,k}) = -Re(f_{j,-k}) for odd k.

    # Extract odd modes in k and all modes in j:
    idx_k = 2*np.arange(int(n/2)) + 1
    idx_j = np.arange(n)
    idx_k, idx_j = np.meshgrid(idx_k, idx_j)
    Fodd = np.real(Fbmc[idx_j, idx_k])

    # Matrices:
    I = np.eye(int(n/4), int(n/2))
    B = I + np.fliplr(I)

    # Minimum Frobenius-norm solution:
    C = (Fodd @ B.T) @ np.linalg.inv(B @ B.T) @ B
    Fbmc[idx_j, idx_k] = Fbmc[idx_j, idx_k] - C

    # %% Step 4: enforce Re(f_{j,k}) = Re(f_{j,-k}) for even k.

    # Extract even modes in k (but exclude k=-n/2, 0) and all modes in j:
    idx_k = 2*np.arange(1, int(n/4))
    idx_k = np.concatenate((idx_k, 2*np.arange(int(n/4)+1, int(n/2))))
    idx_j = np.arange(n)
    idx_k, idx_j = np.meshgrid(idx_k, idx_j)
    Feven = np.real(Fbmc[idx_j, idx_k])

    # Matrices:
    I = np.eye(int(n/4)-1, int(n/2)-2)
    B = I - np.fliplr(I)

    # Minimum Frobenius-norm solution:
    C = (Feven @ B.T) @ np.linalg.inv(B @ B.T) @ B
    Fbmc[idx_j, idx_k] = Fbmc[idx_j, idx_k] - C

    # %% Step 5: enforce Im(f_{j,k}) = Im(f_{j,-k}) for odd k.

    # Extract odd modes in k and all modes in j:
    idx_k = 2*np.arange(int(n/2)) + 1
    idx_j = np.arange(n)
    idx_k, idx_j = np.meshgrid(idx_k, idx_j)
    Fodd = np.imag(Fbmc[idx_j, idx_k])

    # Matrices:
    I = np.eye(int(n/4), int(n/2))
    B = I - np.fliplr(I)

    # Minimum Frobenius-norm solution:
    C = (Fodd @ B.T) @ np.linalg.inv(B @ B.T) @ B
    Fbmc[idx_j, idx_k] = Fbmc[idx_j, idx_k] - 1j*C

    # %% Step 6: enforce Im(f_{j,k}) = -Im(f_{j,-k}) for even k.

    # Extract even modes in k and all modes in j:
    idx_k = 2*np.arange(int(n/2))
    idx_j = np.arange(n)
    idx_k, idx_j = np.meshgrid(idx_k, idx_j)
    Feven = np.imag(Fbmc[idx_j, idx_k])

    # Matrices:
    I = np.eye(int(n/4)+1, int(n/2))
    col = np.zeros(int(n/4)+1)
    col[1] = 1
    row = np.zeros(int(n/2))
    J = toeplitz(col, row)
    B = I + np.fliplr(J)
    B[B==2] = 1

    # Minimum Frobenius-norm solution:
    C = (Feven @ B.T) @ np.linalg.inv(B @ B.T) @ B
    Fbmc[idx_j, idx_k] = Fbmc[idx_j, idx_k] - 1j*C
return Fbmc | 28.708955 | 78 | 0.541981 | 740 | 3,847 | 2.701351 | 0.12973 | 0.076038 | 0.057529 | 0.072036 | 0.803902 | 0.794397 | 0.794397 | 0.758379 | 0.748374 | 0.748374 | 0 | 0.02921 | 0.279179 | 3,847 | 134 | 79 | 28.708955 | 0.69167 | 0.291916 | 0 | 0.534247 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013699 | false | 0 | 0.027397 | 0 | 0.054795 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
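Each step in `fbmc` removes the component of the coefficient matrix that violates a linear symmetry constraint, using the minimum Frobenius-norm correction C = Aᵀ(AAᵀ)⁻¹(AF). A tiny self-contained sketch of that projection (the matrix sizes and random data here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # two linear constraints on length-5 columns
F = rng.standard_normal((5, 3))   # three coefficient columns to correct

# Minimum Frobenius-norm correction, as used in each step of fbmc:
C = A.T @ np.linalg.inv(A @ A.T) @ (A @ F)
F_fixed = F - C

print(np.abs(A @ F_fixed).max())  # numerically zero
```

Subtracting C projects each column of F onto the null space of A, so the corrected matrix satisfies the constraints exactly while staying as close as possible to the original.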
c8e0dc883af65d833177a6305055724af090f2bc | 1,501 | py | Python | ladim_plugins/release/farms.py | pnsaevik/ladim_plugins | 2097a451346e2517e50f735be8b31862f24e64e2 | [
"MIT"
] | null | null | null | ladim_plugins/release/farms.py | pnsaevik/ladim_plugins | 2097a451346e2517e50f735be8b31862f24e64e2 | [
"MIT"
] | null | null | null | ladim_plugins/release/farms.py | pnsaevik/ladim_plugins | 2097a451346e2517e50f735be8b31862f24e64e2 | [
"MIT"
] | 1 | 2020-07-09T08:18:36.000Z | 2020-07-09T08:18:36.000Z | def polygon(loknr):
import re
import numpy as np
import requests
wfs_url = 'https://ogc.fiskeridir.no/wfs.ashx'
payload = dict(
service='WFS',
version='2.0.0',
request='GetFeature',
typeName='layer_203',
maxFeatures=5000000,
srsName='EPSG:4258'
)
r = requests.get(wfs_url, params=payload)
members = re.findall(r'<wfs:member>(.*?)</wfs:member>', r.text, re.DOTALL)
member = next(m for m in members if f'<ms:loknr>{loknr}</ms:loknr>' in m)
pos_list = re.search(r'<gml:posList.*?>(.*?)</gml:posList>', member,
re.DOTALL).groups()[0]
lat, lon = np.array(pos_list.strip().split(" ")).astype('float').reshape(
(-1, 2)).T
return lon[:-1], lat[:-1]
def location(loknr):
import re
import numpy as np
import requests
wfs_url = 'https://ogc.fiskeridir.no/wfs.ashx'
payload = dict(
service='WFS',
version='2.0.0',
request='GetFeature',
typeName='layer_262',
maxFeatures=5000000,
srsName='EPSG:4258'
)
r = requests.get(wfs_url, params=payload)
members = re.findall(r'<wfs:member>(.*?)</wfs:member>', r.text, re.DOTALL)
member = next(m for m in members if f'<ms:loknr>{loknr}</ms:loknr>' in m)
pos_list = re.search(r'<gml:pos.*?>(.*?)</gml:pos>', member,
re.DOTALL).groups()[0]
lat, lon = np.array(pos_list.strip().split(" ")).astype('float')
return lon, lat
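The WFS download itself needs network access, but the XML-parsing core that both helpers share can be checked offline against a small synthetic response (the member payloads and the site number 42 below are invented for illustration, not real Fiskeridirektoratet data):

```python
import re

import numpy as np

# Minimal fake WFS response with two members; only loknr 42 matches.
text = (
    "<wfs:member><ms:loknr>7</ms:loknr>"
    "<gml:posList>60.0 5.0 60.1 5.1</gml:posList></wfs:member>"
    "<wfs:member><ms:loknr>42</ms:loknr>"
    "<gml:posList>61.0 6.0 61.2 6.2 61.0 6.0</gml:posList></wfs:member>"
)
members = re.findall(r'<wfs:member>(.*?)</wfs:member>', text, re.DOTALL)
member = next(m for m in members if '<ms:loknr>42</ms:loknr>' in m)
pos_list = re.search(r'<gml:posList.*?>(.*?)</gml:posList>', member,
                     re.DOTALL).groups()[0]
# Coordinates arrive as "lat lon lat lon ..."; reshape to pairs and transpose.
lat, lon = np.array(pos_list.strip().split(" ")).astype('float').reshape((-1, 2)).T
print(lon, lat)  # [6.  6.2 6. ] [61.  61.2 61. ]
```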
| 31.270833 | 78 | 0.56962 | 204 | 1,501 | 4.142157 | 0.323529 | 0.028402 | 0.030769 | 0.04497 | 0.894675 | 0.894675 | 0.894675 | 0.894675 | 0.894675 | 0.894675 | 0 | 0.035367 | 0.246502 | 1,501 | 47 | 79 | 31.93617 | 0.71176 | 0 | 0 | 0.682927 | 0 | 0 | 0.219853 | 0.118588 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0 | 0.146341 | 0 | 0.243902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c8ff115613567c40075575ad6885d07a54d60c6d | 138 | py | Python | lost_ds/segmentation/api.py | l3p-cv/lost_ds | 4a2f3ef027128b759d28e67cb1fdaa0a557e343c | [
"MIT"
] | 1 | 2022-03-30T11:29:57.000Z | 2022-03-30T11:29:57.000Z | lost_ds/segmentation/api.py | l3p-cv/lost_ds | 4a2f3ef027128b759d28e67cb1fdaa0a557e343c | [
"MIT"
] | null | null | null | lost_ds/segmentation/api.py | l3p-cv/lost_ds | 4a2f3ef027128b759d28e67cb1fdaa0a557e343c | [
"MIT"
] | null | null | null | from lost_ds.segmentation.semantic_seg import semantic_segmentation
from lost_ds.segmentation.anno_from_seg import segmentation_to_lost
| 27.6 | 67 | 0.898551 | 20 | 138 | 5.8 | 0.45 | 0.137931 | 0.172414 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072464 | 138 | 4 | 68 | 34.5 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cdd8d263a46f36d08e536a4906a0902e4038efdd | 29 | py | Python | syenv/__init__.py | Arthuchaut/syenv | cd3166f736c0ef8d9fc4164c9c40f01eab6d2cb1 | [
"MIT"
] | null | null | null | syenv/__init__.py | Arthuchaut/syenv | cd3166f736c0ef8d9fc4164c9c40f01eab6d2cb1 | [
"MIT"
] | null | null | null | syenv/__init__.py | Arthuchaut/syenv | cd3166f736c0ef8d9fc4164c9c40f01eab6d2cb1 | [
"MIT"
] | null | null | null | from syenv.syenv import Syenv | 29 | 29 | 0.862069 | 5 | 29 | 5 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cddc17d89f828570fd001f51d6de165259f0d291 | 207 | py | Python | foundation/jobs/views.py | Mindelirium/foundation | 2d07e430915d696ca7376afea633692119c4d30e | [
"MIT"
] | null | null | null | foundation/jobs/views.py | Mindelirium/foundation | 2d07e430915d696ca7376afea633692119c4d30e | [
"MIT"
] | null | null | null | foundation/jobs/views.py | Mindelirium/foundation | 2d07e430915d696ca7376afea633692119c4d30e | [
"MIT"
] | null | null | null | from django.views.generic.base import TemplateView
class JobListView(TemplateView):
template_name = "jobs/job_list.html"
class JobHelperView(TemplateView):
template_name = "jobs/job_helper.html"
| 20.7 | 50 | 0.782609 | 25 | 207 | 6.32 | 0.68 | 0.253165 | 0.303797 | 0.35443 | 0.392405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125604 | 207 | 9 | 51 | 23 | 0.872928 | 0 | 0 | 0 | 0 | 0 | 0.183575 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
a8263fe06da44a83c0c488b11031383dd234dbd3 | 820 | py | Python | NU-CS5001/lab03/cap_vowels.py | zahraaliaghazadeh/python | 2f2d0141a916c99e8724f803bd4e5c7246a7a02e | [
"MIT"
] | null | null | null | NU-CS5001/lab03/cap_vowels.py | zahraaliaghazadeh/python | 2f2d0141a916c99e8724f803bd4e5c7246a7a02e | [
"MIT"
] | null | null | null | NU-CS5001/lab03/cap_vowels.py | zahraaliaghazadeh/python | 2f2d0141a916c99e8724f803bd4e5c7246a7a02e | [
"MIT"
] | null | null | null | def cap_vowels(sentence):
answer = ""
for letter in sentence:
# check for vowels and make them uppercase
if letter in "aeiouAEIOU":
answer = answer + letter.upper()
# check for consonants and make them uppercase
else:
answer = answer + letter.lower()
return answer
print(cap_vowels(input("Enter a sentence: ")))
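A non-interactive check of the same logic (the function is repeated here so the snippet runs on its own; the sample sentence is arbitrary):

```python
# Vowels are upper-cased, every other character is lower-cased.
def cap_vowels(sentence):
    answer = ""
    for letter in sentence:
        if letter in "aeiouAEIOU":
            answer = answer + letter.upper()
        else:
            answer = answer + letter.lower()
    return answer

print(cap_vowels("Hello World"))  # hEllO wOrld
```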
| 26.451613 | 56 | 0.585366 | 92 | 820 | 5.173913 | 0.26087 | 0.07563 | 0.092437 | 0.168067 | 0.768908 | 0.768908 | 0.768908 | 0.768908 | 0.768908 | 0.768908 | 0 | 0 | 0.318293 | 820 | 30 | 57 | 27.333333 | 0.851521 | 0.59878 | 0 | 0 | 0 | 0 | 0.090032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.222222 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b5261e25c86ae245e8a50fac712a7397ab8973a5 | 22,485 | py | Python | .ipynb_checkpoints/func-checkpoint.py | rokosbasilisk/random-network-distillation-pytorch | 4bed5379b05d2b2851237334527ec1075c50c0e3 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/func-checkpoint.py | rokosbasilisk/random-network-distillation-pytorch | 4bed5379b05d2b2851237334527ec1075c50c0e3 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/func-checkpoint.py | rokosbasilisk/random-network-distillation-pytorch | 4bed5379b05d2b2851237334527ec1075c50c0e3 | [
"MIT"
] | null | null | null | {
"cells": [
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"import nle\n",
"import gym \n",
"import random\n",
"import numpy as np\n",
"from PIL import Image, ImageDraw\n",
"import cv2"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"env = gym.make(\"NetHackChallenge-v0\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"obs = env.reset()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'glyphs': array([[2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" ...,\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359]], dtype=int16),\n",
" 'chars': array([[32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" ...,\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32]], dtype=uint8),\n",
" 'colors': array([[0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" ...,\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0]], dtype=uint8),\n",
" 'specials': array([[0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" ...,\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0]], dtype=uint8),\n",
" 'blstats': array([28, 9, 17, 17, 14, 18, 9, 8, 9, 0, 16, 16, 1, 0, 2, 2, 8,\n",
" 0, 1, 0, 1, 1, 0, 0, 1, 0]),\n",
" 'message': array([ 72, 101, 108, 108, 111, 32, 65, 103, 101, 110, 116, 44, 32,\n",
" 119, 101, 108, 99, 111, 109, 101, 32, 116, 111, 32, 78, 101,\n",
" 116, 72, 97, 99, 107, 33, 32, 32, 89, 111, 117, 32, 97,\n",
" 114, 101, 32, 97, 32, 110, 101, 117, 116, 114, 97, 108, 32,\n",
" 104, 117, 109, 97, 110, 32, 67, 97, 118, 101, 109, 97, 110,\n",
" 46, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=uint8),\n",
" 'inv_glyphs': array([1965, 1975, 2351, 2352, 2019, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976],\n",
" dtype=int16),\n",
" 'inv_strs': array([[97, 32, 43, ..., 0, 0, 0],\n",
" [97, 32, 43, ..., 0, 0, 0],\n",
" [49, 52, 32, ..., 0, 0, 0],\n",
" ...,\n",
" [ 0, 0, 0, ..., 0, 0, 0],\n",
" [ 0, 0, 0, ..., 0, 0, 0],\n",
" [ 0, 0, 0, ..., 0, 0, 0]], dtype=uint8),\n",
" 'inv_letters': array([ 97, 98, 99, 100, 101, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0], dtype=uint8),\n",
" 'inv_oclasses': array([ 2, 2, 13, 13, 3, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,\n",
" 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,\n",
" 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,\n",
" 18, 18, 18, 18], dtype=uint8),\n",
" 'tty_chars': array([[ 72, 101, 108, ..., 32, 32, 32],\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" ...,\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" [ 65, 103, 101, ..., 32, 32, 32],\n",
" [ 68, 108, 118, ..., 32, 32, 32]], dtype=uint8),\n",
" 'tty_colors': array([[7, 7, 7, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" ...,\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [7, 7, 7, ..., 0, 0, 0],\n",
" [7, 7, 7, ..., 0, 0, 0]], dtype=int8),\n",
" 'tty_cursor': array([10, 28], dtype=uint8),\n",
" 'misc': array([0, 0, 0], dtype=int32)}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"obs"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 67, 97, 118, ..., 32, 32, 32],\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" ...,\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" [ 65, 103, 101, ..., 32, 32, 32],\n",
" [ 68, 108, 118, ..., 32, 32, 32]], dtype=uint8)"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"obs['tty_chars']"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"random_action = random.randint(0,113)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"q = env.step(random_action)[0]"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'glyphs': array([[2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" ...,\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359],\n",
" [2359, 2359, 2359, ..., 2359, 2359, 2359]], dtype=int16),\n",
" 'chars': array([[32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" ...,\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32],\n",
" [32, 32, 32, ..., 32, 32, 32]], dtype=uint8),\n",
" 'colors': array([[0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" ...,\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0]], dtype=uint8),\n",
" 'specials': array([[0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" ...,\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0]], dtype=uint8),\n",
" 'blstats': array([28, 9, 17, 17, 14, 18, 9, 8, 9, 0, 16, 16, 1, 0, 2, 2, 8,\n",
" 0, 1, 0, 1, 1, 0, 0, 1, 0]),\n",
" 'message': array([ 67, 97, 118, 101, 109, 101, 110, 32, 97, 114, 101, 110, 39,\n",
" 116, 32, 97, 98, 108, 101, 32, 116, 111, 32, 117, 115, 101,\n",
" 32, 116, 119, 111, 32, 119, 101, 97, 112, 111, 110, 115, 32,\n",
" 97, 116, 32, 111, 110, 99, 101, 46, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=uint8),\n",
" 'inv_glyphs': array([1965, 1975, 2351, 2352, 2019, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976,\n",
" 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976, 5976],\n",
" dtype=int16),\n",
" 'inv_strs': array([[97, 32, 43, ..., 0, 0, 0],\n",
" [97, 32, 43, ..., 0, 0, 0],\n",
" [49, 52, 32, ..., 0, 0, 0],\n",
" ...,\n",
" [ 0, 0, 0, ..., 0, 0, 0],\n",
" [ 0, 0, 0, ..., 0, 0, 0],\n",
" [ 0, 0, 0, ..., 0, 0, 0]], dtype=uint8),\n",
" 'inv_letters': array([ 97, 98, 99, 100, 101, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
" 0, 0, 0], dtype=uint8),\n",
" 'inv_oclasses': array([ 2, 2, 13, 13, 3, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,\n",
" 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,\n",
" 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18,\n",
" 18, 18, 18, 18], dtype=uint8),\n",
" 'tty_chars': array([[ 67, 97, 118, ..., 32, 32, 32],\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" ...,\n",
" [ 32, 32, 32, ..., 32, 32, 32],\n",
" [ 65, 103, 101, ..., 32, 32, 32],\n",
" [ 68, 108, 118, ..., 32, 32, 32]], dtype=uint8),\n",
" 'tty_colors': array([[7, 7, 7, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" ...,\n",
" [0, 0, 0, ..., 0, 0, 0],\n",
" [7, 7, 7, ..., 0, 0, 0],\n",
" [7, 7, 7, ..., 0, 0, 0]], dtype=int8),\n",
" 'tty_cursor': array([10, 28], dtype=uint8),\n",
" 'misc': array([0, 0, 0], dtype=int32)}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"q"
]
},
{
"cell_type": "code",
"execution_count": 192,
"metadata": {},
"outputs": [],
"source": [
"img = nle.nethack.tty_render(q['tty_chars'], q['tty_colors'], q['tty_cursor'])"
]
},
{
"cell_type": "code",
"execution_count": 193,
"metadata": {},
"outputs": [],
"source": [
"def process_frame(frame):\n",
" blstats = frame['blstats']\n",
" msg = frame['message']\n",
" message = ''\n",
" for c in msg:\n",
" message = message+chr(c)\n",
" img = nle.nethack.tty_render(frame['tty_chars'],frame['tty_colors'],frame['tty_cursor'])\n",
" ansi_escape = re.compile(r'\\x1B(?:[@-Z\\\\-_]|\\[[0-?]*[ -/]*[@-~])')\n",
" img = ansi_escape.sub('', img).split('\\n')[2:-2]\n",
" frame = ''\n",
" for l in img:\n",
" frame = frame+l+'\\n'\n",
" img = Image.new(mode='RGB',size=(790,370))\n",
" text = ImageDraw.Draw(img)\n",
" text.text((0, 0),frame, fill=(255,255,255))\n",
" return np.array(img),message.split('\\x00')[0],blstats"
]
},
{
"cell_type": "code",
"execution_count": 194,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<matplotlib.image.AxesImage at 0x7febd2119510>"
]
},
"execution_count": 194,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAagAAADBCAYAAACaPiTmAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAATLElEQVR4nO3dfaxd1Xnn8e+P13QgChBXlmt7BtI4qag0NciiRIkiGpQW0Ki0UiYyGiWoouOqJWpQI3UgI01aaSK1ozaZRlOl4w5MSJVCPHmZWIgZQgijKFXLW0LAxiU4CRG2DC6QAJ1qAr73mT/OuuTk5r75nnPu2Wfn+5GW7t5r77P3c61jP14ve+1UFZIkdc0p0w5AkqSlmKAkSZ1kgpIkdZIJSpLUSSYoSVInmaAkSZ00kQSV5Iokjyc5nOTGSdxDktRvGfdzUElOBb4JvBM4AjwAXFNVj431RpKkXptEC+oS4HBVfbuqXgZuB66ewH0kST122gSuuRV4amj/CPCLK30gictZSNKIqioL27/yS2fVc8/PjXS9hx75wV1VdcVSx5JsBz4JbAYK2FtVf5bkD4B/C/xDO/WDVXVn+8xNwHXAHPC7VXXXSvefRIJakyR7gD3Tur8k9dlzz89x/13/fKRrnLrliU0rHD4BfKCqvpbktcBDSe5uxz5aVX8yfHKSC4HdwM8DPwN8KcmbqmrZLDqJBHUU2D60v63V/Yiq2gvsBVtQkjRuRfFKnZjc9auOAcfa9ktJDjHoQVvO1cDtVfUD4DtJDjMYEvrb5T4wiTGoB4AdSS5IcgaDjLl/AveRJC2jgBPMjVTWKsn5wEXAfa3qfUkeSXJLknNb3VLDPysltPEnqKo6AbwPuAs4BOyrqoPjvo8kaXlFMVejFWBTkgeHyo8NyyQ5G/gscENVvQh8HPhZYCeDFtafrvd3mMgYVBsQu3MS15Ykra6AV5gf9TLPVtWu5Q4mOZ1BcvpUVX0OoKqeGTr+l8AdbXdNwz/DXElCknpqnhqprCRJgJuBQ1X1kaH6LUOn/TpwoG3vB3YnOTPJBcAO4P6V7jG1WXySpMkp4JXJvpD2rcB7gEeTPNzqPghck2RnC+FJ4LcAqupgkn3AYwxmAF6/0gw+MEFJUi9VFS9PMEFV1VeBLHFo2eGdqvow8OG13sMEJUk9VDD6CNSUmaAkqYeK8Eot1cCZHSYoSeqhAl6e8XlwJihJ6ql5W1CSpK6ZJ7zMqdMOYyQmKI1NVTF4NEJSF9iCkiR1ThFeLltQ0o8YbkktbC9XB6x4fC11kn7cYKkjE5QE8GqyGE4aJ1u33s9I+lFVYa6cxSdJ6hhbUJKkjrIFJZ00x46kyRssFmsLSjppJilpsgZLHc32P/GzHb1m0vAMPkmTUcCcSx1JJ8fWkzR5gxaUXXzSSTE5SZNXZYKSJHVQgbP4JEndYxefNMSxJalbnCQhSeocW1DSEtazyKuLxUrj5YO60pBRFnl1sVhpvKrC/IxPkpjt6CVJS1poQY1SVpJke5J7kzyW5GCS97f685LcneSJ9vPcVp8kH0tyOMkjSS5e7XcwQUlSLw0Wix2lrOIE8IGquhC4FLg+yYXAjcA9VbUDuKftA1wJ7GhlD/Dx1W5gF58k9dCkx6Cq6hhwrG2/lOQQsBW4GrisnXYr8H+Af9fqP1mDdc7+Lsk5Sba06yzJBKWxWPx23OUsdXytdYuPL3VvSQNFOLFBkySSnA9cBNwHbB5KOk8Dm9v2VuCpoY8daXUmKE3WyUxeWG2SxFqvc7LnST9JqmCuRv67sSnJg0P7e6tq7/AJSc4GPgvcUFUvLvrPYyVZ98rQJihJ6qEinJgfuQX1bFXtWu5gktMZJKdPVdXnWvUzC113SbYAx1v9UWD70Me3tbplOUlCknpoMAZ1ykhlJRk0lW4GDlXVR4YO7QeubdvXAl8Yqn9vm813KfDCSuNPYAtKknpq4s9BvR
V4D/Bokodb3QeBPwL2JbkO+C7w7nbsTuAq4DDwT8BvrHYDE5Qk9VAVq7aCRrt+fRVYbpDr8iXOL+D6k7nHqtEnuSXJ8SQHhurG9iCWJGky5uuUkcq0rSWCTwBXLKob24NYkqTxG0wzP2WkMm2rdvFV1VfaHPdhY3sQS5I0fgXjmMU3Vesdgxr5Qawkexi0siRJ41ZhfvTnoKZq5EkS630Qqz3stRdglAe5JEk/rqAT3XSjWG+CGtuDWJKk8SuY+RbUetPr2B7EkiSN32AliVNGKtO2agsqyW0MJkRsSnIE+BBjfBBLkjQB9RPQxVdV1yxzaCwPYkmSxq8PXXyuJCFJPbTQxTfLTFCS1FNreCtup5mgJKmHquzikyR1Upizi0+S1EVlC0qS1DVVMDdvgpIkdUzhJAlJUie5WKwkqaPm7eKTJHXNYAzKLj5JUgfVjL/IyAQlST1UhHlbUJKkLprxBtS63wclSeqygprPSGU1SW5JcjzJgaG6P0hyNMnDrVw1dOymJIeTPJ7kV1a7vi0oSeqpDZjF9wngvwCfXFT/0ar6k+GKJBcCu4GfB34G+FKSN1XV3HIXtwUlST1UDJY6GqWseo+qrwDPrzGkq4Hbq+oHVfUdBi+2vWSlD5igJKmPNqCLbwXvS/JI6wI8t9VtBZ4aOudIq1uWCUqS+qpGLLApyYNDZc8a7vpx4GeBncAx4E/XG75jUJLUSyO3ggCerapdJ/OBqnrm1QiSvwTuaLtHge1Dp25rdcuyBSVJfTSlLr4kW4Z2fx1YmOG3H9id5MwkFwA7gPtXupYtKEnqrcnO4ktyG3AZg67AI8CHgMuS7GTQSfgk8FsAVXUwyT7gMeAEcP1KM/jABCVJ/TU/2ctX1TVLVN+8wvkfBj681uuboCSpj1oX3ywzQUlSX834WkcmKEnqqdiCkiR1zg+fZZpZJihJ6qWALShJUidNeBbfpJmgJKmPCljDgq9dZoKSpJ7KjLegXOpIktRJtqAkqaecZi5J6p7CSRKSpG7KjD8HteoYVJLtSe5N8liSg0ne3+rPS3J3kifaz3NbfZJ8LMnh9kbFiyf9S0iSljA/YpmytUySOAF8oKouBC4Frk9yIXAjcE9V7QDuafsAVzJ4z8cOYA+DtytKkjZQajAGNUqZtlUTVFUdq6qvte2XgEMM3iN/NXBrO+1W4Nfa9tXAJ2vg74BzFr3ASpK0EUZ/5ftUndQYVJLzgYuA+4DNVXWsHXoa2Ny2twJPDX3sSKs7NlRHe7f9Wt5vL0lah1l/DmrNCSrJ2cBngRuq6sXkh82/qqrk5IbjqmovsLdduwO5WpJ6pGZ/ksSaElSS0xkkp09V1eda9TNJtlTVsdaFd7zVHwW2D318W6uTJG2kGW9BrWUWXxi8wvdQVX1k6NB+4Nq2fS3whaH697bZfJcCLwx1BUqSNkhqtDJta2lBvRV4D/Bokodb3QeBPwL2JbkO+C7w7nbsTuAq4DDwT8BvjDViSdLadCDJjGLVBFVVXwWWm294+RLnF3D9iHFJkkZRP0GTJCRJM2bGW1CuZi5JPRQGLahRyqr3SG5JcjzJgaG6sa0yZIKSpD6qySco4BPAFYvqxrbKkAlKkvpqwitJVNVXgOcXVY9tlSHHoCSpp6Y0SWKkVYaGmaAkqY/GM4tvU5IHh/b3tlWA1hbCOlYZGmaCkqS+Gn0W37NVteskPzO2VYYcg5KkntqASRJLGdsqQ7agJKmPNuCVGUluAy5j0BV4BPgQY1xlyAQlST0UJr+eXlVds8yhsawyZIKSpJ5yqSNJUjfN+FJHJihJ6iMXi5UkdZUJSpLUSV146eAoTFCS1EfFzL/y3QQlST20EdPMJ80EJUk9lfnZzlAmKEnqI2fxSZI6a7YbUCYoSeorW1CSpO4pJ0lIkjoo2IKSJHWUs/gkSd2zAe+DmjQTlCT1VOamHcFoTFCS1EdlF58kqaOcxSdJ6hxn8UmSuqlqUGaYCUqSesoWlCSpewoyZw
<base64-encoded PNG image data elided>\n",
"text/plain": [
"<Figure size 432x288 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"imshow(process_frame(q)[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| 57.802057 | 6,784 | 0.525017 | 2,328 | 22,485 | 5.03823 | 0.158505 | 0.113906 | 0.14656 | 0.164038 | 0.316992 | 0.279222 | 0.271123 | 0.264302 | 0.264302 | 0.264302 | 0 | 0.231859 | 0.290282 | 22,485 | 388 | 6,785 | 57.951031 | 0.503133 | 0 | 0 | 0.641753 | 0 | 0.175258 | 0.83171 | 0.320347 | 0 | 1 | 0.000623 | 0 | 0 | 1 | 0 | true | 0 | 0.018041 | 0 | 0.018041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b56e7fb01448cc2fbd7102b8f1c31b49d3a0f2d5 | 63 | py | Python | import/kiso.py | yo16/python_tips | 7f3f3e873e71ab199ec22a85b85359ed7fc619e7 | [
"MIT"
] | null | null | null | import/kiso.py | yo16/python_tips | 7f3f3e873e71ab199ec22a85b85359ed7fc619e7 | [
"MIT"
] | 3 | 2017-11-27T23:47:57.000Z | 2017-12-19T03:52:58.000Z | import/kiso.py | yo16/tips_python | 7f3f3e873e71ab199ec22a85b85359ed7fc619e7 | [
"MIT"
] | null | null | null | import kiso_impl
print(kiso_impl.getsomething('**'))
# **aaa
| 10.5 | 35 | 0.698413 | 8 | 63 | 5.25 | 0.75 | 0.380952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 63 | 5 | 36 | 12.6 | 0.75 | 0.079365 | 0 | 0 | 0 | 0 | 0.035714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
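The output comment (`# **aaa`) pins down the behaviour of the imported helper. A minimal `kiso_impl.py` consistent with it might look like this (a sketch inferred from that output, not the repo's actual implementation):

```python
# Hypothetical kiso_impl.py, reconstructed from the output comment "# **aaa":
# getsomething(prefix) apparently returns the prefix followed by "aaa".
def getsomething(prefix):
    return prefix + "aaa"

print(getsomething("**"))  # **aaa
```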
b5771b7ad9cf6f59c88386c8dd1fa5c404d4db22 | 111 | py | Python | testproject/testapp/views.py | stormpath/stormpath-django | af60eb5da2115d94ac313613c5d4e6b9f3d16157 | [
"Apache-2.0"
] | 36 | 2015-01-13T00:21:07.000Z | 2017-11-07T11:45:25.000Z | testproject/testapp/views.py | stormpath/stormpath-django | af60eb5da2115d94ac313613c5d4e6b9f3d16157 | [
"Apache-2.0"
] | 55 | 2015-01-07T09:53:50.000Z | 2017-02-07T00:31:20.000Z | testproject/testapp/views.py | stormpath/stormpath-django | af60eb5da2115d94ac313613c5d4e6b9f3d16157 | [
"Apache-2.0"
] | 24 | 2015-01-06T16:17:33.000Z | 2017-04-21T14:00:16.000Z | from django.shortcuts import render
def home(request):
return render(request, 'testapp/index.html', {})
| 15.857143 | 52 | 0.720721 | 14 | 111 | 5.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153153 | 111 | 6 | 53 | 18.5 | 0.851064 | 0 | 0 | 0 | 0 | 0 | 0.163636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
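For the `home` view above to be reachable, the test project needs a URL route. A minimal wiring might look like this (a sketch only; the project's actual `urls.py` is not shown here):

```python
# testapp/urls.py (sketch) -- maps the site root to the home view above.
from django.urls import path

from testapp.views import home

urlpatterns = [
    path('', home, name='home'),
]
```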
b59a9d56d2290ec65795b01a4233ca3d431a0583 | 2,955 | py | Python | asana/resources/gen/user_task_lists.py | FiyaFly/python-asana | ef9e6ff3e82e9f1ca18d526401f524698c7215c7 | [
"MIT"
] | 266 | 2015-02-13T18:14:08.000Z | 2022-03-29T22:03:33.000Z | asana/resources/gen/user_task_lists.py | FiyaFly/python-asana | ef9e6ff3e82e9f1ca18d526401f524698c7215c7 | [
"MIT"
] | 77 | 2015-02-13T00:22:11.000Z | 2022-02-20T07:56:14.000Z | asana/resources/gen/user_task_lists.py | FiyaFly/python-asana | ef9e6ff3e82e9f1ca18d526401f524698c7215c7 | [
"MIT"
] | 95 | 2015-03-18T23:28:57.000Z | 2022-02-20T23:28:58.000Z | # coding=utf-8
class _UserTaskLists:
def __init__(self, client=None):
self.client = client
def get_user_task_list(self, user_task_list_gid, params=None, **options):
"""Get a user task list
:param str user_task_list_gid: (required) Globally unique identifier for the user task list.
:param Object params: Parameters for the request
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/user_task_lists/{user_task_list_gid}".replace("{user_task_list_gid}", user_task_list_gid)
return self.client.get(path, params, **options)
def get_user_task_list_for_user(self, user_gid, params=None, **options):
"""Get a user's task list
:param str user_gid: (required) A string identifying a user. This can either be the string \"me\", an email, or the gid of a user.
:param Object params: Parameters for the request
- workspace {str}: (required) The workspace in which to get the user task list.
:param **options
- opt_fields {list[str]}: Defines fields to return. Some requests return *compact* representations of objects in order to conserve resources and complete the request more efficiently. Other times requests return more information than you may need. This option allows you to list the exact set of fields that the API should be sure to return for the objects. The field names should be provided as paths, described below. The id of included objects will always be returned, regardless of the field options.
- opt_pretty {bool}: Provides “pretty” output. Provides the response in a “pretty” format. In the case of JSON this means doing proper line breaking and indentation to make it readable. This will take extra time and increase the response size so it is advisable only to use this during debugging.
:return: Object
"""
if params is None:
params = {}
path = "/users/{user_gid}/user_task_list".replace("{user_gid}", user_gid)
return self.client.get(path, params, **options)
| 84.428571 | 517 | 0.714721 | 442 | 2,955 | 4.68552 | 0.278281 | 0.046354 | 0.063737 | 0.036214 | 0.819894 | 0.768711 | 0.768711 | 0.703042 | 0.665379 | 0.665379 | 0 | 0.000434 | 0.220643 | 2,955 | 34 | 518 | 86.911765 | 0.898828 | 0.728257 | 0 | 0.461538 | 0 | 0 | 0.148204 | 0.103293 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
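These wrappers do little beyond substituting the gid into a path template and delegating to the client. A stub client makes that visible (`StubClient` below is hypothetical, standing in for the library's real HTTP client; the method body is copied from the class above):

```python
# Stub standing in for the real asana client: it just echoes the request
# so the resolved path can be inspected.
class StubClient:
    def get(self, path, params, **options):
        return {"path": path, "params": params}

class _UserTaskLists:
    def __init__(self, client=None):
        self.client = client

    def get_user_task_list_for_user(self, user_gid, params=None, **options):
        if params is None:
            params = {}
        path = "/users/{user_gid}/user_task_list".replace("{user_gid}", user_gid)
        return self.client.get(path, params, **options)

req = _UserTaskLists(StubClient()).get_user_task_list_for_user("me")
print(req["path"])  # /users/me/user_task_list
```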
b59d4447a1c19c454d0a78f619a6fd06c22b5e24 | 3,124 | py | Python | example_old.py | hikarimusic2002/BIOSTATS | ffd108c60fcf06073253380cc1d8b9fc448e8812 | [
"MIT"
] | null | null | null | example_old.py | hikarimusic2002/BIOSTATS | ffd108c60fcf06073253380cc1d8b9fc448e8812 | [
"MIT"
] | null | null | null | example_old.py | hikarimusic2002/BIOSTATS | ffd108c60fcf06073253380cc1d8b9fc448e8812 | [
"MIT"
] | null | null | null | import biostats as bs
import pandas as pd
# ---------------------------------------------------------------
# One-Way ANOVA
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.one_way_anova(data, "bill_length_mm", "species")
result2 = bs.one_way_anova(data, "bill_length_mm", "species", 1)
#print(result)
# Two-Way ANOVA
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.two_way_anova(data, "bill_length_mm", ["species", "island"])
result2 = bs.two_way_anova(data, "bill_length_mm", ["species", "island"], 1)
#print(result)
# N-Way ANOVA
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.n_way_anova(data, "bill_length_mm", ["species", "island", "sex"])
result2 = bs.n_way_anova(data, "bill_length_mm", ["species", "island", "sex"], 1)
#print(result)
# ---------------------------------------------------------------
# One-Way ANCOVA
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.one_way_ancova(data, "body_mass_g", "species", "bill_length_mm")
result2 = bs.one_way_ancova(data, "body_mass_g", "species", "bill_length_mm", 1)
#print(result)
# Two-Way ANCOVA
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.two_way_ancova(data, "body_mass_g", "species", ["bill_length_mm", "bill_depth_mm"])
result2 = bs.two_way_ancova(data, "body_mass_g", "species", ["bill_length_mm", "bill_depth_mm"], 1)
print(result)
print(result2)
# N-Way ANCOVA (note: still calls two_way_ancova, here with three covariates)
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.two_way_ancova(data, "body_mass_g", "species", ["bill_length_mm", "bill_depth_mm", "flipper_length_mm"])
result2 = bs.two_way_ancova(data, "body_mass_g", "species", ["bill_length_mm", "bill_depth_mm", "flipper_length_mm"], 1)
#print(result)
# ---------------------------------------------------------------
# Chi-Square Independence
data = pd.read_csv("biostats/dataset/titanic.csv")
result = bs.chi_square_independence(data, "survived", "pclass")
result2 = bs.chi_square_independence(data, "survived", "pclass", 1)
#print(result)
# Chi-Square Fit
data = pd.read_csv("biostats/dataset/titanic.csv")
result = bs.chi_square_fit(data, "pclass", {1: 0.3, 2: 0.2, 3: 0.5})
result2 = bs.chi_square_fit(data, "pclass", {1: 0.3, 2: 0.2, 3: 0.5}, 1)
#print(result)
# ---------------------------------------------------------------
# Linear Regression
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.linear_regression(data, "body_mass_g", "bill_length_mm")
result2 = bs.linear_regression(data, "body_mass_g", "bill_length_mm", 1)
#print(result)
# Multiple Regression
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.multiple_regression(data, "body_mass_g", ["bill_length_mm", "flipper_length_mm"], ["species", "sex"])
result2 = bs.multiple_regression(data, "body_mass_g", ["bill_length_mm", "flipper_length_mm"], ["species", "sex"], 1)
#print(result)
# Logistic Regression
data = pd.read_csv("biostats/dataset/penguins.csv")
result = bs.logistic_regression(data, "species", "Adelie", ["bill_length_mm", "flipper_length_mm"], ["sex"])
#print(result)
# ---------------------------------------------------------------
| 39.544304 | 120 | 0.653969 | 438 | 3,124 | 4.390411 | 0.109589 | 0.091524 | 0.106084 | 0.074363 | 0.874155 | 0.835673 | 0.809152 | 0.772231 | 0.772231 | 0.73895 | 0 | 0.013636 | 0.084507 | 3,124 | 78 | 121 | 40.051282 | 0.658741 | 0.201344 | 0 | 0.305556 | 0 | 0 | 0.401945 | 0.128444 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.055556 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
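`one_way_anova` above tests whether group means differ. As a rough sketch of the statistic such a call reports, here is the textbook F computation on toy data (plain Python, not biostats internals):

```python
# One-way ANOVA F statistic from scratch: F = MS_between / MS_within.
groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
all_obs = [x for g in groups for x in g]
grand = sum(all_obs) / len(all_obs)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 6))  # 13.0
```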
b5a1c9c1bfb9f95d95f6141f7e61bce8d9ac3345 | 17 | py | Python | tests/CompileTests/Python_tests/test2011_004.py | maurizioabba/rose | 7597292cf14da292bdb9a4ef573001b6c5b9b6c0 | [
"BSD-3-Clause"
] | 488 | 2015-01-09T08:54:48.000Z | 2022-03-30T07:15:46.000Z | tests/CompileTests/Python_tests/test2011_004.py | sujankh/rose-matlab | 7435d4fa1941826c784ba97296c0ec55fa7d7c7e | [
"BSD-3-Clause"
] | 174 | 2015-01-28T18:41:32.000Z | 2022-03-31T16:51:05.000Z | tests/CompileTests/Python_tests/test2011_004.py | sujankh/rose-matlab | 7435d4fa1941826c784ba97296c0ec55fa7d7c7e | [
"BSD-3-Clause"
] | 146 | 2015-04-27T02:48:34.000Z | 2022-03-04T07:32:53.000Z | def foo():
123
| 5.666667 | 10 | 0.529412 | 3 | 17 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0.294118 | 17 | 2 | 11 | 8.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b5b30923481de02d5314575c692ac6e6bfb99e21 | 170 | py | Python | LinkedList/node.py | bolusarz/Data-Structures | 0a279628d774e8bfb807505aa9cbc47f465bb49e | [
"MIT"
] | null | null | null | LinkedList/node.py | bolusarz/Data-Structures | 0a279628d774e8bfb807505aa9cbc47f465bb49e | [
"MIT"
] | null | null | null | LinkedList/node.py | bolusarz/Data-Structures | 0a279628d774e8bfb807505aa9cbc47f465bb49e | [
"MIT"
] | null | null | null | class Node:
def __init__(self, data):
self.data = data
self.next = None
def __eq__(self, o: object) -> bool:
return self.data == o.data
| 18.888889 | 40 | 0.558824 | 23 | 170 | 3.782609 | 0.565217 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.323529 | 170 | 8 | 41 | 21.25 | 0.756522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
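A short usage sketch of the node class above: chaining via `next` and the value-based `__eq__` (the class is repeated here so the snippet is self-contained; variable names are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

    def __eq__(self, o: object) -> bool:
        return self.data == o.data

# Build a two-element chain and exercise the value-based equality.
head = Node(1)
head.next = Node(2)
print(head == Node(1), head.next.data)  # True 2
```

Note that `__eq__` compares only `data`, so two nodes with the same payload compare equal regardless of what they link to.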
b5b7f7b90dd0d9695ad466b21d6086f11af3b20d | 28 | py | Python | package-files/simple_icd_10/__init__.py | StefanoTrv/simple_icd_10 | 4995baacb8a5f5e78c067a5c17734ff1af283704 | [
"CC0-1.0"
] | 8 | 2020-12-07T14:41:00.000Z | 2022-02-05T09:15:44.000Z | package-files/simple_icd_10/__init__.py | StefanoTrv/simple-icd-10 | c1a0d15ab6a7a924bfaac3d889716380e5441370 | [
"CC0-1.0"
] | 2 | 2021-08-16T09:55:18.000Z | 2021-09-23T21:00:31.000Z | package-files/simple_icd_10/__init__.py | StefanoTrv/simple-icd-10 | c1a0d15ab6a7a924bfaac3d889716380e5441370 | [
"CC0-1.0"
] | null | null | null | from .simple_icd_10 import * | 28 | 28 | 0.821429 | 5 | 28 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 0.107143 | 28 | 1 | 28 | 28 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a90ab980cce33b951310eb1b66e4bdf472b6883a | 38,093 | py | Python | instances/passenger_demand/pas-20210421-2109-int16e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int16e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int16e/85.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 3705
passenger_arriving = (
(5, 11, 12, 3, 1, 0, 6, 6, 9, 8, 3, 0), # 0
(3, 8, 10, 2, 2, 0, 14, 8, 6, 1, 3, 0), # 1
(2, 10, 6, 1, 5, 0, 10, 8, 7, 3, 6, 0), # 2
(5, 6, 8, 7, 2, 0, 8, 9, 9, 4, 4, 0), # 3
(2, 12, 9, 4, 3, 0, 11, 9, 6, 6, 4, 0), # 4
(2, 8, 9, 5, 2, 0, 7, 11, 6, 2, 1, 0), # 5
(1, 12, 9, 3, 1, 0, 9, 6, 9, 6, 6, 0), # 6
(4, 9, 8, 4, 2, 0, 6, 13, 4, 9, 2, 0), # 7
(3, 12, 8, 4, 7, 0, 9, 11, 9, 1, 4, 0), # 8
(6, 13, 8, 5, 7, 0, 11, 8, 6, 3, 1, 0), # 9
(8, 7, 13, 5, 3, 0, 8, 2, 8, 9, 4, 0), # 10
(6, 13, 7, 3, 1, 0, 13, 7, 4, 4, 1, 0), # 11
(3, 10, 11, 3, 2, 0, 6, 5, 5, 3, 3, 0), # 12
(5, 11, 3, 4, 5, 0, 6, 8, 7, 2, 4, 0), # 13
(5, 5, 12, 1, 3, 0, 9, 6, 5, 2, 5, 0), # 14
(9, 12, 13, 0, 1, 0, 8, 13, 5, 10, 3, 0), # 15
(7, 10, 11, 2, 1, 0, 9, 13, 9, 4, 3, 0), # 16
(2, 12, 7, 4, 5, 0, 9, 6, 5, 3, 0, 0), # 17
(5, 8, 10, 4, 1, 0, 4, 10, 9, 3, 1, 0), # 18
(5, 11, 5, 5, 2, 0, 9, 10, 4, 3, 2, 0), # 19
(3, 12, 12, 8, 3, 0, 9, 15, 7, 3, 5, 0), # 20
(1, 9, 3, 3, 3, 0, 6, 7, 6, 2, 1, 0), # 21
(3, 17, 5, 6, 4, 0, 7, 6, 9, 7, 2, 0), # 22
(4, 13, 8, 3, 5, 0, 7, 11, 4, 7, 4, 0), # 23
(7, 11, 6, 6, 3, 0, 10, 14, 6, 6, 3, 0), # 24
(2, 15, 7, 6, 1, 0, 2, 10, 8, 3, 2, 0), # 25
(5, 7, 10, 5, 4, 0, 13, 8, 4, 4, 3, 0), # 26
(8, 12, 16, 6, 5, 0, 6, 12, 6, 7, 2, 0), # 27
(5, 9, 8, 5, 5, 0, 7, 7, 7, 12, 1, 0), # 28
(7, 11, 11, 1, 2, 0, 8, 6, 10, 7, 2, 0), # 29
(7, 14, 8, 8, 6, 0, 7, 6, 5, 3, 5, 0), # 30
(6, 8, 6, 6, 2, 0, 7, 15, 12, 3, 4, 0), # 31
(10, 9, 9, 11, 0, 0, 5, 10, 7, 5, 1, 0), # 32
(2, 8, 10, 4, 5, 0, 8, 7, 8, 7, 2, 0), # 33
(7, 8, 6, 1, 6, 0, 8, 9, 7, 7, 1, 0), # 34
(7, 15, 6, 6, 3, 0, 7, 11, 5, 3, 0, 0), # 35
(5, 10, 10, 2, 1, 0, 7, 15, 7, 4, 2, 0), # 36
(8, 7, 8, 2, 3, 0, 4, 10, 3, 4, 2, 0), # 37
(12, 10, 7, 5, 1, 0, 4, 17, 6, 6, 2, 0), # 38
(4, 14, 7, 5, 1, 0, 8, 5, 8, 4, 0, 0), # 39
(2, 11, 8, 5, 5, 0, 13, 8, 9, 3, 3, 0), # 40
(5, 10, 12, 2, 2, 0, 16, 10, 8, 5, 4, 0), # 41
(9, 6, 12, 11, 3, 0, 5, 12, 3, 5, 5, 0), # 42
(6, 9, 4, 3, 1, 0, 9, 5, 11, 2, 3, 0), # 43
(8, 11, 4, 5, 3, 0, 5, 8, 11, 3, 2, 0), # 44
(7, 9, 10, 6, 4, 0, 7, 11, 8, 2, 2, 0), # 45
(8, 7, 7, 5, 1, 0, 2, 7, 8, 6, 4, 0), # 46
(3, 12, 7, 4, 3, 0, 13, 10, 7, 8, 1, 0), # 47
(1, 19, 6, 3, 1, 0, 3, 12, 7, 5, 0, 0), # 48
(4, 15, 10, 2, 3, 0, 6, 14, 4, 5, 2, 0), # 49
(5, 7, 6, 5, 4, 0, 6, 9, 4, 4, 0, 0), # 50
(4, 8, 9, 5, 5, 0, 8, 11, 10, 6, 2, 0), # 51
(3, 13, 9, 9, 2, 0, 10, 8, 8, 3, 2, 0), # 52
(4, 9, 11, 4, 4, 0, 3, 11, 6, 3, 3, 0), # 53
(5, 14, 11, 5, 3, 0, 9, 11, 5, 6, 1, 0), # 54
(9, 9, 6, 2, 5, 0, 8, 8, 6, 6, 3, 0), # 55
(10, 15, 10, 2, 3, 0, 5, 18, 8, 6, 1, 0), # 56
(2, 11, 7, 5, 1, 0, 4, 7, 8, 11, 1, 0), # 57
(6, 11, 8, 4, 1, 0, 8, 10, 12, 6, 1, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(4.239442493415277, 10.874337121212122, 12.79077763496144, 10.138043478260869, 11.428846153846154, 7.610869565217392), # 0
(4.27923521607648, 10.995266557940518, 12.859864860039991, 10.194503019323673, 11.51450641025641, 7.608275422705315), # 1
(4.318573563554774, 11.114402244668911, 12.927312196515281, 10.249719806763286, 11.598358974358975, 7.60560193236715), # 2
(4.357424143985952, 11.231615625000002, 12.993070372750644, 10.303646739130434, 11.680326923076926, 7.60284945652174), # 3
(4.395753565505805, 11.346778142536477, 13.057090117109396, 10.356236714975847, 11.760333333333335, 7.600018357487922), # 4
(4.433528436250122, 11.459761240881035, 13.11932215795487, 10.407442632850241, 11.838301282051281, 7.597108997584541), # 5
(4.470715364354698, 11.570436363636365, 13.179717223650389, 10.457217391304349, 11.914153846153846, 7.594121739130435), # 6
(4.507280957955322, 11.678674954405162, 13.238226042559269, 10.50551388888889, 11.987814102564105, 7.591056944444445), # 7
(4.543191825187787, 11.784348456790122, 13.294799343044847, 10.552285024154589, 12.059205128205129, 7.587914975845411), # 8
(4.578414574187884, 11.88732831439394, 13.34938785347044, 10.597483695652175, 12.12825, 7.584696195652175), # 9
(4.612915813091406, 11.987485970819305, 13.401942302199371, 10.64106280193237, 12.194871794871796, 7.581400966183574), # 10
(4.646662150034143, 12.084692869668913, 13.452413417594972, 10.682975241545895, 12.25899358974359, 7.578029649758455), # 11
(4.679620193151888, 12.178820454545454, 13.500751928020566, 10.723173913043478, 12.320538461538462, 7.574582608695652), # 12
(4.71175655058043, 12.26974016905163, 13.546908561839473, 10.761611714975846, 12.37942948717949, 7.5710602053140095), # 13
(4.743037830455566, 12.357323456790127, 13.590834047415022, 10.798241545893719, 12.435589743589743, 7.567462801932367), # 14
(4.773430640913081, 12.441441761363635, 13.632479113110538, 10.833016304347826, 12.488942307692309, 7.563790760869566), # 15
(4.802901590088772, 12.521966526374861, 13.671794487289347, 10.86588888888889, 12.539410256410257, 7.560044444444445), # 16
(4.831417286118428, 12.598769195426486, 13.708730898314768, 10.896812198067634, 12.586916666666667, 7.556224214975846), # 17
(4.8589443371378405, 12.671721212121213, 13.74323907455013, 10.925739130434785, 12.631384615384619, 7.552330434782609), # 18
(4.8854493512828014, 12.740694020061728, 13.775269744358756, 10.952622584541063, 12.67273717948718, 7.5483634661835755), # 19
(4.910898936689104, 12.805559062850728, 13.804773636103969, 10.9774154589372, 12.710897435897436, 7.544323671497584), # 20
(4.935259701492538, 12.866187784090906, 13.831701478149103, 11.000070652173914, 12.74578846153846, 7.540211413043479), # 21
(4.958498253828894, 12.922451627384962, 13.856003998857469, 11.020541062801932, 12.777333333333331, 7.5360270531400975), # 22
(4.980581201833967, 12.97422203633558, 13.877631926592404, 11.038779589371982, 12.805455128205129, 7.531770954106282), # 23
(5.001475153643547, 13.021370454545455, 13.896535989717222, 11.054739130434783, 12.830076923076923, 7.52744347826087), # 24
(5.0211467173934246, 13.063768325617284, 13.91266691659526, 11.068372584541065, 12.851121794871794, 7.523044987922706), # 25
(5.039562501219393, 13.101287093153758, 13.925975435589832, 11.079632850241545, 12.86851282051282, 7.518575845410628), # 26
(5.056689113257243, 13.133798200757575, 13.936412275064265, 11.088472826086958, 12.88217307692308, 7.514036413043479), # 27
(5.072493161642767, 13.161173092031426, 13.943928163381893, 11.09484541062802, 12.89202564102564, 7.509427053140097), # 28
(5.086941254511755, 13.183283210578004, 13.948473828906026, 11.09870350241546, 12.89799358974359, 7.504748128019324), # 29
(5.1000000000000005, 13.200000000000001, 13.950000000000001, 11.100000000000001, 12.9, 7.5), # 30
(5.112219245524297, 13.213886079545453, 13.948855917874395, 11.099765849673204, 12.89926985815603, 7.4934020156588375), # 31
(5.124174680306906, 13.227588636363638, 13.945456038647343, 11.099067973856208, 12.897095035460993, 7.483239613526571), # 32
(5.135871675191815, 13.241105965909092, 13.93984891304348, 11.097913235294119, 12.893498936170213, 7.469612293853072), # 33
(5.147315601023018, 13.254436363636366, 13.93208309178744, 11.096308496732028, 12.888504964539008, 7.452619556888223), # 34
(5.158511828644501, 13.267578124999998, 13.922207125603865, 11.094260620915033, 12.882136524822696, 7.432360902881893), # 35
(5.169465728900256, 13.280529545454549, 13.91026956521739, 11.091776470588236, 12.874417021276598, 7.408935832083959), # 36
(5.180182672634271, 13.293288920454547, 13.896318961352657, 11.088862908496733, 12.865369858156027, 7.382443844744294), # 37
(5.190668030690537, 13.305854545454546, 13.8804038647343, 11.08552679738562, 12.855018439716313, 7.352984441112776), # 38
(5.200927173913044, 13.318224715909091, 13.862572826086955, 11.081775, 12.843386170212765, 7.32065712143928), # 39
(5.21096547314578, 13.330397727272729, 13.842874396135267, 11.077614379084968, 12.830496453900707, 7.285561385973679), # 40
(5.220788299232737, 13.342371874999998, 13.821357125603866, 11.073051797385622, 12.816372695035462, 7.247796734965852), # 41
(5.230401023017903, 13.354145454545458, 13.798069565217393, 11.068094117647059, 12.801038297872342, 7.207462668665667), # 42
(5.239809015345269, 13.365716761363636, 13.773060265700483, 11.06274820261438, 12.784516666666667, 7.164658687323005), # 43
(5.249017647058824, 13.377084090909092, 13.746377777777779, 11.05702091503268, 12.76683120567376, 7.119484291187739), # 44
(5.258032289002557, 13.388245738636364, 13.718070652173916, 11.050919117647059, 12.748005319148938, 7.072038980509745), # 45
(5.266858312020461, 13.399200000000002, 13.688187439613529, 11.044449673202614, 12.72806241134752, 7.022422255538898), # 46
(5.275501086956522, 13.409945170454547, 13.656776690821255, 11.037619444444445, 12.707025886524825, 6.970733616525071), # 47
(5.283965984654732, 13.420479545454548, 13.623886956521739, 11.030435294117646, 12.68491914893617, 6.9170725637181425), # 48
(5.292258375959079, 13.430801420454543, 13.589566787439615, 11.022904084967323, 12.66176560283688, 6.861538597367982), # 49
(5.300383631713555, 13.440909090909088, 13.553864734299518, 11.015032679738564, 12.63758865248227, 6.804231217724471), # 50
(5.308347122762149, 13.450800852272728, 13.516829347826087, 11.006827941176471, 12.612411702127659, 6.7452499250374816), # 51
(5.316154219948849, 13.460475, 13.47850917874396, 10.998296732026144, 12.58625815602837, 6.684694219556889), # 52
(5.3238102941176475, 13.469929829545457, 13.438952777777779, 10.98944591503268, 12.559151418439718, 6.622663601532567), # 53
(5.331320716112533, 13.479163636363635, 13.398208695652173, 10.980282352941177, 12.531114893617023, 6.559257571214393), # 54
(5.338690856777493, 13.488174715909091, 13.356325483091787, 10.970812908496733, 12.502171985815604, 6.494575628852241), # 55
(5.3459260869565215, 13.496961363636363, 13.313351690821257, 10.961044444444445, 12.472346099290782, 6.428717274695986), # 56
(5.353031777493607, 13.505521875000003, 13.269335869565218, 10.950983823529413, 12.441660638297872, 6.361782008995502), # 57
(5.360013299232737, 13.513854545454544, 13.224326570048309, 10.940637908496733, 12.410139007092198, 6.293869332000667), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
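Row-for-row, the `passenger_arriving_rate` table further below appears to spread each station intensity over destinations with weights (1.0, 0.8, 0.6, 0.4, 0.2, 0.0): forward from `intensity[k]` and mirrored from `intensity[5-k]`. A check on row 0, with both tuples copied verbatim from the tables (this is an observed pattern in the numbers, not documented structure):

```python
import math

intensity0 = (4.239442493415277, 10.874337121212122, 12.79077763496144,
              10.138043478260869, 11.428846153846154, 7.610869565217392)
rate0 = (4.239442493415277, 8.699469696969697, 7.674466580976864,
         4.055217391304347, 2.2857692307692306, 0.0,
         7.610869565217392, 9.143076923076922, 6.082826086956521,
         5.1163110539845755, 2.174867424242424, 0.0)

weights = (1.0, 0.8, 0.6, 0.4, 0.2, 0.0)
# Forward half uses intensity[k]; mirrored half uses intensity[5-k].
expected = tuple(intensity0[k] * weights[k] for k in range(6)) + tuple(
    intensity0[5 - k] * weights[k] for k in range(6))
assert all(math.isclose(a, b) for a, b in zip(expected, rate0))
```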
passenger_arriving_acc = (
(5, 11, 12, 3, 1, 0, 6, 6, 9, 8, 3, 0), # 0
(8, 19, 22, 5, 3, 0, 20, 14, 15, 9, 6, 0), # 1
(10, 29, 28, 6, 8, 0, 30, 22, 22, 12, 12, 0), # 2
(15, 35, 36, 13, 10, 0, 38, 31, 31, 16, 16, 0), # 3
(17, 47, 45, 17, 13, 0, 49, 40, 37, 22, 20, 0), # 4
(19, 55, 54, 22, 15, 0, 56, 51, 43, 24, 21, 0), # 5
(20, 67, 63, 25, 16, 0, 65, 57, 52, 30, 27, 0), # 6
(24, 76, 71, 29, 18, 0, 71, 70, 56, 39, 29, 0), # 7
(27, 88, 79, 33, 25, 0, 80, 81, 65, 40, 33, 0), # 8
(33, 101, 87, 38, 32, 0, 91, 89, 71, 43, 34, 0), # 9
(41, 108, 100, 43, 35, 0, 99, 91, 79, 52, 38, 0), # 10
(47, 121, 107, 46, 36, 0, 112, 98, 83, 56, 39, 0), # 11
(50, 131, 118, 49, 38, 0, 118, 103, 88, 59, 42, 0), # 12
(55, 142, 121, 53, 43, 0, 124, 111, 95, 61, 46, 0), # 13
(60, 147, 133, 54, 46, 0, 133, 117, 100, 63, 51, 0), # 14
(69, 159, 146, 54, 47, 0, 141, 130, 105, 73, 54, 0), # 15
(76, 169, 157, 56, 48, 0, 150, 143, 114, 77, 57, 0), # 16
(78, 181, 164, 60, 53, 0, 159, 149, 119, 80, 57, 0), # 17
(83, 189, 174, 64, 54, 0, 163, 159, 128, 83, 58, 0), # 18
(88, 200, 179, 69, 56, 0, 172, 169, 132, 86, 60, 0), # 19
(91, 212, 191, 77, 59, 0, 181, 184, 139, 89, 65, 0), # 20
(92, 221, 194, 80, 62, 0, 187, 191, 145, 91, 66, 0), # 21
(95, 238, 199, 86, 66, 0, 194, 197, 154, 98, 68, 0), # 22
(99, 251, 207, 89, 71, 0, 201, 208, 158, 105, 72, 0), # 23
(106, 262, 213, 95, 74, 0, 211, 222, 164, 111, 75, 0), # 24
(108, 277, 220, 101, 75, 0, 213, 232, 172, 114, 77, 0), # 25
(113, 284, 230, 106, 79, 0, 226, 240, 176, 118, 80, 0), # 26
(121, 296, 246, 112, 84, 0, 232, 252, 182, 125, 82, 0), # 27
(126, 305, 254, 117, 89, 0, 239, 259, 189, 137, 83, 0), # 28
(133, 316, 265, 118, 91, 0, 247, 265, 199, 144, 85, 0), # 29
(140, 330, 273, 126, 97, 0, 254, 271, 204, 147, 90, 0), # 30
(146, 338, 279, 132, 99, 0, 261, 286, 216, 150, 94, 0), # 31
(156, 347, 288, 143, 99, 0, 266, 296, 223, 155, 95, 0), # 32
(158, 355, 298, 147, 104, 0, 274, 303, 231, 162, 97, 0), # 33
(165, 363, 304, 148, 110, 0, 282, 312, 238, 169, 98, 0), # 34
(172, 378, 310, 154, 113, 0, 289, 323, 243, 172, 98, 0), # 35
(177, 388, 320, 156, 114, 0, 296, 338, 250, 176, 100, 0), # 36
(185, 395, 328, 158, 117, 0, 300, 348, 253, 180, 102, 0), # 37
(197, 405, 335, 163, 118, 0, 304, 365, 259, 186, 104, 0), # 38
(201, 419, 342, 168, 119, 0, 312, 370, 267, 190, 104, 0), # 39
(203, 430, 350, 173, 124, 0, 325, 378, 276, 193, 107, 0), # 40
(208, 440, 362, 175, 126, 0, 341, 388, 284, 198, 111, 0), # 41
(217, 446, 374, 186, 129, 0, 346, 400, 287, 203, 116, 0), # 42
(223, 455, 378, 189, 130, 0, 355, 405, 298, 205, 119, 0), # 43
(231, 466, 382, 194, 133, 0, 360, 413, 309, 208, 121, 0), # 44
(238, 475, 392, 200, 137, 0, 367, 424, 317, 210, 123, 0), # 45
(246, 482, 399, 205, 138, 0, 369, 431, 325, 216, 127, 0), # 46
(249, 494, 406, 209, 141, 0, 382, 441, 332, 224, 128, 0), # 47
(250, 513, 412, 212, 142, 0, 385, 453, 339, 229, 128, 0), # 48
(254, 528, 422, 214, 145, 0, 391, 467, 343, 234, 130, 0), # 49
(259, 535, 428, 219, 149, 0, 397, 476, 347, 238, 130, 0), # 50
(263, 543, 437, 224, 154, 0, 405, 487, 357, 244, 132, 0), # 51
(266, 556, 446, 233, 156, 0, 415, 495, 365, 247, 134, 0), # 52
(270, 565, 457, 237, 160, 0, 418, 506, 371, 250, 137, 0), # 53
(275, 579, 468, 242, 163, 0, 427, 517, 376, 256, 138, 0), # 54
(284, 588, 474, 244, 168, 0, 435, 525, 382, 262, 141, 0), # 55
(294, 603, 484, 246, 171, 0, 440, 543, 390, 268, 142, 0), # 56
(296, 614, 491, 251, 172, 0, 444, 550, 398, 279, 143, 0), # 57
(302, 625, 499, 255, 173, 0, 452, 560, 410, 285, 144, 0), # 58
(302, 625, 499, 255, 173, 0, 452, 560, 410, 285, 144, 0), # 59
)
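Two consistency checks that hold for these tables: `passenger_arriving_acc` is the element-wise running sum of `passenger_arriving`, and `numPassengers` (3705) equals the total of the last accumulated row. Verified here on rows copied from above:

```python
arriving = [
    (5, 11, 12, 3, 1, 0, 6, 6, 9, 8, 3, 0),   # t = 0
    (3, 8, 10, 2, 2, 0, 14, 8, 6, 1, 3, 0),   # t = 1
    (2, 10, 6, 1, 5, 0, 10, 8, 7, 3, 6, 0),   # t = 2
]
acc = [
    (5, 11, 12, 3, 1, 0, 6, 6, 9, 8, 3, 0),
    (8, 19, 22, 5, 3, 0, 20, 14, 15, 9, 6, 0),
    (10, 29, 28, 6, 8, 0, 30, 22, 22, 12, 12, 0),
]

# Accumulate per destination column and compare against the acc table.
running = [0] * 12
for row, expected_row in zip(arriving, acc):
    running = [r + x for r, x in zip(running, row)]
    assert tuple(running) == expected_row

# Last accumulated row (t = 58/59 above) sums to numPassengers = 3705.
assert sum((302, 625, 499, 255, 173, 0, 452, 560, 410, 285, 144, 0)) == 3705
```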
passenger_arriving_rate = (
(4.239442493415277, 8.699469696969697, 7.674466580976864, 4.055217391304347, 2.2857692307692306, 0.0, 7.610869565217392, 9.143076923076922, 6.082826086956521, 5.1163110539845755, 2.174867424242424, 0.0), # 0
(4.27923521607648, 8.796213246352414, 7.715918916023995, 4.077801207729468, 2.3029012820512818, 0.0, 7.608275422705315, 9.211605128205127, 6.116701811594203, 5.1439459440159965, 2.1990533115881035, 0.0), # 1
(4.318573563554774, 8.891521795735128, 7.7563873179091685, 4.099887922705314, 2.3196717948717946, 0.0, 7.60560193236715, 9.278687179487179, 6.1498318840579715, 5.170924878606112, 2.222880448933782, 0.0), # 2
(4.357424143985952, 8.9852925, 7.795842223650386, 4.121458695652173, 2.336065384615385, 0.0, 7.60284945652174, 9.34426153846154, 6.18218804347826, 5.197228149100257, 2.246323125, 0.0), # 3
(4.395753565505805, 9.07742251402918, 7.834254070265637, 4.142494685990338, 2.352066666666667, 0.0, 7.600018357487922, 9.408266666666668, 6.213742028985508, 5.222836046843758, 2.269355628507295, 0.0), # 4
(4.433528436250122, 9.167808992704828, 7.8715932947729215, 4.1629770531400965, 2.367660256410256, 0.0, 7.597108997584541, 9.470641025641024, 6.244465579710145, 5.247728863181948, 2.291952248176207, 0.0), # 5
(4.470715364354698, 9.25634909090909, 7.907830334190233, 4.182886956521739, 2.382830769230769, 0.0, 7.594121739130435, 9.531323076923076, 6.274330434782609, 5.271886889460156, 2.3140872727272725, 0.0), # 6
(4.507280957955322, 9.34293996352413, 7.942935625535561, 4.2022055555555555, 2.397562820512821, 0.0, 7.591056944444445, 9.590251282051284, 6.303308333333334, 5.295290417023708, 2.3357349908810323, 0.0), # 7
(4.543191825187787, 9.427478765432097, 7.976879605826908, 4.220914009661835, 2.4118410256410256, 0.0, 7.587914975845411, 9.647364102564103, 6.3313710144927535, 5.317919737217938, 2.3568696913580243, 0.0), # 8
(4.578414574187884, 9.509862651515151, 8.009632712082263, 4.23899347826087, 2.4256499999999996, 0.0, 7.584696195652175, 9.702599999999999, 6.358490217391305, 5.339755141388175, 2.377465662878788, 0.0), # 9
(4.612915813091406, 9.589988776655444, 8.041165381319622, 4.256425120772947, 2.438974358974359, 0.0, 7.581400966183574, 9.755897435897436, 6.384637681159421, 5.360776920879748, 2.397497194163861, 0.0), # 10
(4.646662150034143, 9.66775429573513, 8.071448050556983, 4.273190096618357, 2.4517987179487175, 0.0, 7.578029649758455, 9.80719487179487, 6.409785144927537, 5.380965367037988, 2.4169385739337823, 0.0), # 11
(4.679620193151888, 9.743056363636363, 8.100451156812339, 4.289269565217391, 2.4641076923076923, 0.0, 7.574582608695652, 9.85643076923077, 6.433904347826087, 5.400300771208226, 2.4357640909090907, 0.0), # 12
(4.71175655058043, 9.815792135241303, 8.128145137103683, 4.304644685990338, 2.475885897435898, 0.0, 7.5710602053140095, 9.903543589743592, 6.456967028985507, 5.418763424735789, 2.4539480338103257, 0.0), # 13
(4.743037830455566, 9.8858587654321, 8.154500428449014, 4.3192966183574875, 2.4871179487179482, 0.0, 7.567462801932367, 9.948471794871793, 6.478944927536231, 5.4363336189660085, 2.471464691358025, 0.0), # 14
(4.773430640913081, 9.953153409090907, 8.179487467866322, 4.33320652173913, 2.4977884615384616, 0.0, 7.563790760869566, 9.991153846153846, 6.499809782608695, 5.452991645244214, 2.488288352272727, 0.0), # 15
(4.802901590088772, 10.017573221099887, 8.203076692373608, 4.346355555555555, 2.507882051282051, 0.0, 7.560044444444445, 10.031528205128204, 6.519533333333333, 5.468717794915738, 2.504393305274972, 0.0), # 16
(4.831417286118428, 10.079015356341188, 8.22523853898886, 4.358724879227053, 2.517383333333333, 0.0, 7.556224214975846, 10.069533333333332, 6.538087318840581, 5.483492359325907, 2.519753839085297, 0.0), # 17
(4.8589443371378405, 10.13737696969697, 8.245943444730077, 4.370295652173914, 2.5262769230769235, 0.0, 7.552330434782609, 10.105107692307694, 6.55544347826087, 5.4972956298200515, 2.5343442424242424, 0.0), # 18
(4.8854493512828014, 10.192555216049382, 8.265161846615253, 4.381049033816424, 2.534547435897436, 0.0, 7.5483634661835755, 10.138189743589743, 6.571573550724637, 5.510107897743501, 2.5481388040123454, 0.0), # 19
(4.910898936689104, 10.244447250280581, 8.282864181662381, 4.3909661835748794, 2.542179487179487, 0.0, 7.544323671497584, 10.168717948717948, 6.58644927536232, 5.5219094544415865, 2.5611118125701453, 0.0), # 20
(4.935259701492538, 10.292950227272724, 8.299020886889462, 4.400028260869565, 2.5491576923076917, 0.0, 7.540211413043479, 10.196630769230767, 6.600042391304348, 5.53268059125964, 2.573237556818181, 0.0), # 21
(4.958498253828894, 10.337961301907969, 8.313602399314481, 4.408216425120773, 2.555466666666666, 0.0, 7.5360270531400975, 10.221866666666664, 6.6123246376811595, 5.542401599542987, 2.584490325476992, 0.0), # 22
(4.980581201833967, 10.379377629068463, 8.326579155955441, 4.415511835748792, 2.5610910256410255, 0.0, 7.531770954106282, 10.244364102564102, 6.623267753623189, 5.551052770636961, 2.5948444072671157, 0.0), # 23
(5.001475153643547, 10.417096363636363, 8.337921593830332, 4.421895652173912, 2.5660153846153846, 0.0, 7.52744347826087, 10.264061538461538, 6.632843478260869, 5.558614395886888, 2.6042740909090907, 0.0), # 24
(5.0211467173934246, 10.451014660493826, 8.347600149957156, 4.427349033816426, 2.5702243589743587, 0.0, 7.523044987922706, 10.280897435897435, 6.641023550724639, 5.565066766638103, 2.6127536651234564, 0.0), # 25
(5.039562501219393, 10.481029674523006, 8.355585261353898, 4.431853140096617, 2.5737025641025637, 0.0, 7.518575845410628, 10.294810256410255, 6.647779710144927, 5.570390174235932, 2.6202574186307515, 0.0), # 26
(5.056689113257243, 10.507038560606059, 8.361847365038559, 4.435389130434783, 2.5764346153846156, 0.0, 7.514036413043479, 10.305738461538462, 6.653083695652175, 5.574564910025706, 2.6267596401515148, 0.0), # 27
(5.072493161642767, 10.52893847362514, 8.366356898029135, 4.437938164251207, 2.578405128205128, 0.0, 7.509427053140097, 10.313620512820512, 6.656907246376812, 5.5775712653527565, 2.632234618406285, 0.0), # 28
(5.086941254511755, 10.546626568462402, 8.369084297343615, 4.439481400966184, 2.579598717948718, 0.0, 7.504748128019324, 10.318394871794872, 6.659222101449276, 5.57938953156241, 2.6366566421156006, 0.0), # 29
(5.1000000000000005, 10.56, 8.370000000000001, 4.44, 2.58, 0.0, 7.5, 10.32, 6.660000000000001, 5.58, 2.64, 0.0), # 30
(5.112219245524297, 10.571108863636361, 8.369313550724637, 4.439906339869282, 2.5798539716312057, 0.0, 7.4934020156588375, 10.319415886524823, 6.659859509803923, 5.579542367149758, 2.6427772159090903, 0.0), # 31
(5.124174680306906, 10.582070909090909, 8.367273623188405, 4.439627189542483, 2.5794190070921985, 0.0, 7.483239613526571, 10.317676028368794, 6.659440784313724, 5.578182415458937, 2.6455177272727273, 0.0), # 32
(5.135871675191815, 10.592884772727274, 8.363909347826088, 4.439165294117647, 2.5786997872340423, 0.0, 7.469612293853072, 10.314799148936169, 6.658747941176471, 5.575939565217392, 2.6482211931818185, 0.0), # 33
(5.147315601023018, 10.603549090909091, 8.359249855072465, 4.438523398692811, 2.5777009929078014, 0.0, 7.452619556888223, 10.310803971631206, 6.657785098039217, 5.572833236714976, 2.6508872727272728, 0.0), # 34
(5.158511828644501, 10.614062499999998, 8.353324275362318, 4.437704248366013, 2.576427304964539, 0.0, 7.432360902881893, 10.305709219858157, 6.65655637254902, 5.568882850241546, 2.6535156249999994, 0.0), # 35
(5.169465728900256, 10.624423636363638, 8.346161739130434, 4.436710588235294, 2.5748834042553193, 0.0, 7.408935832083959, 10.299533617021277, 6.655065882352941, 5.564107826086956, 2.6561059090909094, 0.0), # 36
(5.180182672634271, 10.634631136363637, 8.337791376811595, 4.435545163398693, 2.573073971631205, 0.0, 7.382443844744294, 10.29229588652482, 6.65331774509804, 5.558527584541062, 2.6586577840909094, 0.0), # 37
(5.190668030690537, 10.644683636363636, 8.32824231884058, 4.4342107189542475, 2.5710036879432625, 0.0, 7.352984441112776, 10.28401475177305, 6.651316078431372, 5.5521615458937195, 2.661170909090909, 0.0), # 38
(5.200927173913044, 10.654579772727272, 8.317543695652173, 4.43271, 2.568677234042553, 0.0, 7.32065712143928, 10.274708936170212, 6.649065, 5.545029130434782, 2.663644943181818, 0.0), # 39
(5.21096547314578, 10.664318181818182, 8.305724637681159, 4.431045751633987, 2.566099290780141, 0.0, 7.285561385973679, 10.264397163120565, 6.646568627450981, 5.537149758454106, 2.6660795454545454, 0.0), # 40
(5.220788299232737, 10.673897499999997, 8.29281427536232, 4.429220718954248, 2.563274539007092, 0.0, 7.247796734965852, 10.253098156028368, 6.643831078431373, 5.5285428502415455, 2.6684743749999993, 0.0), # 41
(5.230401023017903, 10.683316363636365, 8.278841739130435, 4.427237647058823, 2.560207659574468, 0.0, 7.207462668665667, 10.240830638297872, 6.640856470588235, 5.519227826086957, 2.6708290909090913, 0.0), # 42
(5.239809015345269, 10.692573409090908, 8.26383615942029, 4.4250992810457515, 2.556903333333333, 0.0, 7.164658687323005, 10.227613333333332, 6.637648921568627, 5.509224106280192, 2.673143352272727, 0.0), # 43
(5.249017647058824, 10.701667272727272, 8.247826666666667, 4.422808366013072, 2.5533662411347517, 0.0, 7.119484291187739, 10.213464964539007, 6.634212549019608, 5.498551111111111, 2.675416818181818, 0.0), # 44
(5.258032289002557, 10.71059659090909, 8.23084239130435, 4.420367647058823, 2.5496010638297872, 0.0, 7.072038980509745, 10.198404255319149, 6.630551470588235, 5.487228260869566, 2.6776491477272724, 0.0), # 45
(5.266858312020461, 10.71936, 8.212912463768117, 4.417779869281045, 2.5456124822695037, 0.0, 7.022422255538898, 10.182449929078015, 6.626669803921568, 5.475274975845411, 2.67984, 0.0), # 46
(5.275501086956522, 10.727956136363636, 8.194066014492753, 4.415047777777778, 2.5414051773049646, 0.0, 6.970733616525071, 10.165620709219858, 6.6225716666666665, 5.462710676328501, 2.681989034090909, 0.0), # 47
(5.283965984654732, 10.736383636363637, 8.174332173913044, 4.412174117647059, 2.536983829787234, 0.0, 6.9170725637181425, 10.147935319148935, 6.618261176470588, 5.449554782608695, 2.6840959090909093, 0.0), # 48
(5.292258375959079, 10.744641136363633, 8.15374007246377, 4.409161633986929, 2.5323531205673757, 0.0, 6.861538597367982, 10.129412482269503, 6.613742450980394, 5.435826714975845, 2.6861602840909082, 0.0), # 49
(5.300383631713555, 10.752727272727268, 8.13231884057971, 4.406013071895425, 2.527517730496454, 0.0, 6.804231217724471, 10.110070921985816, 6.6090196078431385, 5.421545893719807, 2.688181818181817, 0.0), # 50
(5.308347122762149, 10.760640681818181, 8.110097608695652, 4.4027311764705885, 2.5224823404255314, 0.0, 6.7452499250374816, 10.089929361702126, 6.604096764705883, 5.406731739130435, 2.6901601704545453, 0.0), # 51
(5.316154219948849, 10.768379999999999, 8.087105507246376, 4.399318692810457, 2.517251631205674, 0.0, 6.684694219556889, 10.069006524822695, 6.5989780392156865, 5.391403671497584, 2.6920949999999997, 0.0), # 52
(5.3238102941176475, 10.775943863636364, 8.063371666666667, 4.395778366013072, 2.5118302836879436, 0.0, 6.622663601532567, 10.047321134751774, 6.593667549019608, 5.375581111111111, 2.693985965909091, 0.0), # 53
(5.331320716112533, 10.783330909090907, 8.038925217391304, 4.392112941176471, 2.5062229787234043, 0.0, 6.559257571214393, 10.024891914893617, 6.5881694117647065, 5.359283478260869, 2.6958327272727267, 0.0), # 54
(5.338690856777493, 10.790539772727271, 8.013795289855072, 4.388325163398693, 2.5004343971631204, 0.0, 6.494575628852241, 10.001737588652482, 6.58248774509804, 5.342530193236715, 2.697634943181818, 0.0), # 55
(5.3459260869565215, 10.79756909090909, 7.988011014492754, 4.384417777777777, 2.494469219858156, 0.0, 6.428717274695986, 9.977876879432625, 6.576626666666667, 5.325340676328502, 2.6993922727272723, 0.0), # 56
(5.353031777493607, 10.804417500000001, 7.96160152173913, 4.380393529411765, 2.4883321276595742, 0.0, 6.361782008995502, 9.953328510638297, 6.570590294117648, 5.307734347826087, 2.7011043750000003, 0.0), # 57
(5.360013299232737, 10.811083636363634, 7.934595942028984, 4.376255163398692, 2.4820278014184396, 0.0, 6.293869332000667, 9.928111205673758, 6.564382745098039, 5.289730628019323, 2.7027709090909084, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
84, # 1
)
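A minimal sketch of how the stored `entropy` and `child_seed_index` values above can be turned back into reproducible per-stream generators, following the linked NumPy parallel-random documentation. The variable names here mirror the data above; the exact reconstruction used by the original project is an assumption.

```python
# Sketch (assumed reconstruction): rebuild deterministic child RNG streams
# from the recorded root entropy and the child indices, per the NumPy
# SeedSequence docs linked above.
import numpy as np

entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 84)

root = np.random.SeedSequence(entropy)
# spawn enough children so that every recorded index exists
children = root.spawn(max(child_seed_index) + 1)
rngs = [np.random.default_rng(children[i]) for i in child_seed_index]
```

Because `SeedSequence` spawning is deterministic, rebuilding the same root from the same entropy always yields identical child streams.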
# File: mlvajra/model/__init__.py (rajagurunath/mlvajra, Apache-2.0)
"""
model building using torch,tensorflow,spark,sklearn
"""
try:
from mlvajra.model.DSL import LudwigModel
except ImportError as e:
print(e)
from mlvajra.model.vajron import *
# File: django/DCRUMSplunkApplication/__init__.py (Dynatrace/DCRUM-Splunk-Application, BSD-3-Clause)
# Copyright 2014
# File: mulpro/__init__.py (mhantke/mulpro, BSD-2-Clause)
from mulpro import mulpro, logger
# File: anvil/mpl_util/__init__.py (benlawraus/pyDALAnvilWorks, MIT)
from .anvilMpl_util import *
# File: src/CommandHandler.py (ninjacoder88/lifx-lan-py, MIT)
import PacketBuilder
import CommandHandlerHelp
def handle_help(command):
CommandHandlerHelp.handle_help(command)
return ""
def handle_device_get_service(options):
if "-help" in options:
handle_help("device-get-service")
return ""
else:
return PacketBuilder.build_device_get_service_packet()
def handle_device_get_host_info(options):
if "-help" in options:
        handle_help("device-get-host-info")
return ""
if "-a" in options:
return PacketBuilder.build_device_get_host_info_packet(0)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_get_host_info_packet(target)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_get_host_firmware(options):
if "-help" in options:
handle_help("device-get-host-firmware")
return ""
if "-a" in options:
return PacketBuilder.build_device_get_host_firmware_packet(0)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_get_host_firmware_packet(target)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_get_wifi_info(options):
if "-help" in options:
handle_help("device-get-wifi-info")
return ""
if "-a" in options:
return PacketBuilder.build_device_get_wifi_info_packet(0)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_get_wifi_info_packet(target)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_get_wifi_firmware(options):
if "-help" in options:
handle_help("device-get-wifi-firmware")
return ""
if "-a" in options:
return PacketBuilder.build_device_get_wifi_firmware_packet(0)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_get_wifi_firmware_packet(target)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_get_power(options):
if "-help" in options:
handle_help("device-get-power")
return ""
if "-a" in options:
return PacketBuilder.build_device_get_power_packet(0)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_get_power_packet(target)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_set_power(options):
if "-help" in options:
handle_help("device-set-power")
return ""
level = -1
if "-l" in options:
level = int(options["-l"]) * 65535
else:
print("invalid options. -l is required. use -help to see usage")
return ""
if "-a" in options:
return PacketBuilder.build_device_set_power_packet(0, level)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_set_power_packet(target, level)
else:
print("invalid options. use -help to see usage")
return ""
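The handlers above all consume an `options` dict keyed by flag strings such as `"-t"` and `"-l"`. The real parser lives elsewhere in this project; the sketch below is only an assumed illustration of how a token list like `["device-set-power", "-t", "12345", "-l", "1"]` could be turned into that dict shape.

```python
# Hypothetical helper (not part of the original file): build the
# {flag: value} dict these handlers expect from command-line tokens.
def parse_options(tokens):
    options = {}
    i = 0
    while i < len(tokens):
        flag = tokens[i]
        # value-taking flags (-t, -l, -h, ...) are followed by a non-flag
        # token; bare flags (-a, -help) are stored as True
        if i + 1 < len(tokens) and not tokens[i + 1].startswith("-"):
            options[flag] = tokens[i + 1]
            i += 2
        else:
            options[flag] = True
            i += 1
    return options
```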
def handle_device_get_label(options):
if "-help" in options:
handle_help("device-get-label")
return ""
if "-a" in options:
return PacketBuilder.build_device_get_label_packet(0)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_get_label_packet(target)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_set_label(options):  # need to validate
if "-help" in options:
handle_help("device-set-label")
return ""
label = "N/A"
if "-l" in options:
label = options["-l"]
else:
print("invalid options. -l is required. use -help to see usage")
return ""
if "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_set_label_packet(target, label)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_get_version(options):
if "-help" in options:
handle_help("device-get-version")
return ""
if "-a" in options:
return PacketBuilder.build_device_get_version_packet(0)
elif "-t" in options:
target = int(options["-t"])
return PacketBuilder.build_device_get_version_packet(target)
else:
print("invalid options. use -help to see usage")
return ""
def handle_device_get_info(options):
    if "-help" in options:
        handle_help("device-get-info")
        return ""
    if "-a" in options:
        return PacketBuilder.build_device_get_info_packet(0)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_device_get_info_packet(target)
    else:
        print("invalid options. use -help to see usage")
        return ""


def handle_device_get_location(options):
    if "-help" in options:
        handle_help("device-get-location")
        return ""
    if "-a" in options:
        return PacketBuilder.build_device_get_location_packet(0)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_device_get_location_packet(target)
    else:
        print("invalid options. use -help to see usage")
        return ""


def handle_device_set_location(options):
    print("not yet implemented")
    return ""
    #if "-help" in options:
    #    handle_help("device-set-location")
    #    return ""
    #if "-a" in options:
    #    return PacketBuilder.build_device_set_location_packet(0)
    #elif "-t" in options:
    #    target = int(options["-t"])
    #    return PacketBuilder.build_device_set_location_packet(target)
    #else:
    #    print("invalid options. use -help to see usage")
    #    return ""
    #create guid
    #created updated_at
    #PacketBuilder.build_device_set_location_packet(target, location_value, label_value, updated_at_value)


def handle_device_get_group(options):
    if "-help" in options:
        handle_help("device-get-group")
        return ""
    if "-a" in options:
        return PacketBuilder.build_device_get_group_packet(0)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_device_get_group_packet(target)
    else:
        print("invalid options. use -help to see usage")
        return ""


def handle_device_set_group(options):
    print("not yet implemented")
    return ""
    #if "-help" in options:
    #    print(available_options)
    #    return ""
    #label = "N/A"
    #if "-l" in options:
    #    label = options["-l"]
    #else:
    #    print("invalid options. -l is required. use -help to see usage")
    #    return ""
    #if "-a" in options:
    #    return PacketBuilder.build_device_set_group_packet(0, )
    #elif "-t" in options:
    #    target = int(options["-t"])
    #    return PacketBuilder.build_device_set_group_packet(target)
    #else:
    #    print("invalid options. use -help to see usage")
    #    return ""
    #create source
    #PacketBuilder.build_device_set_group_packet(target, group_value, label_value, update_at_value)


def handle_device_echo_request(options):
    if "-help" in options:
        handle_help("device-echo-request")
        return ""
    payload = "101010101010101"
    if "-a" in options:
        return PacketBuilder.build_device_echo_request_packet(0, payload)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_device_echo_request_packet(target, payload)
    else:
        print("invalid options. use -help to see usage")
        return ""


def handle_light_get_state(options):
    if "-help" in options:
        handle_help("light-get-state")
        return ""
    if "-a" in options:
        return PacketBuilder.build_light_get_state_packet(0)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_light_get_state_packet(target)
    else:
        print("invalid options. use -help to see usage")
        return ""
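Every handler in this module repeats the same `-a`/`-t` target resolution before building a packet. It could be factored into one helper; a minimal sketch (the `resolve_target` name is hypothetical, not part of the original module):

```python
def resolve_target(options):
    """Resolve the packet target for the shared -a/-t option pattern.

    -a broadcasts (target 0), -t addresses a single device by id;
    None signals that neither option was supplied (invalid usage).
    """
    if "-a" in options:
        return 0
    if "-t" in options:
        return int(options["-t"])
    return None


print(resolve_target({"-a": True}))   # 0
print(resolve_target({"-t": "42"}))   # 42
print(resolve_target({}))             # None
```

Each handler would then collapse to one `resolve_target` call plus a single `build_*_packet` call, with the `None` case printing the usage hint.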
def handle_light_set_color(options):
    if "-help" in options:
        handle_help("light-set-color")
        return ""
    hue = -1
    saturation = -1
    brightness = -1
    kelvin = -1
    if "-h" in options:
        hue_value = int(options["-h"])
        hue = int(float(hue_value) / 360 * 65535)
    else:
        print("invalid options. -h is required. use -help to see usage")
        return ""
    if "-s" in options:
        saturation_value = int(options["-s"])
        saturation = int(float(saturation_value) / 100 * 65535)
    else:
        print("invalid options. -s is required. use -help to see usage")
        return ""
    if "-b" in options:
        brightness_value = int(options["-b"])
        brightness = int(float(brightness_value) / 100 * 65535)
    else:
        print("invalid options. -b is required. use -help to see usage")
        return ""
    if "-k" in options:
        kelvin = int(options["-k"])
    else:
        print("invalid options. -k is required. use -help to see usage")
        return ""
    if "-a" in options:
        return PacketBuilder.build_light_set_color_packet(0, hue, saturation, brightness, kelvin, 0)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_light_set_color_packet(target, hue, saturation, brightness, kelvin, 0)
    else:
        print("invalid options. use -help to see usage")
        return ""
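The conversions in `handle_light_set_color` map user-facing units onto the 16-bit HSBK fields the packet expects: hue in degrees over 360, saturation and brightness in percent over 100. Restated standalone so the arithmetic is easy to check (the helper name is mine, not part of the module):

```python
def to_u16(value, scale):
    """Scale a user-facing unit onto 0..65535, mirroring the
    int(float(v) / scale * 65535) expressions in the handler."""
    return int(float(value) / scale * 65535)


print(to_u16(360, 360))  # 65535 (hue: full circle)
print(to_u16(50, 100))   # 32767 (50%: note int() truncates, it does not round)
```

Kelvin is the one field passed through unscaled, which matches the handler above.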
def handle_light_set_waveform(options):
    print("not yet implemented")
    return ""
    #if "-help" in options:
    #    handle_help("light-set-waveform")
    #    return ""
    #else:
    #    print("invalid options. use -help to see usage")
    #    return ""
    #PacketBuilder.build_light_set_waveform_packet(target, transient_value, hue_value, sat_value, brightness_value, kelvin_value, period_value, cycles_value, skew_ratio_value, waveform_value)


def handle_light_get_power(options):
    if "-help" in options:
        handle_help("light-get-power")
        return ""
    if "-a" in options:
        return PacketBuilder.build_light_get_power_packet(0)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_light_get_power_packet(target)
    else:
        print("invalid options. use -help to see usage")
        return ""


def handle_light_set_power(options):
    if "-help" in options:
        handle_help("light-set-power")
        return ""
    level = -1
    if "-l" in options:
        level = int(options["-l"]) * 65535
    else:
        print("invalid options. -l is required. use -help to see usage")
        return ""
    if "-a" in options:
        return PacketBuilder.build_light_set_power_packet(0, level, 0)
    elif "-t" in options:
        target = int(options["-t"])
        return PacketBuilder.build_light_set_power_packet(target, level, 0)
    else:
        print("invalid options. use -help to see usage")
        return ""


def handle_light_set_waveform_optional(options):
    print("not yet implemented")
    return ""
    #if "-help" in options:
    #    handle_help("light-set-waveform-optional")
    #    return ""
    #else:
    #    print("invalid options. use -help to see usage")
    #    return ""
    #PacketBuilder.build_light_set_waveform_optional_packet(target, transient_value, hue_value, sat_value, brightness_value, kelvin_value, period_value, cycles_value, skew_ratio_value, waveform_value, set_hue_value, set_sat_value, set_brightness_value, set_kelvin_value)
    #PacketBuilder.build_light_set_power_packet(source, target, level_value, duration_value)


def handle_light_get_infrared(options):
    print("not yet implemented")
    return ""
    #if "-help" in options:
    #    handle_help("light-get-infrared")
    #    return ""
    #else:
    #    print("invalid options. use -help to see usage")
    #    return ""
    #PacketBuilder.build_light_get_infrared_packet(target)


def handle_light_set_infrared(options):
    print("not yet implemented")
    return ""
    #if "-help" in options:
    #    handle_help("light-set-infrared")
    #    return ""
    #else:
    #    print("invalid options. use -help to see usage")
    #    return ""
    #PacketBuilder.build_light_set_infrared_packet(target, brightness_value)
# pycq/iterable/iterablehelper.py (janusko/pycq, MIT)

class IterableHelper:
    def __init__(self, iterator):
        self.__iterator = iterator

    def __iter__(self):
        return self.__iterator
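A quick usage sketch (class restated so the snippet runs standalone). Note the design consequence of returning the stored iterator directly from `__iter__`: an `IterableHelper` instance is single-pass, so a second iteration yields nothing.

```python
class IterableHelper:
    def __init__(self, iterator):
        self.__iterator = iterator

    def __iter__(self):
        return self.__iterator


helper = IterableHelper(iter([1, 2, 3]))
print(list(helper))  # [1, 2, 3]
print(list(helper))  # [] because the wrapped iterator is already exhausted
```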
# src/recipy/exports/__init__.py (emilywilder/recipy, Apache-2.0)

from . import yaml
# test/artifact/test_repository.py (hugemane/prerequisite-dependency-presider, Unlicense)

from unittest import TestCase
class TestFile(TestCase):

    def test_exists(self):
        from pdp.artifact.repository import ArtifactHost
        artifact_host = ArtifactHost('artifact', 'artifact.lxd')
        script_content = artifact_host.get_artifact_file_content('~/repo/script/jvm-run-script.sh')
        self.assertTrue(len(script_content) > 0)

    def test_exists_with_private_key(self):
        from pdp.artifact.repository import ArtifactHost
        artifact_host = ArtifactHost('artifact', 'artifact.lxd', '/home/hugemane/.ssh/test_nopass_rsa')
        script_content = artifact_host.get_artifact_file_content('~/repo/script/jvm-run-script.sh')
        self.assertTrue(len(script_content) > 0)
# truffe2/generic/startup.py (JonathanCollaud/truffe2, BSD-2-Clause)
from generic.models import GenericModel, GenericStateModel
def startup():
    """Create urls, models and cie at startup"""
    GenericModel.startup()
# del_func.py (kundan134/btp-trpo, MIT)
from math import exp, log10
def f0(max_kl, epoch):
    return max_kl


def f1(max_kl, epoch):
    return max_kl + (epoch // 10) * 0.005


def f2(max_kl, epoch):
    return max_kl + 0.01 * (epoch // 10)


def f3(max_kl, epoch):
    return max_kl + log10(epoch) * 0.1


def f4(max_kl, epoch):
    return 0.5


def f5(max_kl, epoch):
    return 0.5 - (epoch // 10) * 0.01


def f6(max_kl, epoch):
    return 0.5 / exp(epoch // 10)


def f7(max_kl, epoch):
    x = epoch // 30
    return 0.01 + 0.8 / (x + 1)


def f8(max_kl, epoch):
    x = epoch // 30
    return 0.01 + 0.8 / (x * x + 1)
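These functions are alternative epoch schedules for the KL bound `max_kl` used in TRPO: f1 through f3 grow it (stepwise or logarithmically), while f5 through f8 shrink from a fixed starting value. A quick numeric check of one schedule from each family, restated inline so the snippet runs standalone:

```python
from math import exp


def grow_step(max_kl, epoch):
    # Mirrors f1 above: raise the bound by 0.005 every 10 epochs.
    return max_kl + (epoch // 10) * 0.005


def shrink_exp(max_kl, epoch):
    # Mirrors f6 above: exponential decay of a fixed 0.5 starting bound
    # (max_kl is accepted but unused, matching the original signature).
    return 0.5 / exp(epoch // 10)


print(grow_step(0.01, 25))   # 0.01 + 2 * 0.005 = 0.02
print(shrink_exp(0.01, 20))  # 0.5 / e**2, roughly 0.0677
```

Note that f3 calls `log10(epoch)` and therefore fails at epoch 0, so it can only be used from epoch 1 onward.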
# AI_takeoff/customGym/custom_gym/envs/myxpc/actions/throttle.py (Skillerde6de/Minor-AI-2019_2020, MIT)

from custom_gym.envs.myxpc import xpc2 as xpc
def throttle_up_full():
    print('throttle_up_full')
    with xpc.XPlaneConnect() as client:
        # Verify connection
        try:
            # If X-Plane does not respond to the request, a timeout error
            # will be raised.
            client.getDREF("sim/test/test_float")
        except:
            print("Error establishing connection to X-Plane.")
            print("Exiting...")
            return
        # -998 tells X-Plane Connect to leave that control channel unchanged;
        # index 3 is the throttle.
        throttle_uf = [-998, -998, -998, 1.0, -998, -998, -998]
        client.sendCTRL(throttle_uf)


def throttle_up_half():
    print('throttle_up_half')
    with xpc.XPlaneConnect() as client:
        # Verify connection
        try:
            # If X-Plane does not respond to the request, a timeout error
            # will be raised.
            client.getDREF("sim/test/test_float")
        except:
            print("Error establishing connection to X-Plane.")
            print("Exiting...")
            return
        throttle_uh = [-998, -998, -998, 0.5, -998, -998, -998]
        client.sendCTRL(throttle_uh)


def throttle_up_low():
    print('throttle_up_low')
    with xpc.XPlaneConnect() as client:
        # Verify connection
        try:
            # If X-Plane does not respond to the request, a timeout error
            # will be raised.
            client.getDREF("sim/test/test_float")
        except:
            print("Error establishing connection to X-Plane.")
            print("Exiting...")
            return
        throttle_uh = [-998, -998, -998, 0.2, -998, -998, -998]
        client.sendCTRL(throttle_uh)


def throttle_neutral():
    print('throttle_neutral')
    with xpc.XPlaneConnect() as client:
        # Verify connection
        try:
            # If X-Plane does not respond to the request, a timeout error
            # will be raised.
            client.getDREF("sim/test/test_float")
        except:
            print("Error establishing connection to X-Plane.")
            print("Exiting...")
            return
        throttle_n = [-998, -998, -998, 0, -998, -998, -998]
        client.sendCTRL(throttle_n)


def throttle_down_half():
    print('throttle_down_half')
    with xpc.XPlaneConnect() as client:
        # Verify connection
        try:
            # If X-Plane does not respond to the request, a timeout error
            # will be raised.
            client.getDREF("sim/test/test_float")
        except:
            print("Error establishing connection to X-Plane.")
            print("Exiting...")
            return
        throttle_dh = [-998, -998, -998, -0.5, -998, -998, -998]
        client.sendCTRL(throttle_dh)


def throttle_down_full():
    print('throttle_down_full')
    with xpc.XPlaneConnect() as client:
        # Verify connection
        try:
            # If X-Plane does not respond to the request, a timeout error
            # will be raised.
            client.getDREF("sim/test/test_float")
        except:
            print("Error establishing connection to X-Plane.")
            print("Exiting...")
            return
        throttle_df = [-998, -998, -998, 1.0, -998, -998, -998]
        client.sendCTRL(throttle_df)
5740e2e30331715c5a6618ce4683fb22638eb09a | 9,363 | py | Python | Circuit/CircuitTest.py | fangzhouwang/-CADisCMOSExplorer | 49e5737fd7b3e06879daa09bb93747c2d2829740 | [
"Apache-2.0"
] | null | null | null | Circuit/CircuitTest.py | fangzhouwang/-CADisCMOSExplorer | 49e5737fd7b3e06879daa09bb93747c2d2829740 | [
"Apache-2.0"
] | 3 | 2018-05-04T18:12:03.000Z | 2018-05-04T19:13:29.000Z | Circuit/CircuitTest.py | fangzhouwang/CADisCMOSExplorer | 49e5737fd7b3e06879daa09bb93747c2d2829740 | [
"Apache-2.0"
] | null | null | null | import unittest
import os

from Circuit.Netlist import *
from Circuit.CSim import *


class CSimTestCase(unittest.TestCase):

    def test_csim(self):
        str_netlist = "M0001 GND IN001 OUT01 GND NMOS\n" \
                      "M0002 VDD IN001 OUT01 VDD PMOS\n"
        bsf, bsf_weak = csim(str_netlist)
        self.assertEqual(bsf, '10')
        self.assertEqual(bsf_weak, '10')


class NetlistTestCase(unittest.TestCase):

    def test_create_netlist(self):
        netlist = Netlist()
        str_netlist = "M0001 N0002 IN001 N0001 GND NMOS\n" \
                      "M0002 VDD N0001 N0002 GND NMOS\n" \
                      "M0003 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0004 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        self.assertEqual(str_netlist, str(netlist))

    def test_equ_netlist_swap_diff(self):
        netlist = Netlist()
        str_netlist = "M0001 VDD IN001 IN002 GND NMOS\n"
        netlist.set_netlist(str_netlist)
        equ_netlists = ["M0001 VDD IN001 IN002 GND NMOS\n",
                        "M0001 IN002 IN001 VDD GND NMOS\n",
                        "M0001 VDD IN002 IN001 GND NMOS\n",
                        "M0001 IN001 IN002 VDD GND NMOS\n"]
        self.assertCountEqual(equ_netlists, netlist.get_equ_netlists())

    def test_equ_netlist_complete_test_1(self):
        netlist = Netlist()
        str_netlist = "M0001 N0002 IN001 N0001 GND NMOS\n"
        netlist.set_netlist(str_netlist)
        equ_netlists = []
        script_dir = os.path.dirname(os.path.realpath(__file__)) + '/'
        with open(script_dir + 'complete_test_results_1.txt') as results:
            temp_netlist = ''
            for line in results:
                if line == "\n":
                    equ_netlists.append(temp_netlist)
                    temp_netlist = ''
                    continue
                temp_netlist += line
        self.assertEqual(len(equ_netlists), len(list(netlist.get_equ_netlists())))
        self.assertCountEqual(equ_netlists, netlist.get_equ_netlists())

    def test_equ_netlist_complete_test_2(self):
        netlist = Netlist()
        str_netlist = "M0001 N0002 IN001 N0001 GND NMOS\n" \
                      "M0002 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        equ_netlists = []
        script_dir = os.path.dirname(os.path.realpath(__file__)) + '/'
        with open(script_dir + 'complete_test_results_2.txt') as results:
            temp_netlist = ''
            for line in results:
                if line == "\n":
                    equ_netlists.append(temp_netlist)
                    temp_netlist = ''
                    continue
                temp_netlist += line
        self.assertEqual(len(equ_netlists), len(list(netlist.get_equ_netlists())))
        self.assertCountEqual(equ_netlists, netlist.get_equ_netlists())

    def test_equ_netlist_complete_test_3(self):
        netlist = Netlist()
        str_netlist = "M0001 N0002 IN001 N0001 GND NMOS\n" \
                      "M0002 VDD N0001 N0002 GND NMOS\n" \
                      "M0003 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0004 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        equ_netlists = []
        script_dir = os.path.dirname(os.path.realpath(__file__)) + '/'
        with open(script_dir + 'complete_test_results_3.txt') as results:
            temp_netlist = ''
            for line in results:
                if line == "\n":
                    equ_netlists.append(temp_netlist)
                    temp_netlist = ''
                    continue
                temp_netlist += line
        self.assertEqual(len(equ_netlists), len(list(netlist.get_equ_netlists())))
        self.assertCountEqual(equ_netlists, netlist.get_equ_netlists())

    def test_update_transistors(self):
        netlist = Netlist()
        str_netlist = "M0003 N0002 IN001 N0001 GND NMOS\n" \
                      "M0008 VDD N0001 N0002 GND NMOS\n" \
                      "M0001 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        netlist.update_transistor_names()
        cnt = 1
        for transistor in netlist.get_transistors():
            self.assertEqual(int(transistor.get_name()[1:]), cnt)
            cnt += 1

    def test_remove_transistor(self):
        netlist = Netlist()
        str_netlist = "M0003 N0002 IN001 N0001 GND NMOS\n" \
                      "M0008 VDD N0001 N0002 GND NMOS\n" \
                      "M0001 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        netlist.remove_transistor("M0008", True)
        str_netlist = "M0001 N0002 IN001 N0001 GND NMOS\n" \
                      "M0002 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        self.assertEqual(str_netlist, netlist.get_netlist_string())
        self.assertEqual(len(netlist.p_transistors_), 2)
        self.assertEqual(len(netlist.n_transistors_), 1)

    def test_remove_transistor_with_dup(self):
        netlist = Netlist()
        str_netlist = "M0003 N0002 IN001 N0001 GND NMOS\n" \
                      "M0008 VDD N0001 N0002 GND NMOS\n" \
                      "M0001 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        netlist.remove_transistor("M0003", True)
        str_netlist = "M0001 VDD N0001 N0002 GND NMOS\n" \
                      "M0002 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        self.assertEqual(str_netlist, netlist.get_netlist_string())
        self.assertEqual(len(netlist.p_transistors_), 2)
        self.assertEqual(len(netlist.n_transistors_), 1)

    def test_short_transistor_with_auto_node_removal(self):
        netlist = Netlist()
        str_netlist = "M0001 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        self.assertEqual(len(netlist.node_dicts_[netlist.get_set_name_for_node('N0001')]), 1)
        old_gate_name = netlist.turn_on_transistor('M0001')
        self.assertEqual(old_gate_name, 'N0001')
        self.assertEqual(len(netlist.node_dicts_[netlist.get_set_name_for_node('N0001')]), 0)

    def test_short_transistor_without_auto_node_removal(self):
        netlist = Netlist()
        str_netlist = "M0001 VDD N0001 N0002 GND NMOS\n" \
                      "M0002 N0001 IN001 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        self.assertEqual(len(netlist.node_dicts_[netlist.get_set_name_for_node('N0001')]), 2)
        old_gate_name = netlist.turn_on_transistor('M0001')
        self.assertEqual(old_gate_name, 'N0001')
        self.assertEqual(len(netlist.node_dicts_[netlist.get_set_name_for_node('N0001')]), 2)

    def test_unshort_transistor(self):
        netlist = Netlist()
        str_netlist = "M0001 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        self.assertEqual(len(netlist.node_dicts_[netlist.get_set_name_for_node('N0001')]), 1)
        old_gate_name = netlist.turn_on_transistor('M0001')
        self.assertEqual(old_gate_name, 'N0001')
        self.assertEqual(len(netlist.node_dicts_[netlist.get_set_name_for_node('N0001')]), 0)
        netlist.replace_transistor_gate('M0001', old_gate_name)
        self.assertEqual(len(netlist.node_dicts_[netlist.get_set_name_for_node('N0001')]), 1)
        self.assertEqual(str_netlist, netlist.get_netlist_string())

    def test_transistor_gate_diff_same(self):
        netlist = Netlist()
        str_netlist = "M0001 VDD VDD N0002 GND NMOS\n" \
                      "M0002 N0001 IN002 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        self.assertTrue(netlist.get_transistor('M0001').is_gate_same_as_one_diff())
        self.assertTrue(netlist.get_transistor('M0002').is_gate_same_as_one_diff())
        self.assertFalse(netlist.get_transistor('M0003').is_gate_same_as_one_diff())

    def test_get_max_cnt(self):
        netlist = Netlist()
        str_netlist = "M0001 VDD VDD N0002 GND NMOS\n" \
                      "M0002 N0001 IN002 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        self.assertEqual(2, netlist.get_max_cnt_for_dict('internal'))
        self.assertEqual(2, netlist.get_max_cnt_for_dict('in'))

    def test_shift_node_cnt(self):
        netlist = Netlist()
        str_netlist = "M0001 VDD VDD N0002 GND NMOS\n" \
                      "M0002 N0001 IN002 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0001 IN002 VDD PMOS\n"
        netlist.set_netlist(str_netlist)
        netlist.shift_node_cnt_for_dict('internal', 3)
        str_netlist = "M0001 VDD VDD N0005 GND NMOS\n" \
                      "M0002 N0004 IN002 IN002 VDD PMOS\n" \
                      "M0003 OUT01 N0004 IN002 VDD PMOS\n"
        self.assertEqual(str_netlist, netlist.get_netlist_string())
        self.assertCountEqual(['N0004', 'N0005'], netlist.node_dicts_['internal'].keys())


if __name__ == '__main__':
    unittest.main()
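Each netlist line used throughout these tests has six whitespace-separated fields. A minimal parser sketch follows; the field names assume the usual SPICE MOSFET terminal order (drain, gate, source, bulk), which is consistent with how the tests use the strings but is my assumption, not something the suite states:

```python
from collections import namedtuple

Transistor = namedtuple("Transistor", "name drain gate source bulk kind")


def parse_netlist(text):
    """Parse one transistor per line: NAME DRAIN GATE SOURCE BULK {NMOS|PMOS}."""
    return [Transistor(*line.split()) for line in text.splitlines() if line.strip()]


# The CMOS inverter from test_csim: NMOS pulls OUT01 to GND, PMOS to VDD.
inverter = parse_netlist("M0001 GND IN001 OUT01 GND NMOS\n"
                         "M0002 VDD IN001 OUT01 VDD PMOS\n")
print(inverter[0].gate, inverter[1].kind)  # IN001 PMOS
```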
f5349ba2d4a67c15eea86c0e2da1c9944d04b222 | 6,005 | py | Python | ner/tests/data/test_bert_model_collate.py | freedomkite/easytext | ef83261a366bd8d7c259aa112da14f3fa7cdf918 | [
"MIT"
] | 17 | 2020-06-19T12:12:13.000Z | 2022-01-28T02:07:01.000Z | ner/tests/data/test_bert_model_collate.py | freedomkite/easytext | ef83261a366bd8d7c259aa112da14f3fa7cdf918 | [
"MIT"
] | 24 | 2020-06-08T08:51:36.000Z | 2022-02-08T03:30:19.000Z | ner/tests/data/test_bert_model_collate.py | freedomkite/easytext | ef83261a366bd8d7c259aa112da14f3fa7cdf918 | [
"MIT"
] | 7 | 2020-07-20T06:40:00.000Z | 2022-01-28T03:52:49.000Z | #!/usr/bin/env python 3
# -*- coding: utf-8 -*-
#
# Copyright (c) 2020 PanXu, Inc. All Rights Reserved
#
"""
Tests for the Bert Model Collate

Authors: PanXu
Date: 2020/09/10 15:23:00
"""
import logging

from torch.utils.data import DataLoader

from easytext.data import ModelInputs
from easytext.utils.json_util import json2str
from easytext.utils import log_util

from ner.tests import ASSERT
from ner.data import BertModelCollate

log_util.config()


def test_bert_model_collate_with_special_token(msra_dataset, msra_vocabulary, bert_tokenizer):
    """
    Test the bert model collate with the CLS and SEP special tokens added.
    :param msra_dataset: the msra dataset
    :param msra_vocabulary: the msra_vocabulary fixture defined in conftest.py
    :return: None
    """
    label_vocab = msra_vocabulary["label_vocabulary"]
    sequence_max_len = 13
    model_collate = BertModelCollate(tokenizer=bert_tokenizer,
                                     sequence_label_vocab=label_vocab,
                                     add_special_token=True,
                                     sequence_max_len=sequence_max_len)
    batch_size = 5
    data_loader = DataLoader(dataset=msra_dataset,
                             batch_size=batch_size,
                             shuffle=False,
                             num_workers=0,
                             collate_fn=model_collate)

    for model_inputs in data_loader:
        model_inputs: ModelInputs = model_inputs
        logging.info(f"model inputs: {json2str(model_inputs)}")

        ASSERT.assertEqual(batch_size, model_inputs.batch_size)
        ASSERT.assertEqual((batch_size, sequence_max_len), model_inputs.labels.size())

        input_ids = model_inputs.model_inputs["input_ids"]
        ASSERT.assertEqual((batch_size, sequence_max_len), input_ids.size())

        sequence_mask = model_inputs.model_inputs["sequence_mask"]
        ASSERT.assertEqual((batch_size, sequence_max_len), sequence_mask.size())

        ASSERT.assertEqual((batch_size, sequence_max_len), model_inputs.labels.size())

        sequence_mask0 = sequence_mask[0].tolist()
        expect_sequence_mask0 = [0] + [1] * (sequence_max_len - 2) + [0]
        ASSERT.assertEqual(expect_sequence_mask0, sequence_mask0)

        sequence_mask4 = sequence_mask[4].tolist()
        expect_sequence_mask0 = [0] + [1] * 8 + [0] * (sequence_max_len - 8 - 1)
        ASSERT.assertEqual(expect_sequence_mask0, sequence_mask4)

        sequence_label0 = model_inputs.labels[0].tolist()
        sequence_label0_str = model_inputs.model_inputs["metadata"][0]["labels"][0:sequence_max_len - 2]
        expect_sequence_label0 = [label_vocab.padding_index] \
                                 + [label_vocab.index(label) for label in sequence_label0_str] \
                                 + [label_vocab.padding_index]
        ASSERT.assertEqual(sequence_label0, expect_sequence_label0)

        sequence_label4 = model_inputs.labels[4].tolist()
        sequence_label4_str = model_inputs.model_inputs["metadata"][4]["labels"]
        expect_sequence_label4 = [label_vocab.padding_index] \
                                 + [label_vocab.index(label) for label in sequence_label4_str] \
                                 + [label_vocab.padding_index] * (sequence_max_len - 1 - len(sequence_label4_str))
        ASSERT.assertEqual(sequence_label4, expect_sequence_label4)
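The mask expectations asserted in these tests follow one simple pattern: real tokens are 1, while CLS/SEP positions and padding are 0. A standalone restatement, useful for checking the arithmetic (the `expected_mask` helper is my own name, not part of the test module):

```python
def expected_mask(num_tokens, max_len, add_special_token):
    """Mask the tests assert: 1 for real tokens, 0 for CLS/SEP and padding."""
    if add_special_token:
        body = min(num_tokens, max_len - 2)  # leave room for CLS and SEP
        return [0] + [1] * body + [0] * (max_len - body - 1)
    body = min(num_tokens, max_len)
    return [1] * body + [0] * (max_len - body)


# Sample 4 has 8 real tokens and sequence_max_len is 13:
print(expected_mask(8, 13, True))   # [0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
print(expected_mask(8, 13, False))  # [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```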
def test_bert_model_collate_without_special_token(msra_dataset, msra_vocabulary, bert_tokenizer):
    """
    Test the bert model collate without the CLS and SEP special tokens.
    :param msra_dataset: the msra dataset
    :param msra_vocabulary: the msra_vocabulary fixture defined in conftest.py
    :return: None
    """
    label_vocab = msra_vocabulary["label_vocabulary"]
    sequence_max_len = 13
    model_collate = BertModelCollate(tokenizer=bert_tokenizer,
                                     sequence_label_vocab=label_vocab,
                                     add_special_token=False,
                                     sequence_max_len=sequence_max_len)
    batch_size = 5
    data_loader = DataLoader(dataset=msra_dataset,
                             batch_size=batch_size,
                             shuffle=False,
                             num_workers=0,
                             collate_fn=model_collate)

    for model_inputs in data_loader:
        model_inputs: ModelInputs = model_inputs
        logging.info(f"model inputs: {json2str(model_inputs)}")

        ASSERT.assertEqual(batch_size, model_inputs.batch_size)
        ASSERT.assertEqual((batch_size, sequence_max_len), model_inputs.labels.size())

        input_ids = model_inputs.model_inputs["input_ids"]
        ASSERT.assertEqual((batch_size, sequence_max_len), input_ids.size())

        sequence_mask = model_inputs.model_inputs["sequence_mask"]
        ASSERT.assertEqual((batch_size, sequence_max_len), sequence_mask.size())

        ASSERT.assertEqual((batch_size, sequence_max_len), model_inputs.labels.size())

        sequence_mask0 = sequence_mask[0].tolist()
        expect_sequence_mask0 = [1] * sequence_max_len
        ASSERT.assertEqual(expect_sequence_mask0, sequence_mask0)

        sequence_mask4 = sequence_mask[4].tolist()
        expect_sequence_mask0 = [1] * 8 + [0] * (sequence_max_len - 8)
        ASSERT.assertEqual(expect_sequence_mask0, sequence_mask4)

        sequence_label0 = model_inputs.labels[0].tolist()
        sequence_label0_str = model_inputs.model_inputs["metadata"][0]["labels"][0:sequence_max_len]
        expect_sequence_label0 = [label_vocab.index(label) for label in sequence_label0_str]
        ASSERT.assertEqual(sequence_label0, expect_sequence_label0)

        sequence_label4 = model_inputs.labels[4].tolist()
        sequence_label4_str = model_inputs.model_inputs["metadata"][4]["labels"]
        expect_sequence_label4 = [label_vocab.index(label) for label in sequence_label4_str] \
                                 + [label_vocab.padding_index] * (sequence_max_len - len(sequence_label4_str))
        ASSERT.assertEqual(sequence_label4, expect_sequence_label4)
f545fb5f89ff571cf66ba415334b1187b547d30c | 96 | py | Python | venv/lib/python3.8/site-packages/aiohttp/client_exceptions.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
]
# venv/lib/python3.8/site-packages/aiohttp/client_exceptions.py (Retraces/UkraineBot, MIT):
# the recorded file content was a pip cache pool hard-link path, not Python source.
# MeshFitter/__init__.py (hizkiaadrian/3detector, MIT)

from MeshFitter.CarMesh import *
from MeshFitter.LossFunction import *
# tests/unit/internal/test_timeutils.py (Sage-Bionetworks/spccore, Apache-2.0)

from spccore.internal.timeutils import *
def test_from_epoch_time_to_iso():
assert from_epoch_time_to_iso(None) is None
assert from_epoch_time_to_iso(0) == '1970-01-01T00:00:00.000Z'
assert from_epoch_time_to_iso(1561939380.9995) == '2019-07-01T00:03:01.000Z'
assert from_epoch_time_to_iso(-6106060800.0) == '1776-07-04T00:00:00.000Z'
def test_from_iso_to_datetime():
assert from_iso_to_datetime('1970-01-01T00:00:00.000Z') == UNIX_EPOCH
assert from_iso_to_datetime('2019-07-01T00:03:00.999Z') == datetime.datetime(2019, 7, 1, 0, 3, 0, 999000)
assert from_iso_to_datetime('1776-07-04T00:00:00.000Z') == datetime.datetime(1776, 7, 4, 0, 0, 0)
def test_from_datetime_to_iso():
assert from_datetime_to_iso(UNIX_EPOCH) == '1970-01-01T00:00:00.000Z'
assert from_datetime_to_iso(datetime.datetime(2019, 7, 1, 0, 3, 0, 999499)) == '2019-07-01T00:03:00.999Z'
assert from_datetime_to_iso(datetime.datetime(2019, 7, 1, 0, 3, 0, 999500)) == '2019-07-01T00:03:01.000Z'
assert from_datetime_to_iso(datetime.datetime(1776, 7, 4, 0, 0, 0)) == '1776-07-04T00:00:00.000Z'
def test_from_epoch_time_to_datetime():
assert from_epoch_time_to_datetime(0) == UNIX_EPOCH
assert from_epoch_time_to_datetime(1561939380.9995) == datetime.datetime(2019, 7, 1, 0, 3, 0, 999500)
assert from_epoch_time_to_datetime(-6106060800.0) == datetime.datetime(1776, 7, 4, 0, 0, 0)
def test_from_datetime_to_epoch_time():
assert from_datetime_to_epoch_time(datetime.datetime(1776, 7, 4, 0, 0, 0)) == -6106060800.0
assert from_datetime_to_epoch_time(datetime.datetime(2019, 7, 1, 0, 3, 0, 999500)) == 1561939380.9995
assert from_datetime_to_epoch_time(UNIX_EPOCH) == 0
| 50.147059 | 109 | 0.73607 | 296 | 1,705 | 3.942568 | 0.128378 | 0.145673 | 0.100257 | 0.115681 | 0.841474 | 0.745501 | 0.511568 | 0.487575 | 0.272494 | 0.214225 | 0 | 0.240134 | 0.123167 | 1,705 | 33 | 110 | 51.666667 | 0.540468 | 0 | 0 | 0 | 0 | 0 | 0.140762 | 0.140762 | 0 | 0 | 0 | 0 | 0.73913 | 1 | 0.217391 | true | 0 | 0.043478 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
19930cf173e2c0419bd0d394837a601c6fecbee9 | 105 | py | Python | src/example_package/newex.py | nnceylan/neco-example-package | d839ab34d7997374ce28dfccc944481bf12277d0 | [
"MIT"
] | null | null | null | src/example_package/newex.py | nnceylan/neco-example-package | d839ab34d7997374ce28dfccc944481bf12277d0 | [
"MIT"
] | null | null | null | src/example_package/newex.py | nnceylan/neco-example-package | d839ab34d7997374ce28dfccc944481bf12277d0 | [
"MIT"
] | null | null | null | def squared(number):
return number * number
def two_squared(number):
return 2 * number * number
| 17.5 | 30 | 0.695238 | 14 | 105 | 5.142857 | 0.428571 | 0.361111 | 0.527778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 0.219048 | 105 | 5 | 31 | 21 | 0.865854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
2742373461687a87108ecec628f9dc88c21fac65 | 42 | py | Python | datagen/__init__.py | glebkorolkov/datagen | fcb7b7205ce18eb26cdb2550601cc4cd07423c9f | [
"MIT"
] | 2 | 2019-12-19T15:56:46.000Z | 2020-12-02T04:16:44.000Z | datagen/__init__.py | glebkorolkov/datagen | fcb7b7205ce18eb26cdb2550601cc4cd07423c9f | [
"MIT"
] | null | null | null | datagen/__init__.py | glebkorolkov/datagen | fcb7b7205ce18eb26cdb2550601cc4cd07423c9f | [
"MIT"
] | null | null | null | from .data_generator import DataGenerator
| 21 | 41 | 0.880952 | 5 | 42 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
27ac060d9cd8b52257d5342b296173bec9416c47 | 79 | py | Python | jacdac/speech_synthesis/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | 1 | 2022-02-15T21:30:36.000Z | 2022-02-15T21:30:36.000Z | jacdac/speech_synthesis/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | null | null | null | jacdac/speech_synthesis/__init__.py | microsoft/jacdac-python | 712ad5559e29065f5eccb5dbfe029c039132df5a | [
"MIT"
] | 1 | 2022-02-08T19:32:45.000Z | 2022-02-08T19:32:45.000Z | # Autogenerated file.
from .client import SpeechSynthesisClient # type: ignore
| 26.333333 | 56 | 0.810127 | 8 | 79 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126582 | 79 | 2 | 57 | 39.5 | 0.927536 | 0.405063 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
27b3d0cd701c30143dda8e8608f75831d8ab28b7 | 211 | py | Python | notecoin/huobi/model/core.py | notechats/notecoin | 57e1ed71567ce8864158f24c00ed47addbd9851f | [
"Apache-2.0"
] | null | null | null | notecoin/huobi/model/core.py | notechats/notecoin | 57e1ed71567ce8864158f24c00ed47addbd9851f | [
"Apache-2.0"
] | null | null | null | notecoin/huobi/model/core.py | notechats/notecoin | 57e1ed71567ce8864158f24c00ed47addbd9851f | [
"Apache-2.0"
] | 1 | 2022-03-26T11:42:18.000Z | 2022-03-26T11:42:18.000Z |
class BaseModel:
def __init__(self, name='base', *args, **kwargs):
self.name = name
def train(self, df, *args, **kwargs):
pass
def predict(self, df, *args, **kwargs):
pass
| 19.181818 | 53 | 0.554502 | 26 | 211 | 4.346154 | 0.5 | 0.265487 | 0.176991 | 0.283186 | 0.353982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2891 | 211 | 10 | 54 | 21.1 | 0.753333 | 0 | 0 | 0.285714 | 0 | 0 | 0.019048 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.285714 | 0 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
e31263bf5bf55b23c4f923b0287c0f00ad6e765b | 271 | py | Python | test.py | hassantahhan/iampassword | 45d2deb3ad25f4e97d577740977e66119e8eb463 | [
"Apache-2.0"
] | null | null | null | test.py | hassantahhan/iampassword | 45d2deb3ad25f4e97d577740977e66119e8eb463 | [
"Apache-2.0"
] | null | null | null | test.py | hassantahhan/iampassword | 45d2deb3ad25f4e97d577740977e66119e8eb463 | [
"Apache-2.0"
] | null | null | null | import handler
def test_set_iam_password_policy():
print("### test_set_iam_password_policy() started")
print(handler.set_iam_password_policy())
print("### test_set_iam_password_policy() ended!")
if __name__ == "__main__":
test_set_iam_password_policy()
| 27.1 | 55 | 0.749077 | 36 | 271 | 4.888889 | 0.388889 | 0.170455 | 0.397727 | 0.568182 | 0.715909 | 0.556818 | 0.556818 | 0.556818 | 0.556818 | 0.556818 | 0 | 0 | 0.125461 | 271 | 9 | 56 | 30.111111 | 0.742616 | 0 | 0 | 0 | 0 | 0 | 0.335793 | 0.221402 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | true | 0.714286 | 0.142857 | 0 | 0.285714 | 0.428571 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 6 |
e31859555ecd6e661397bbfd6c1f2ffdcb38e95c | 122 | py | Python | seamless/graphs/multi_module/mytestpackage/sub/mod1.py | sjdv1982/seamless | 1b814341e74a56333c163f10e6f6ceab508b7df9 | [
"MIT"
] | 15 | 2017-06-07T12:49:12.000Z | 2020-07-25T18:06:04.000Z | seamless/graphs/multi_module/mytestpackage/sub/mod1.py | sjdv1982/seamless | 1b814341e74a56333c163f10e6f6ceab508b7df9 | [
"MIT"
] | 110 | 2016-06-21T23:20:44.000Z | 2022-02-24T16:15:22.000Z | seamless/graphs/multi_module/mytestpackage/sub/mod1.py | sjdv1982/seamless | 1b814341e74a56333c163f10e6f6ceab508b7df9 | [
"MIT"
] | 6 | 2016-06-21T11:19:22.000Z | 2019-01-21T13:45:39.000Z | from .. import testvalue
from mytestpackage.mod3 import testfunc
from ..mod4 import blah
def func():
return testvalue
| 20.333333 | 39 | 0.770492 | 16 | 122 | 5.875 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.163934 | 122 | 5 | 40 | 24.4 | 0.901961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.6 | 0.2 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
e3298bd703a55f70a03ee13f6b08d1206cfbe5e0 | 12,127 | py | Python | PiCN/Layers/ICNLayer/ContentStore/test/test_NamedObjectTree.py | NikolaiRutz/PiCN | 7775c61caae506a88af2e4ec34349e8bd9098459 | [
"BSD-3-Clause"
] | null | null | null | PiCN/Layers/ICNLayer/ContentStore/test/test_NamedObjectTree.py | NikolaiRutz/PiCN | 7775c61caae506a88af2e4ec34349e8bd9098459 | [
"BSD-3-Clause"
] | 5 | 2020-07-15T09:01:42.000Z | 2020-09-28T08:45:21.000Z | PiCN/Layers/ICNLayer/ContentStore/test/test_NamedObjectTree.py | NikolaiRutz/PiCN | 7775c61caae506a88af2e4ec34349e8bd9098459 | [
"BSD-3-Clause"
] | null | null | null | """Tests for the NamedObjectTree data structure"""
import unittest
from PiCN.Layers.ICNLayer.ContentStore.NamedObjectTree import NamedObjectTree
from PiCN.Layers.ICNLayer.ContentStore.BaseContentStore import ContentStoreEntry
from PiCN.Packets import Content, Name
class test_NamedObjectTree(unittest.TestCase):
def setUp(self):
self.tree1_co = NamedObjectTree() # for tests with objects of type Content
self.tree_cse = NamedObjectTree() # for tests with objects of type ContentStoreEntry
def tearDown(self):
pass
def test_empty_tree(self):
n = Name("/does/not/exist")
self.tree1_co.exact_lookup(n)
self.tree1_co.remove(n)
def test_insert_and_exact_lookup(self):
# create content objects
c1 = Content("/ndn/ch/unibas/foo1", "unibas-foo1")
c2 = Content("/ndn/ch/unibas/foo2", "unibas-foo2")
c3 = Content("/ndn/ch/unibas/foo/bar1", "unibas-foo-bar1")
c4 = Content("/ndn/ch/unibas/foo3", "unibas-foo3")
c5 = Content("/ndn/ch/unibas/foo/bar2", "unibas-foo-bar2")
c6 = Content("/ndn/ch/unibas/foo4", "unibas-foo4")
c7 = Content("/ndn/ch", "ndn-ch")
# insert
self.tree1_co.insert(c1)
self.tree1_co.insert(c2)
self.tree1_co.insert(c3)
self.tree1_co.insert(c4)
self.tree1_co.insert(c5)
self.tree1_co.insert(c6)
self.tree1_co.insert(c7)
# exact lookup
self.assertEqual(self.tree1_co.exact_lookup(c1.name), c1)
self.assertEqual(self.tree1_co.exact_lookup(c2.name), c2)
self.assertEqual(self.tree1_co.exact_lookup(c3.name), c3)
self.assertEqual(self.tree1_co.exact_lookup(c4.name), c4)
self.assertEqual(self.tree1_co.exact_lookup(c5.name), c5)
self.assertEqual(self.tree1_co.exact_lookup(c6.name), c6)
self.assertEqual(self.tree1_co.exact_lookup(c7.name), c7)
self.assertEqual(self.tree1_co.exact_lookup(Name("/ndn")), None)
self.assertEqual(self.tree1_co.exact_lookup(Name("/unknown")), None)
self.assertEqual(self.tree1_co.exact_lookup(Name("/ndn/ch/unknown")), None)
self.assertEqual(self.tree1_co.exact_lookup(Name("/ndn/ch/unibas/foo1/unknown")), None)
def test_insert_remove_exact_lookup(self):
# create content objects
c1 = Content("/ndn/ch/unibas/foo1", "unibas-foo1")
c2 = Content("/ndn/ch/unibas/foo2", "unibas-foo2")
c3 = Content("/ndn/ch/unibas/foo/bar1", "unibas-foo-bar1")
c4 = Content("/ndn/ch/unibas/foo3", "unibas-foo3")
c5 = Content("/ndn/ch/unibas/foo/bar2", "unibas-foo-bar2")
c6 = Content("/ndn/ch/unibas/foo4", "unibas-foo4")
c7 = Content("/ndn/ch", "ndn-ch")
# insert
self.tree1_co.insert(c1)
self.tree1_co.insert(c2)
self.tree1_co.insert(c3)
self.tree1_co.insert(c4)
self.tree1_co.insert(c5)
self.tree1_co.insert(c6)
self.tree1_co.insert(c7)
# remove
self.tree1_co.remove(c2.name)
self.tree1_co.remove(c5.name)
self.tree1_co.remove(c7.name)
# exact lookup
self.assertEqual(self.tree1_co.exact_lookup(c1.name), c1)
self.assertEqual(self.tree1_co.exact_lookup(c2.name), None)
self.assertEqual(self.tree1_co.exact_lookup(c3.name), c3)
self.assertEqual(self.tree1_co.exact_lookup(c4.name), c4)
self.assertEqual(self.tree1_co.exact_lookup(c5.name), None)
self.assertEqual(self.tree1_co.exact_lookup(c6.name), c6)
self.assertEqual(self.tree1_co.exact_lookup(c7.name), None)
# insert again and lookup
self.tree1_co.insert(c2)
self.tree1_co.insert(c5)
self.tree1_co.insert(c7)
self.assertEqual(self.tree1_co.exact_lookup(c2.name), c2)
self.assertEqual(self.tree1_co.exact_lookup(c5.name), c5)
self.assertEqual(self.tree1_co.exact_lookup(c7.name), c7)
def test_insert_remove_prefix_lookup(self):
# create content objects
c1 = Content("/ndn/ch/unibas/foo1", "unibas-foo1")
c2 = Content("/ndn/ch/unibas/foo2", "unibas-foo2")
c3 = Content("/ndn/ch/unibas/foo/bar1", "unibas-foo-bar1")
c4 = Content("/ndn/ch/unibas/foo3", "unibas-foo3")
c5 = Content("/ndn/ch/unibas/foo/bar2", "unibas-foo-bar2")
c6 = Content("/ndn/ch/unibas/foo4", "unibas-foo4")
c7 = Content("/ndn/ch", "ndn-ch")
# insert
self.tree1_co.insert(c1)
self.tree1_co.insert(c2)
self.tree1_co.insert(c3)
self.tree1_co.insert(c4)
self.tree1_co.insert(c5)
self.tree1_co.insert(c6)
self.tree1_co.insert(c7)
# prefix lookup
n1 = self.tree1_co.prefix_lookup(Name("/ndn")).name
self.assertTrue(Name("/ndn").is_prefix_of(n1))
n2 = self.tree1_co.prefix_lookup(Name("/ndn/ch")).name
self.assertTrue(Name("/ndn/ch").is_prefix_of(n2))
n3 = self.tree1_co.prefix_lookup(Name("/ndn/ch/unibas")).name
self.assertTrue(Name("/ndn/ch/unibas").is_prefix_of(n3))
n4 = self.tree1_co.prefix_lookup(Name("/ndn/ch/unibas/foo1")).name
self.assertTrue(Name("/ndn/ch/unibas/foo1").is_prefix_of(n4))
n5 = self.tree1_co.prefix_lookup(Name("/ndn/ch/unibas/foo/bar1")).name
self.assertTrue(Name("/ndn/ch/unibas/foo/bar1").is_prefix_of(n5))
self.assertEqual(self.tree1_co.prefix_lookup(Name("/unknown")), None)
self.assertEqual(self.tree1_co.prefix_lookup(Name("/ndn/unknown")), None)
self.assertEqual(self.tree1_co.prefix_lookup(Name("/ndn/ch/unknown")), None)
self.assertEqual(self.tree1_co.prefix_lookup(Name("/ndn/ch/foo1/unknown")), None)
# remove
self.tree1_co.remove(c1.name)
# prefix lookup
n1 = self.tree1_co.prefix_lookup(Name("/ndn")).name
self.assertTrue(Name("/ndn").is_prefix_of(n1))
n2 = self.tree1_co.prefix_lookup(Name("/ndn/ch")).name
self.assertTrue(Name("/ndn/ch").is_prefix_of(n2))
#################### Same Tests with objects of type ContentStoreEntry instead of Content ####################
def test_cse_empty_tree(self):
n = Name("/does/not/exist")
self.tree_cse.exact_lookup(n)
self.tree_cse.remove(n)
def test_cse_insert_and_exact_lookup(self):
# create objects of type ContentStoreEntry
cse1 = ContentStoreEntry(Content("/ndn/ch/unibas/foo1", "unibas-foo1"))
cse2 = ContentStoreEntry(Content("/ndn/ch/unibas/foo2", "unibas-foo2"))
cse3 = ContentStoreEntry(Content("/ndn/ch/unibas/foo/bar1", "unibas-foo-bar1"))
cse4 = ContentStoreEntry(Content("/ndn/ch/unibas/foo3", "unibas-foo3"))
cse5 = ContentStoreEntry(Content("/ndn/ch/unibas/foo/bar2", "unibas-foo-bar2"))
cse6 = ContentStoreEntry(Content("/ndn/ch/unibas/foo4", "unibas-foo4"))
cse7 = ContentStoreEntry(Content("/ndn/ch", "ndn-ch"))
# insert
self.tree_cse.insert(cse1)
self.tree_cse.insert(cse2)
self.tree_cse.insert(cse3)
self.tree_cse.insert(cse4)
self.tree_cse.insert(cse5)
self.tree_cse.insert(cse6)
self.tree_cse.insert(cse7)
# exact lookup
self.assertEqual(self.tree_cse.exact_lookup(cse1.name), cse1)
self.assertEqual(self.tree_cse.exact_lookup(cse2.name), cse2)
self.assertEqual(self.tree_cse.exact_lookup(cse3.name), cse3)
self.assertEqual(self.tree_cse.exact_lookup(cse4.name), cse4)
self.assertEqual(self.tree_cse.exact_lookup(cse5.name), cse5)
self.assertEqual(self.tree_cse.exact_lookup(cse6.name), cse6)
self.assertEqual(self.tree_cse.exact_lookup(cse7.name), cse7)
self.assertEqual(self.tree_cse.exact_lookup(Name("/ndn")), None)
self.assertEqual(self.tree_cse.exact_lookup(Name("/unknown")), None)
self.assertEqual(self.tree_cse.exact_lookup(Name("/ndn/ch/unknown")), None)
self.assertEqual(self.tree_cse.exact_lookup(Name("/ndn/ch/unibas/foo1/unknown")), None)
def test_cse_insert_remove_exact_lookup(self):
# create objects of type ContentStoreEntry
cse1 = ContentStoreEntry(Content("/ndn/ch/unibas/foo1", "unibas-foo1"))
cse2 = ContentStoreEntry(Content("/ndn/ch/unibas/foo2", "unibas-foo2"))
cse3 = ContentStoreEntry(Content("/ndn/ch/unibas/foo/bar1", "unibas-foo-bar1"))
cse4 = ContentStoreEntry(Content("/ndn/ch/unibas/foo3", "unibas-foo3"))
cse5 = ContentStoreEntry(Content("/ndn/ch/unibas/foo/bar2", "unibas-foo-bar2"))
cse6 = ContentStoreEntry(Content("/ndn/ch/unibas/foo4", "unibas-foo4"))
cse7 = ContentStoreEntry(Content("/ndn/ch", "ndn-ch"))
# insert
self.tree_cse.insert(cse1)
self.tree_cse.insert(cse2)
self.tree_cse.insert(cse3)
self.tree_cse.insert(cse4)
self.tree_cse.insert(cse5)
self.tree_cse.insert(cse6)
self.tree_cse.insert(cse7)
# remove
self.tree_cse.remove(cse2.name)
self.tree_cse.remove(cse5.name)
self.tree_cse.remove(cse7.name)
# exact lookup
self.assertEqual(self.tree_cse.exact_lookup(cse1.name), cse1)
self.assertEqual(self.tree_cse.exact_lookup(cse2.name), None)
self.assertEqual(self.tree_cse.exact_lookup(cse3.name), cse3)
self.assertEqual(self.tree_cse.exact_lookup(cse4.name), cse4)
self.assertEqual(self.tree_cse.exact_lookup(cse5.name), None)
self.assertEqual(self.tree_cse.exact_lookup(cse6.name), cse6)
self.assertEqual(self.tree_cse.exact_lookup(cse7.name), None)
# insert again and lookup
self.tree_cse.insert(cse2)
self.tree_cse.insert(cse5)
self.tree_cse.insert(cse7)
self.assertEqual(self.tree_cse.exact_lookup(cse2.name), cse2)
self.assertEqual(self.tree_cse.exact_lookup(cse5.name), cse5)
self.assertEqual(self.tree_cse.exact_lookup(cse7.name), cse7)
def test_cse_insert_remove_prefix_lookup(self):
# create objects of type ContentStoreEntry
cse1 = Content("/ndn/ch/unibas/foo1", "unibas-foo1")
cse2 = Content("/ndn/ch/unibas/foo2", "unibas-foo2")
cse3 = Content("/ndn/ch/unibas/foo/bar1", "unibas-foo-bar1")
cse4 = Content("/ndn/ch/unibas/foo3", "unibas-foo3")
cse5 = Content("/ndn/ch/unibas/foo/bar2", "unibas-foo-bar2")
cse6 = Content("/ndn/ch/unibas/foo4", "unibas-foo4")
cse7 = Content("/ndn/ch", "ndn-ch")
# insert
self.tree_cse.insert(cse1)
self.tree_cse.insert(cse2)
self.tree_cse.insert(cse3)
self.tree_cse.insert(cse4)
self.tree_cse.insert(cse5)
self.tree_cse.insert(cse6)
self.tree_cse.insert(cse7)
# prefix lookup
n1 = self.tree_cse.prefix_lookup(Name("/ndn")).name
self.assertTrue(Name("/ndn").is_prefix_of(n1))
n2 = self.tree_cse.prefix_lookup(Name("/ndn/ch")).name
self.assertTrue(Name("/ndn/ch").is_prefix_of(n2))
n3 = self.tree_cse.prefix_lookup(Name("/ndn/ch/unibas")).name
self.assertTrue(Name("/ndn/ch/unibas").is_prefix_of(n3))
n4 = self.tree_cse.prefix_lookup(Name("/ndn/ch/unibas/foo1")).name
self.assertTrue(Name("/ndn/ch/unibas/foo1").is_prefix_of(n4))
n5 = self.tree_cse.prefix_lookup(Name("/ndn/ch/unibas/foo/bar1")).name
self.assertTrue(Name("/ndn/ch/unibas/foo/bar1").is_prefix_of(n5))
self.assertEqual(self.tree_cse.prefix_lookup(Name("/unknown")), None)
self.assertEqual(self.tree_cse.prefix_lookup(Name("/ndn/unknown")), None)
self.assertEqual(self.tree_cse.prefix_lookup(Name("/ndn/ch/unknown")), None)
self.assertEqual(self.tree_cse.prefix_lookup(Name("/ndn/ch/foo1/unknown")), None)
# remove
self.tree_cse.remove(cse1.name)
# prefix lookup
n1 = self.tree_cse.prefix_lookup(Name("/ndn")).name
self.assertTrue(Name("/ndn").is_prefix_of(n1))
n2 = self.tree_cse.prefix_lookup(Name("/ndn/ch")).name
self.assertTrue(Name("/ndn/ch").is_prefix_of(n2)) | 45.762264 | 114 | 0.654655 | 1,676 | 12,127 | 4.585919 | 0.049523 | 0.049441 | 0.090164 | 0.084309 | 0.950169 | 0.913609 | 0.895134 | 0.87848 | 0.849076 | 0.79209 | 0 | 0.034166 | 0.191474 | 12,127 | 265 | 115 | 45.762264 | 0.74972 | 0.051208 | 0 | 0.675 | 0 | 0 | 0.15865 | 0.036907 | 0 | 0 | 0 | 0 | 0.32 | 1 | 0.05 | false | 0.005 | 0.02 | 0 | 0.075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e356b0c535f1e36ead08e7ac02792ce48e9cfe26 | 73 | py | Python | conda_build_fort/run.py | neda-dtu/conda_build_fort | a9e56de6f3cccf1a8ae148245961d1036f7963dc | [
"MIT"
] | null | null | null | conda_build_fort/run.py | neda-dtu/conda_build_fort | a9e56de6f3cccf1a8ae148245961d1036f7963dc | [
"MIT"
] | null | null | null | conda_build_fort/run.py | neda-dtu/conda_build_fort | a9e56de6f3cccf1a8ae148245961d1036f7963dc | [
"MIT"
] | 1 | 2020-01-05T16:25:27.000Z | 2020-01-05T16:25:27.000Z | from .adder_mod.adder import adder as fort_add
print(fort_add(0.3, 0.7)) | 24.333333 | 46 | 0.767123 | 16 | 73 | 3.3125 | 0.6875 | 0.264151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 0.109589 | 73 | 3 | 47 | 24.333333 | 0.753846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
8b5b9c5ac5122dee57c2e36bde893ee70c1cdbd6 | 3,519 | py | Python | tests/test_azimuthalaverage.py | hessammehr/agpy | a9436f8e5b9210ef8a86d03d0fd94f2d4e6212db | [
"MIT"
] | 16 | 2015-05-08T11:14:26.000Z | 2021-11-19T19:05:16.000Z | tests/test_azimuthalaverage.py | hessammehr/agpy | a9436f8e5b9210ef8a86d03d0fd94f2d4e6212db | [
"MIT"
] | 3 | 2016-05-12T16:27:14.000Z | 2020-12-27T01:14:24.000Z | tests/test_azimuthalaverage.py | hessammehr/agpy | a9436f8e5b9210ef8a86d03d0fd94f2d4e6212db | [
"MIT"
] | 19 | 2015-03-30T22:34:14.000Z | 2020-11-25T23:29:53.000Z | from agpy import azimuthalAverage
from pylab import *
yy,xx = indices([10,10])
rr1 = hypot(xx-5,yy-5)
rr2 = hypot(xx-4.5,yy-4.5)
rr3 = hypot(xx-4.43,yy-4.53)
exp1 = exp(-(rr1**2)/(2.0*5**2))
exp2 = exp(-(rr2**2)/(2.0*5**2))
exp3 = exp(-(rr3**2)/(2.0*5**2))
exp1 /= exp1.max()
exp2 /= exp2.max()
exp3 /= exp3.max()
azr1,azav1 = azimuthalAverage(exp1,center=[5,5],binsize=1.0,returnradii=True)
azr2,azav2 = azimuthalAverage(exp2,center=[4.5,4.5],binsize=1.0,returnradii=True)
azr3,azav3 = azimuthalAverage(exp3,center=[4.43,4.53],binsize=1.0,returnradii=True)
azr1b,azav1b = azimuthalAverage(exp1,center=[5,5],binsize=0.5,returnradii=True)
azr2b,azav2b = azimuthalAverage(exp2,center=[4.5,4.5],binsize=0.5,returnradii=True)
azr3b,azav3b = azimuthalAverage(exp3,center=[4.43,4.53],binsize=0.5,returnradii=True)
figure(2)
subplot(231)
plot(azr1,azav1,'x')
title("Center 5,5, binsize 1")
subplot(234)
plot(azr1b,azav1b,'x')
title("Center 5,5, binsize 0.5")
subplot(232)
plot(azr2,azav2,'x')
title("Center 4.5,4.5, binsize 1")
subplot(235)
plot(azr2b,azav2b,'x')
title("Center 4.5,4.5, binsize 0.5")
subplot(233)
plot(azr3,azav3,'x')
title("Center 4.43,4.53, binsize 1")
subplot(236)
plot(azr3b,azav3b,'x')
title("Center 4.43,4.53, binsize 0.5")
savefig("azimuthalaverage_test_small.png")
yy,xx = indices([100,100])
rr1 = hypot(xx-50,yy-50)
rr2 = hypot(xx-49.5,yy-49.5)
rr3 = hypot(xx-49.43,yy-49.53)
exp1 = exp(-(rr1**2)/(2.0*50**2))
exp2 = exp(-(rr2**2)/(2.0*50**2))
exp3 = exp(-(rr3**2)/(2.0*50**2))
exp1 /= exp1.max()
exp2 /= exp2.max()
exp3 /= exp3.max()
azr1,azav1 = azimuthalAverage(exp1,center=[50,50],binsize=1.0,returnradii=True)
azr2,azav2 = azimuthalAverage(exp2,center=[49.5,49.5],binsize=1.0,returnradii=True)
azr3,azav3 = azimuthalAverage(exp3,center=[49.43,49.53],binsize=1.0,returnradii=True)
azr1b,azav1b = azimuthalAverage(exp1,center=[50,50],binsize=0.5,returnradii=True)
azr2b,azav2b = azimuthalAverage(exp2,center=[49.5,49.5],binsize=0.5,returnradii=True)
azr3b,azav3b = azimuthalAverage(exp3,center=[49.43,49.53],binsize=0.5,returnradii=True)
figure(1)
subplot(231)
plot(azr1,azav1,'x')
title("Center 50,50, binsize 1")
subplot(234)
plot(azr1b,azav1b,'x')
title("Center 50,50, binsize 0.5")
subplot(232)
plot(azr2,azav2,'x')
title("Center 49.5,49.5, binsize 1")
subplot(235)
plot(azr2b,azav2b,'x')
title("Center 49.5,49.5, binsize 0.5")
subplot(233)
plot(azr3,azav3,'x')
title("Center 49.43,49.53, binsize 1")
subplot(236)
plot(azr3b,azav3b,'x')
title("Center 49.43,49.53, binsize 0.5")
savefig("azimuthalaverage_test.png")
azr1,azav1 = azimuthalAverage(exp1,center=[50,50],binsize=1.0,steps=True)
azr2,azav2 = azimuthalAverage(exp2,center=[49.5,49.5],binsize=1.0,steps=True)
azr3,azav3 = azimuthalAverage(exp3,center=[49.43,49.53],binsize=1.0,steps=True)
azr1b,azav1b = azimuthalAverage(exp1,center=[50,50],binsize=0.5,steps=True)
azr2b,azav2b = azimuthalAverage(exp2,center=[49.5,49.5],binsize=0.5,steps=True)
azr3b,azav3b = azimuthalAverage(exp3,center=[49.43,49.53],binsize=0.5,steps=True)
figure(3)
subplot(231)
plot(azr1,azav1)
title("Center 50,50, binsize 1")
subplot(234)
plot(azr1b,azav1b)
title("Center 50,50, binsize 0.5")
subplot(232)
plot(azr2,azav2)
title("Center 49.5,49.5, binsize 1")
subplot(235)
plot(azr2b,azav2b)
title("Center 49.5,49.5, binsize 0.5")
subplot(233)
plot(azr3,azav3)
title("Center 49.43,49.53, binsize 1")
subplot(236)
plot(azr3b,azav3b)
title("Center 49.43,49.53, binsize 0.5")
savefig("azimuthalaverage_test_steps.png")
#import pdb; pdb.set_trace()
| 29.822034 | 87 | 0.714976 | 635 | 3,519 | 3.952756 | 0.100787 | 0.016733 | 0.064542 | 0.031873 | 0.911155 | 0.884861 | 0.881673 | 0.81992 | 0.776494 | 0.776494 | 0 | 0.154718 | 0.072464 | 3,519 | 117 | 88 | 30.076923 | 0.614277 | 0.007673 | 0 | 0.48 | 0 | 0 | 0.165855 | 0.024921 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02 | 0 | 0.02 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8b70ac4f33751f84874946aa530be5168c5949a1 | 45 | py | Python | bqskit/bqskit/__init__.py | BQSKit/qfast | 06df0c7439ae096af2d1fa3e97b44512618f5e4a | [
"BSD-3-Clause-LBNL"
] | 12 | 2020-09-23T17:43:17.000Z | 2022-01-17T18:23:11.000Z | bqskit/bqskit/__init__.py | edyounis/qfast | 06df0c7439ae096af2d1fa3e97b44512618f5e4a | [
"BSD-3-Clause-LBNL"
] | 3 | 2020-09-26T00:46:55.000Z | 2021-03-15T17:52:54.000Z | bqskit/bqskit/__init__.py | BQSKit/qfast | 06df0c7439ae096af2d1fa3e97b44512618f5e4a | [
"BSD-3-Clause-LBNL"
] | 2 | 2021-05-31T05:29:20.000Z | 2021-12-06T13:18:22.000Z | from .synthesis import synthesize_for_qiskit
| 22.5 | 44 | 0.888889 | 6 | 45 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b9f66c1c8e1de986f135895af1b300bc372c19d | 5,197 | py | Python | tests/python/test_model.py | aTrotier/sycomore | 32e438d3a90ca0a9d051bb6acff461e06079116d | [
"MIT"
] | 14 | 2019-11-06T09:23:09.000Z | 2022-01-11T19:08:36.000Z | tests/python/test_model.py | aTrotier/sycomore | 32e438d3a90ca0a9d051bb6acff461e06079116d | [
"MIT"
] | 2 | 2020-12-01T15:48:27.000Z | 2020-12-04T15:19:37.000Z | tests/python/test_model.py | aTrotier/sycomore | 32e438d3a90ca0a9d051bb6acff461e06079116d | [
"MIT"
] | 2 | 2020-08-12T04:36:36.000Z | 2021-05-27T13:17:34.000Z | import math
import unittest
import sycomore
from sycomore.units import *
class TestModel(unittest.TestCase):
def test_pulse(self):
model = sycomore.como.Model(
sycomore.Species(1*s, 0.1*s),
sycomore.Magnetization(0, 0, 1),
[["dummy", sycomore.TimeInterval(0*s)]])
model.apply_pulse(sycomore.Pulse(41*deg, 27*deg))
grid = model.magnetization()
for index, _ in sycomore.GridScanner(grid.origin(), grid.shape()):
if index == sycomore.Index(0):
self.assertAlmostEqual(
grid[index].p , 0.210607912662250-0.413341301933443j)
self.assertAlmostEqual(grid[index].z, 0.754709580222772)
self.assertAlmostEqual(
grid[index].m, 0.210607912662250+0.413341301933443j)
else:
self.assertEqual(grid[index].p, 0)
self.assertAlmostEqual(grid[index].z, 0)
self.assertAlmostEqual(grid[index].m, 0)
def test_time_interval(self):
model = sycomore.como.Model(
sycomore.Species(math.log(2)*Hz, math.log(2)*Hz),
sycomore.Magnetization(0, 0, 1), [
["foo", sycomore.TimeInterval(1*s)],
["bar", sycomore.TimeInterval(1*s)]])
model.apply_pulse(sycomore.Pulse(45*deg, 90*deg))
model.apply_time_interval("foo")
grid = model.magnetization()
for index, _ in sycomore.GridScanner(grid.origin(), grid.shape()):
if index == sycomore.Index(-1, 0):
self.assertEqual(grid[index].p, 0)
self.assertEqual(grid[index].z, 0)
self.assertAlmostEqual(grid[index].m, 0.25)
elif index == sycomore.Index(0, 0):
self.assertEqual(grid[index].p, 0)
self.assertEqual(grid[index].z, 0.5*(1+math.sqrt(2)/2))
self.assertEqual(grid[index].m, 0)
elif index == sycomore.Index(1, 0):
self.assertAlmostEqual(grid[index].p, 0.25)
self.assertEqual(grid[index].z, 0)
self.assertEqual(grid[index].m, 0)
else:
self.assertEqual(grid[index].p , 0)
self.assertAlmostEqual(grid[index].z, 0)
self.assertAlmostEqual(grid[index].m, 0)
model.apply_time_interval("bar")
grid = model.magnetization()
for index, _ in sycomore.GridScanner(grid.origin(), grid.shape()):
if index == sycomore.Index(-1, -1):
self.assertEqual(grid[index].p, 0)
self.assertEqual(grid[index].z, 0)
self.assertAlmostEqual(grid[index].m, 0.125)
elif index == sycomore.Index(0, 0):
self.assertEqual(grid[index].p, 0)
self.assertEqual(grid[index].z, 0.5+0.25*(1+math.sqrt(2)/2))
self.assertEqual(grid[index].m, 0)
elif index == sycomore.Index(1, 1):
self.assertAlmostEqual(grid[index].p, 0.125)
self.assertEqual(grid[index].z, 0)
self.assertEqual(grid[index].m, 0)
else:
self.assertEqual(grid[index].p , 0)
self.assertAlmostEqual(grid[index].z, 0)
self.assertAlmostEqual(grid[index].m, 0)
isochromat = model.isochromat()
self.assertAlmostEqual(isochromat[0], 0.125*math.sqrt(2))
self.assertAlmostEqual(isochromat[1], 0)
self.assertAlmostEqual(isochromat[2], 0.5+0.25*(1+math.sqrt(2)/2))
isochromat = model.isochromat(
{sycomore.Index(0,0), sycomore.Index(-1, -1)})
self.assertAlmostEqual(isochromat[0], 0.125*math.sqrt(2)/2)
self.assertAlmostEqual(isochromat[1], 0)
self.assertAlmostEqual(isochromat[2], 0.5+0.25*(1+math.sqrt(2)/2))
def test_diffusion(self):
model = sycomore.como.Model(
sycomore.Species(0*Hz, 0*Hz, 1*um*um/ms),
sycomore.Magnetization(0, 0, 1), [
["foo", sycomore.TimeInterval(500*ms, 0.1*rad/um)]])
model.apply_pulse(sycomore.Pulse(40*deg, 0*deg))
model.apply_time_interval("foo")
grid = model.magnetization()
for index, _ in sycomore.GridScanner(grid.origin(), grid.shape()):
if index == sycomore.Index(-1):
self.assertEqual(grid[index].p, 0)
self.assertEqual(grid[index].z, 0)
self.assertAlmostEqual(grid[index].m, 0+0.003062528150606j)
elif index == sycomore.Index(0):
self.assertEqual(grid[index].p, 0)
self.assertAlmostEqual(grid[index].z, 0.766044443118978)
self.assertEqual(grid[index].m, 0)
elif index == sycomore.Index(1):
self.assertAlmostEqual(grid[index].p, 0-0.003062528150606j)
self.assertEqual(grid[index].z, 0)
self.assertEqual(grid[index].m, 0)
else:
self.assertEqual(grid[index].p , 0)
self.assertAlmostEqual(grid[index].z, 0)
self.assertAlmostEqual(grid[index].m, 0)
if __name__ == "__main__":
unittest.main()
| 42.598361 | 76 | 0.565711 | 611 | 5,197 | 4.770867 | 0.111293 | 0.129674 | 0.156432 | 0.197599 | 0.824014 | 0.798628 | 0.75163 | 0.664494 | 0.631904 | 0.600343 | 0 | 0.075646 | 0.292861 | 5,197 | 121 | 77 | 42.950413 | 0.717551 | 0 | 0 | 0.548077 | 0 | 0 | 0.005965 | 0 | 0 | 0 | 0 | 0 | 0.461538 | 1 | 0.028846 | false | 0 | 0.038462 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: func/python/bench_nbody.py (repo: jchesterpivotal/Faasm, license: Apache-2.0)
from pyperformance.benchmarks.bm_nbody import bench_nbody, DEFAULT_REFERENCE, DEFAULT_ITERATIONS
if __name__ == "__main__":
bench_nbody(10, DEFAULT_REFERENCE, DEFAULT_ITERATIONS)
# File: src/genie/libs/parser/junos/tests/ShowTedDatabaseExtensive/cli/equal/golden_output_expected.py (repo: balmasea/genieparser, license: Apache-2.0)
expected_output = {
"isis_nodes": 0,
"inet_nodes": 6,
"node": {
"10.4.1.1": {
"type": "Rtr",
"age": 1024,
"link_in": 0,
"link_out": 1,
"protocol": {
"OSPF(0.0.0.4)": {
"to": {
"172.16.1.1": {
"local": {
"10.4.0.2": {
"remote": {
"10.4.0.1": {
"local_interface_index": 0,
"remote_interface_index": 0,
"color": "0 <none>",
"metric": 1,
"static_bw": "2000Mbps",
"reservable_bw": "0bps",
"available_bw": {
0: {"bw": "10bps"},
1: {"bw": "10bps"},
2: {"bw": "0bps"},
3: {"bw": "0bps"},
4: {"bw": "10bps"},
5: {"bw": "0bps"},
6: {"bw": "0bps"},
7: {"bw": "0bps"},
},
"interface_switching_capability_descriptor": {
"1": {
"switching_type": "Packet",
"encoding_type": "Packet",
"maximum_lsp_bw": {
0: {"bw": "0bps"},
1: {"bw": "0bps"},
2: {"bw": "0bps"},
3: {"bw": "0bps"},
4: {"bw": "0bps"},
5: {"bw": "0bps"},
6: {"bw": "0bps"},
7: {"bw": "0bps"},
},
}
},
"p2p_adj_sid": {
"sid": {
"12345": {
"address_family": "IPV4",
"flags": "0x24",
"weight": 0,
}
}
},
}
}
}
}
}
},
"prefixes": {
"10.4.1.1/32": {
"flags": "0x60",
"prefix_sid": {1234: {"flags": "0x00", "algo": 0}},
}
},
"spring_capabilities": {
"srgb_block": {"start": 12000, "range": 3000, "flags": "0x00"}
},
"spring_algorithms": ["0", "1"],
}
},
},
"10.16.2.2-1": {
"type": "Net",
"age": 1024,
"link_in": 0,
"link_out": 2,
"protocol": {
"OSPF(0.0.0.4)": {
"to": {
"10.16.2.34": {
"local": {
"0.0.0.0": {
"remote": {
"0.0.0.0": {
"local_interface_index": 0,
"remote_interface_index": 0,
"metric": 0,
"interface_switching_capability_descriptor": {
"1": {
"switching_type": "Packet",
"encoding_type": "Packet",
"maximum_lsp_bw": {
0: {"bw": "0bps"},
1: {"bw": "0bps"},
2: {"bw": "0bps"},
3: {"bw": "0bps"},
4: {"bw": "0bps"},
5: {"bw": "0bps"},
6: {"bw": "1000bps"},
7: {"bw": "0bps"},
},
}
},
}
}
}
}
},
"10.16.2.42": {
"local": {
"0.0.0.0": {
"remote": {
"0.0.0.0": {
"local_interface_index": 0,
"remote_interface_index": 0,
"metric": 0,
"interface_switching_capability_descriptor": {
"1": {
"switching_type": "Packet",
"encoding_type": "Packet",
"maximum_lsp_bw": {
0: {"bw": "0bps"},
1: {"bw": "0bps"},
2: {"bw": "0bps"},
3: {"bw": "0bps"},
4: {"bw": "0bps"},
5: {"bw": "0bps"},
6: {"bw": "0bps"},
7: {"bw": "0bps"},
},
}
},
}
}
}
}
},
}
}
},
},
"172.16.1.4-1": {
"type": "Net",
"age": 2048,
"link_in": 0,
"link_out": 2,
"protocol": {
"OSPF(0.0.0.4)": {
"to": {
"172.16.85.48": {
"local": {
"0.0.0.0": {
"remote": {
"0.0.0.0": {
"local_interface_index": 0,
"remote_interface_index": 0,
"metric": 0,
"interface_switching_capability_descriptor": {
"1": {
"switching_type": "Packet",
"encoding_type": "Packet",
"maximum_lsp_bw": {
0: {"bw": "0bps"},
1: {"bw": "0bps"},
2: {"bw": "0bps"},
3: {"bw": "0bps"},
4: {"bw": "0bps"},
5: {"bw": "0bps"},
6: {"bw": "0bps"},
7: {"bw": "0bps"},
},
}
},
}
}
}
}
},
"172.16.85.52": {
"local": {
"0.0.0.0": {
"remote": {
"0.0.0.0": {
"local_interface_index": 0,
"remote_interface_index": 0,
"metric": 0,
"interface_switching_capability_descriptor": {
"1": {
"switching_type": "Packet",
"encoding_type": "Packet",
"maximum_lsp_bw": {
0: {"bw": "0bps"},
1: {"bw": "0bps"},
2: {"bw": "0bps"},
3: {"bw": "0bps"},
4: {"bw": "0bps"},
5: {"bw": "0bps"},
6: {"bw": "0bps"},
7: {"bw": "0bps"},
},
}
},
}
}
}
}
},
}
}
},
},
"10.36.3.3": {"type": "---", "age": 3440, "link_in": 1, "link_out": 0},
"10.64.4.4": {"type": "---", "age": 2560, "link_in": 1, "link_out": 0},
},
}
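A hedged sketch of how a golden-output fixture like `expected_output` is typically consumed: run the parser on raw device output, then deep-compare the result against the stored dict. `parse_ted_database` and `golden` below are hypothetical stand-ins for illustration only, not the real ShowTedDatabaseExtensive parser class.

```python
# Minimal golden-output comparison pattern (illustrative only).
golden = {'isis_nodes': 0, 'inet_nodes': 6}

def parse_ted_database(raw_output):
    # A real parser would regex-match `raw_output` line by line; this
    # stand-in just returns the expected top-level structure.
    return {'isis_nodes': 0, 'inet_nodes': 6}

# Dict equality recurses into nested structures, so one assert covers
# the whole fixture.
assert parse_ted_database('show ted database extensive') == golden
```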
# File: stride/config.py (repo: hasadna/open-bus-stride-client, license: MIT)
import os
STRIDE_API_BASE_URL = (os.environ.get('STRIDE_API_BASE_URL') or 'https://open-bus-stride-api.hasadna.org.il').rstrip('/')
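The line above is an instance of the env-var-with-fallback pattern: read a base URL from the environment, fall back to a hard-coded public default, and strip any trailing slash so later path joins behave predictably. A minimal sketch of the same pattern; `resolve_base_url` is a hypothetical helper, not part of the stride package:

```python
import os

def resolve_base_url(env=os.environ,
                     default='https://open-bus-stride-api.hasadna.org.il'):
    # `or` (rather than a plain .get default) also replaces an empty string;
    # rstrip('/') normalizes away a trailing slash if one was set.
    return (env.get('STRIDE_API_BASE_URL') or default).rstrip('/')
```

Passing the environment mapping as a parameter keeps the helper testable without mutating `os.environ`.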
# File: dash_labs/plugins/__init__.py (repo: johnkangw/dash-labs, license: MIT)
from .pages import page_container
# File: hsvpicker/__init__.py (repo: Karthikprabuvetrivel/hsvpicker, license: MIT)
from hsvpicker.hsv import HSV
# File: tests.py (repo: DoggieLicc/discord-webhooks, license: MIT)
import unittest
import json
from discord_webhooks import DiscordWebhooks
class BaseTest(unittest.TestCase):
def test_standard_message(self):
"""
Tests a standard message payload with nothing but content.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.set_content(content='Montezuma')
expected_payload = {
'content': 'Montezuma',
'embeds': [
{
'fields': [],
'image': {},
'author': {},
'thumbnail': {},
'footer': {},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
def test_generic_embed_message(self):
"""
Tests a generic message payload.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.set_content(content='Montezuma', title='Best Cat Ever', description='Seriously', \
url='http://github.com/JamesIves', color=0xF58CBA, timestamp='2018-11-09T04:10:42.039Z')
expected_payload = \
{
'content': 'Montezuma',
'embeds': [
{
'title': 'Best Cat Ever',
'description': 'Seriously',
'url': 'http://github.com/JamesIves',
'color': 16092346,
'timestamp': '2018-11-09T04:10:42.039Z',
'fields': [],
'image': {},
'author': {},
'thumbnail': {},
'footer': {},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
def test_set_image(self):
"""
Tests the set_image method and ensures the data gets added to the payload.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.set_content(content='Montezuma')
webhook.set_image(url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
expected_payload = \
{
'content': 'Montezuma',
'embeds': [
{
'fields': [],
'image': {
'url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
'author': {},
'thumbnail': {},
'footer': {},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
def test_set_thumbnail(self):
"""
Tests the set_thumbnail method and ensures the data gets added to the payload.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.set_content(content='Montezuma')
webhook.set_thumbnail(url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
expected_payload = \
{
'content': 'Montezuma',
'embeds': [
{
'fields': [],
'image': {},
'author': {},
'thumbnail': {
'url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
'footer': {},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
def test_set_author(self):
"""
Tests the set_author method and ensures the data gets added to the payload.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.set_content(content='Montezuma')
webhook.set_author(name='James Ives', url='https://jamesiv.es', icon_url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
expected_payload = \
{
'content': 'Montezuma',
'embeds': [
{
'fields': [],
'image': {},
'author': {
'name': 'James Ives',
'url': 'https://jamesiv.es',
'icon_url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
'thumbnail': {},
'footer': {},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
def test_set_footer(self):
"""
Tests the set_footer method and ensures the data gets added to the payload.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.set_footer(text='Footer', icon_url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
expected_payload = \
{
'embeds': [
{
'fields': [],
'image': {},
'author': {},
'thumbnail': {},
'footer': {
'text': 'Footer',
'icon_url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
def test_add_field(self):
"""
Tests the set_field method and ensures the data gets added to the payload.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.add_field(name='Field1', value='Value1', inline=True)
webhook.add_field(name='Field2', value='Value2', inline=True)
webhook.add_field(name='Field3', value='Value3', inline=False)
# Inline should default to false
webhook.add_field(name='Field4', value='Value4')
expected_payload = \
{
'embeds': [
{
'fields': [
{
'name': 'Field1',
'value': 'Value1',
'inline': True
},
{
'name': 'Field2',
'value': 'Value2',
'inline': True
},
{
'name': 'Field3',
'value': 'Value3',
'inline': False
},
{
'name': 'Field4',
'value': 'Value4',
'inline': False
},
],
'image': {},
'author': {},
'thumbnail': {},
'footer': {},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
def test_complex_embed(self):
"""
Tests a combination of all methods to form a complex payload object.
"""
webhook = DiscordWebhooks('webhook_url')
webhook.set_content(content='Montezuma', title='Best Cat Ever', description='Seriously', \
url='http://github.com/JamesIves', color=0xF58CBA, timestamp='2018-11-09T04:10:42.039Z')
webhook.set_image(url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
webhook.set_thumbnail(url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
webhook.set_author(name='James Ives', url='https://jamesiv.es', icon_url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
webhook.set_footer(text='Footer', icon_url='https://avatars1.githubusercontent.com/u/10888441?s=460&v=4')
webhook.add_field(name='Field', value='Value!')
self.maxDiff = None
expected_payload = \
{
'content': 'Montezuma',
'embeds': [
{
'title': 'Best Cat Ever',
'description': 'Seriously',
'url': 'http://github.com/JamesIves',
'color': 16092346,
'timestamp': '2018-11-09T04:10:42.039Z',
'fields': [
{
'name': 'Field',
'value': 'Value!',
'inline': False
}
],
'image': {
'url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
'author': {
'name': 'James Ives',
'url': 'https://jamesiv.es',
'icon_url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
'thumbnail': {
'url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
'footer': {
'text': 'Footer',
'icon_url': 'https://avatars1.githubusercontent.com/u/10888441?s=460&v=4'
},
}
]
}
self.assertEqual(webhook.format_payload(), expected_payload)
if __name__ == '__main__':
unittest.main()
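Every expected payload in the tests above shares one shape: an `embeds` list holding a single dict with fixed empty sections (`fields`, `image`, `author`, `thumbnail`, `footer`), plus an optional top-level `content` key. A hedged sketch of that structure; `empty_embed` and `format_payload` here are hypothetical stand-ins, not the library's actual implementation:

```python
def empty_embed():
    # Skeleton embed matching the shape every expected_payload above contains.
    return {'fields': [], 'image': {}, 'author': {}, 'thumbnail': {}, 'footer': {}}

def format_payload(content=None, **embed_parts):
    embed = empty_embed()
    embed.update(embed_parts)          # e.g. title=..., description=...
    payload = {'embeds': [embed]}
    if content is not None:
        payload['content'] = content   # 'content' only appears when set
    return payload
```

Starting from a fixed skeleton and layering optional parts on top is what lets the tests compare full payloads with a single equality assertion.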
# File: star-wars-analysis/match_csv_yarn/testing_stuff.py (repo: GeneralMisquoti/star-wars-prequels-dialogues, license: MIT)
from match_csv_yarn import split_row
print(split_row("Yes, Master. How do you think this trade viceroy will deal with the chancellor's demands?"))
# File: datastore/writer/postgresql_backend/__init__.py (repo: FinnStutzenstein/openslides-datastore-service, license: MIT)
from .sql_database_backend_service import SqlDatabaseBackendService  # noqa
from .sql_occ_locker_backend_service import SqlOccLockerBackendService # noqa
# File: password_security/tests/__init__.py (repo: juazisco/gestion_rifa, license: MIT)
# Copyright 2015 LasLabs Inc.
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html).
from . import test_res_users # noqa
from . import test_password_security_home # noqa
from . import test_password_security_session # noqa
# File: tests/graphs.py (repo: gcomte/lndmanage, license: MIT)
# TODO: place here all graphs for testing
# File: sentry_plugin/__init__.py (repo: gsmadi/trinity-sentry-plugin, license: MIT)
from .plugin import SentryPlugin
# File: run_as_admin.py (repo: Hazmatt101/galaga-killer, license: MIT)
from src.scripts.python import clean_build_game
from src.scripts.python.mame_runner_util import MameRunnerUtil
MameRunnerUtil.get_admin()
clean_build_game.initialize()
# File: image-inpainting/__main__.py (repo: ameli/image-inpainting, license: MIT)
print('image inpainting!')
# File: app/lib/__init__.py (repo: ChrisChou-freeman/aqi_app, license: MIT)
from . import net
request = net.request
net.a_request = net.a_request
# File: Mini-Project-III/Script/Neural Networks Mini Project-II.py (repo: cankocagil/Neural-Networks, license: MIT)
# Necessary imports :
import numpy as np
import matplotlib.pyplot as plt
import h5py
# %%
# Necessary imports :
#import numpy as np
#import matplotlib.pyplot as plt
#import h5py
import math
import pandas as pd
import seaborn as sns
import sys
question = input('Please enter question number 1/3 :')
def can_kocagil_21602218_hw3(question):
if question == '1' :
# To add a new cell, type '# %%'
# To add a new markdown cell, type '# %% [markdown]'
# %%
# %%
def get_data(path) -> dict :
"""
Given the path of the dataset, return the
image patches together with the transform
matrix (xForm) and its inverse (invXForm).
"""
with h5py.File(path,'r') as F:
# Names variable contains the names of training and testing file
names = list(F.keys())
data = np.array(F[names[0]][()])
invXForm = np.array(F[names[1]][()])
xForm = np.array(F[names[2]][()])
return {'data' : data,
'invXForm': invXForm,
'xForm' : xForm}
path = 'assign3_data1.h5'
data_h5 = get_data(path)
# %%
data = data_h5['data']
invXForm = data_h5['invXForm']
xForm = data_h5['xForm']
# %%
print(f'The data has a shape: {data.shape}')
# %%
data = np.swapaxes(data,1,3)
# %%
print(f'The data has a shape: {data.shape}')
# %%
class ImagePreprocessing:
"""
_____Image preprocessor_____
Functions :
--- ToGray(data)
-Takes an input image then converts to gray scale by Luminosity Model
--- MeanRemoval(data)
-Subtracting each image's own mean from it
--- ClipStd(data)
- Clipping the input image within given condition
--- Normalize(data,min_scale,max_scale)
- Normalizing input image to [min_scale,max_scale]
--- Flatten(data)
- Flattening input image
"""
def __init__(self):
pass
def ToGray(self,data):
"""
Given the input image converting gray scale according to luminosity model
"""
R_linear = 0.2126
G_linear = 0.7152
B_linear = 0.0722
gray_data = (data[:,:,:,0] * R_linear) + (data[:,:,:,1] * G_linear) + (data[:,:,:,2] * B_linear)
return gray_data
def MeanRemoval(self,data):
"""
Given the input images, subtracting each image's mean pixel intensity from it
"""
axis = (1,2)
mean_pixel = np.mean(data,axis = axis)
num_samples = data.shape[0]
# Subtracting the mean of each image separately :
for i in range(num_samples):
data[i] -= mean_pixel[i]
return data
def ClipStd(self,data,std_scaler):
"""
Given the data and a standard-deviation scaler,
return clipped data
"""
std_pixel = np.std(data)
min_cond = - std_scaler * std_pixel
max_cond = std_scaler * std_pixel
clipped_data = np.clip(data,min_cond,max_cond)
return clipped_data
def Normalize(self,data,min_scale,max_scale):
"""
Given the data, normalize to given interval [min_val,max_val]
"""
min = data.min()
max = data.max()
# First normalize in [0,1]
norm_data = (data - min) / (max-min)
# Normalizing in [min_scale,max_scale]
range = max_scale - min_scale
interval_scaled_data = (norm_data * range) + min_scale
return interval_scaled_data
def Flatten(self,data):
"""
Given the input image data returning flattened version of the data
"""
num_samples = data.shape[0]
flatten = data.reshape(num_samples,-1)
return flatten
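As a quick sanity check on the `ToGray` method above: its channel weights (0.2126, 0.7152, 0.0722) are the Rec. 709 luma coefficients and sum to 1, so a uniform RGB pixel keeps its intensity after conversion. A small illustrative example (not part of the assignment code):

```python
import numpy as np

weights = np.array([0.2126, 0.7152, 0.0722])
rgb = np.full((2, 2, 3), 0.5)   # tiny 2x2 "image", every channel at 0.5
gray = rgb @ weights            # same weighted channel sum as ToGray, vectorized
```

Because the weights sum to 1, every pixel of `gray` stays at 0.5.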
# %%
# Defining preprocessor :
preprocessor = ImagePreprocessing()
# %%
# Converting gray scale :
gray_data = preprocessor.ToGray(data = data)
# %%
# Mean removing :
mean_removed_data = preprocessor.MeanRemoval(data = gray_data)
# %%
# Standard deviation clipping :
clipped_data = preprocessor.ClipStd(data = mean_removed_data,std_scaler = 3)
# %%
# Normalized data
data_processed = preprocessor.Normalize(data = clipped_data, min_scale = 0.1, max_scale = 0.9)
# %%
print(f' Maximum val of data : {data_processed.max()}')
print(f' Minimum val of data : {data_processed.min()}')
# %%
def plot_patches(data,num_patches, cmap = 'viridis'):
np.random.seed(15)
num_samples = data.shape[0]
random_indexes = np.random.randint(num_samples,size = num_patches)
plt.figure(figsize = (18,16))
for i in range(num_patches):
plt.subplot(20,20,i+1)
plt.imshow(data[random_indexes[i]],cmap = cmap)
plt.axis('off')
plt.show()
# %%
plot_patches(preprocessor.Normalize(data = data, min_scale = 0, max_scale = 1),num_patches = 200)
# %%
#plot_patches(data,num_patches = 200)
# %%
plot_patches(data_processed,num_patches = 200, cmap = 'gray')
# %%
class Autoencoder:
"""
____Autoencoder____
Functions :
--- __init__(input_size,hidden_size)
- Building overall architecture of the model
--- InitParams(input_size,hidden_size)
- Initializing configurable parameters
--- aeCost(data)
- Calculating cost and its derivatives
--- forward(X)
- Forward pass
--- backward(outs,data)
- Calculation of gradients w.r.t. the loss function
--- KL_divergence(rho,beta,expected,grad)
- Calculating KL divergence and its gradients
--- TykhonowRegularization(W1,W2,lambd,grad)
- Computing the Tykhonow regularization term and its gradient
--- sigmoid(X, grad)
- Computing the sigmoidal activation and its gradients
--- fit(data,epochs,verbose)
- Training the model with full-batch gradient descent
--- evaluate() / display_weights()
- Plotting the training loss and the learned features
"""
def __init__(self,input_size,hidden_size,lambd):
"""
Construction of the architecture of the autoencoder
"""
np.random.seed(1500)
self.lambd = lambd
self.beta = 1e-1
self.rho = 5e-2
self.learning_rate = 9e-1
self.params = {'L_in' : input_size,
'L_hidden' : hidden_size,
'Lambda' : self.lambd,
'Beta' : self.beta,
'Rho' : self.rho}
self.W_e = self.InitParams(input_size,hidden_size)
self.loss = []
def InitParams(self,input_size,hidden_size):
"""
Given the size of the input node and hidden node, initialize the weights
drawn from uniform distribution ~ Uniform[- sqrt(6/(L_pre + L_post)) , sqrt(6/(L_pre + L_post))]
"""
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = input_size
W1_high = self.w_o(input_size,hidden_size)
W1_low = - W1_high
W1_size = (input_size,hidden_size)
self.W1 = np.random.uniform(W1_low,W1_high,size = W1_size)
B1_size = (1,hidden_size)
self.B1 = np.random.uniform(W1_low,W1_high,size = B1_size)
W2_high = self.w_o(hidden_size,self.output_size)
W2_low = - W2_high
W2_size = (hidden_size,self.output_size)
self.W2 = np.random.uniform(W2_low,W2_high,size = W2_size)
B2_size = (1,self.output_size)
self.B2 = np.random.uniform(W1_low,W1_high,size = B2_size)
return {'W1' : self.W1,
'W2' : self.W2,
'B1' : self.B1,
'B2' : self.B2}
def w_o(self,L_pre,L_post):
return np.sqrt(6/(L_pre + L_post))
def sigmoid(self,X, grad = True):
"""
Computing sigmoid and its gradient w.r.t. its input
"""
sig = 1/(1 + np.exp(-X))
return sig * (1-sig) if grad else sig
def forward(self,X):
"""
Forward propagation
"""
W1 = self.W_e['W1']
W2 = self.W_e['W2']
B1 = self.W_e['B1']
B2 = self.W_e['B2']
Z1 = np.dot(X,W1) + B1
A1 = self.sigmoid(Z1,grad = False)
Z2 = np.dot(A1,W2) + B2
A2 = self.sigmoid(Z2,grad = False)
return {"Z1": Z1,"A1": A1,
"Z2": Z2,"A2": A2}
def total_loss(self,outs,label):
W1 = self.W_e['W1']
W2 = self.W_e['W2']
Lambda = self.params['Lambda']
beta = self.params['Beta']
rho = self.params['Rho']
J_mse = self.MSE(outs['A2'],label, grad = False)
J_tykhonow = self.TykhonowRegularization(W1 = W1, W2 = W2,lambd = Lambda, grad = False)
J_KL = self.KL_divergence(rho = rho,expected = np.mean(outs['A1']), beta = beta, grad = False)
return J_mse + J_tykhonow + J_KL
def MSE(self,pred,label, grad = True):
"""
Calculating Mean Squared Error and its gradient w.r.t. output
"""
return 1/2 * (pred - label) if grad else 1/2 * np.sum((pred - label)**2)/pred.shape[0]
def aeCost(self,data):
outs = self.forward(data)
loss = self.total_loss(outs,data)
grads = self.backward(outs,data)
return {'J' : loss,
'J_grad' : grads}
def KL_divergence(self,rho,beta,expected,grad = True):
"""
Computing KL-divergence and its gradients; note that the gradient is only for W1
"""
# Note: reps is hard-coded to the number of training samples (10240) so the penalty broadcasts over the batch
return np.tile(beta * (-(rho/expected) + (1-rho)/(1-expected) ), reps = (10240,1)) if grad else beta * (np.sum((rho * np.log(rho/expected)) + ((1-rho)*np.log((1-rho)/(1-expected)))))
def TykhonowRegularization(self,W1,W2,lambd,grad = True):
"""
Computing the L2-based (Tykhonow) regularization term and its gradients
"""
return {'dW1': lambd * W1, 'dW2': lambd * W2} if grad else (lambd/2) * (np.sum(W1**2) + np.sum(W2**2))
def backward(self,outs,data):
"""
Given the forward-pass outputs and the input data,
returns the gradients of the total loss w.r.t. the configurable parameters
"""
m = data.shape[0]
Lambda = self.params['Lambda']
beta = self.params['Beta']
rho = self.params['Rho']
W1 = self.W_e['W1']
W2 = self.W_e['W2']
B1 = self.W_e['B1']
B2 = self.W_e['B2']
Z1 = outs['Z1']
A1 = outs['A1']
Z2 = outs['Z2']
A2 = outs['A2']
L2_grad = self.TykhonowRegularization(W1,W2,lambd = Lambda , grad = True)
KL_grad_W1 = self.KL_divergence(rho,beta,expected = np.mean(A1),grad = True)
dZ2 = self.MSE(A2,data, grad = True) * self.sigmoid(Z2, grad = True)
dW2 = (1/m) * (np.dot(A1.T,dZ2) + L2_grad['dW2'])
dB2 = (1/m) * (np.sum(dZ2, axis=0, keepdims=True))
dZ1 = (np.dot(dZ2,W2.T) + KL_grad_W1) * self.sigmoid(Z1,grad = True)
dW1 = (1/m) * (np.dot(data.T,dZ1) + L2_grad['dW1'])
dB1 = (1/m) * (np.sum(dZ1, axis=0, keepdims=True))
assert (dW1.shape == W1.shape and dW2.shape == W2.shape)
return {"dW1": dW1, "dW2": dW2,
"dB1": dB1, "dB2": dB2}
def fit(self,data,epochs = 5000,verbose = True):
"""
Given the training data (the inputs serve as their own targets) and the
number of epochs, fits the model and records the training loss
at every epoch.
"""
for epoch in range(epochs):
loss_and_grads = self.aeCost(data)
self.step(grads = loss_and_grads['J_grad'])
if verbose:
print(f"[{epoch}/{epochs}] ----------> Loss :{loss_and_grads['J']}")
self.loss.append(loss_and_grads['J'])
def step(self,grads):
"""
Updates the configurable parameters with a full-batch gradient-descent step and decays the learning rate
"""
self.W_e['W1'] += -self.learning_rate * grads['dW1']
self.W_e['W2'] += -self.learning_rate * grads['dW2']
self.W_e['B1'] += -self.learning_rate * grads['dB1']
self.W_e['B2'] += -self.learning_rate * grads['dB2']
self.learning_rate *= 0.9999
def evaluate(self):
plt.plot(self.loss, color = 'orange')
plt.xlabel(' # of Epochs')
plt.ylabel('Loss')
plt.title('Training Loss versus Epochs')
plt.legend([f'Loss : {self.loss[-1]}'])
def display_weights(self):
"""
Displays the learned weights as images (hidden-layer feature representation)
"""
W1 = self.W_e['W1']
num_disp = W1.shape[1]
fig = plt.figure(figsize = (9,8))
for i in range(num_disp):
plt.subplot(8,8,i+1)
plt.imshow(W1.T[i].reshape(16,16),cmap = 'gray')
plt.axis('off')
fig.suptitle('Hidden Layer Feature Representation')
plt.show()
def display_outputs(self,output,data,num = 4):
"""
Displaying outputs; please give only perfect-square values for num, i.e. 1, 4, 16, ...
"""
random_indexes = np.random.randint(output.shape[0],size = num)
plt.figure(figsize=(12, 4))
for i in range(len(random_indexes)):
ax = plt.subplot(2,5,i+1)
plt.imshow(output[random_indexes[i]].reshape(16,16),cmap = 'gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.title("Reconstructed Image")
#plt.axis('off')
ax = plt.subplot(2, 5, i + 1 + 5)
plt.imshow(data[random_indexes[i]].reshape(16,16),cmap = 'gray')
plt.title("Original Image")
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
def parameters(self):
"""
Returns configurable parameters
"""
return self.W_e
def history(self):
return {'Loss' : self.loss}
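The two penalty terms combined in `total_loss` have simple closed forms. As a hedged standalone sketch (helper names and toy arrays local to this example, not part of the class), they can be recomputed directly:

```python
import numpy as np

def l2_penalty(W1, W2, lambd):
    # Tykhonov term: (lambda / 2) * (||W1||_F^2 + ||W2||_F^2)
    return (lambd / 2) * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

def kl_penalty(rho, rho_hat, beta):
    # Sparsity term: beta * KL(rho || rho_hat) for Bernoulli means, summed
    return beta * np.sum(rho * np.log(rho / rho_hat)
                         + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

W_a = np.ones((2, 2))        # ||W_a||_F^2 = 4
W_b = 2 * np.ones((2, 2))    # ||W_b||_F^2 = 16
print(l2_penalty(W_a, W_b, lambd=0.1))   # 1.0
print(kl_penalty(0.5, 0.5, beta=0.1))    # 0.0 (penalty vanishes when rho_hat == rho)
```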
# %%
class Solver:
"""
A Solver encapsulates the logic necessary for training: it repeatedly computes
gradients and applies updates to minimize the cost via gradient descent.
"""
def __init__(self, model,data):
self.model = model
self.data = data
def train(self,epochs = 5000,verbose = False):
"""
Optimizes the model by iteratively minimizing the cost with gradient updates
"""
self.model.fit(self.data,epochs,verbose)
def parameters(self):
"""
Returns the configurable parameters of the network
"""
return self.model.parameters()
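`Solver.train` ultimately drives the decayed gradient-descent rule in `Autoencoder.step`. A hedged one-dimensional sketch of that rule (all names local, minimizing f(w) = w² rather than the autoencoder cost) shows it converging:

```python
w = 5.0          # parameter to optimize
lr = 0.1         # initial learning rate
for _ in range(200):
    grad = 2 * w         # gradient of f(w) = w^2
    w += -lr * grad      # same update sign convention as step()
    lr *= 0.9999         # multiplicative decay, as in step()
print(abs(w) < 1e-6)     # True: w has collapsed toward the minimum at 0
```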
# %%
data_feed = preprocessor.Flatten(data_processed)
input_size = data_feed.shape[1]
hidden_size = 64
autoencoder = Autoencoder(input_size = input_size, hidden_size = hidden_size,lambd = 5e-4)
# %%
solver = Solver(model = autoencoder, data = data_feed)
solver.train(verbose = True)
# %%
net_params = solver.parameters()
net_history = autoencoder.history()
# %%
autoencoder.evaluate()
# %%
autoencoder.display_weights()
# %%
preds = autoencoder.forward(data_feed)
autoencoder.display_outputs(preds['A2'],data_feed)
# %%
hidden_size_1 = 10
lambd_1 = 0
autoencoder_1 = Autoencoder(input_size = input_size, hidden_size = hidden_size_1, lambd = lambd_1)
solver_1 = Solver(model = autoencoder_1, data = data_feed)
solver_1.train()
autoencoder_1.evaluate()
autoencoder_1.display_weights()
preds_1 = autoencoder_1.forward(data_feed)
autoencoder_1.display_outputs(preds_1['A2'],data_feed)
# %%
hidden_size_2 = 10
lambd_2 = 1e-3
autoencoder_2 = Autoencoder(input_size = input_size, hidden_size = hidden_size_2, lambd = lambd_2)
solver_2 = Solver(model = autoencoder_2, data = data_feed)
solver_2.train()
autoencoder_2.evaluate()
autoencoder_2.display_weights()
preds_2 = autoencoder_2.forward(data_feed)
autoencoder_2.display_outputs(preds_2['A2'],data_feed)
# %%
hidden_size_3 = 10
lambd_3 = 1e-5
autoencoder_3 = Autoencoder(input_size = input_size, hidden_size = hidden_size_3, lambd = lambd_3)
solver_3 = Solver(model = autoencoder_3, data = data_feed)
solver_3.train()
autoencoder_3.evaluate()
autoencoder_3.display_weights()
preds_3 = autoencoder_3.forward(data_feed)
autoencoder_3.display_outputs(preds_3['A2'],data_feed)
# %%
hidden_size_4 = 50
lambd_4 = 0
autoencoder_4 = Autoencoder(input_size = input_size, hidden_size = hidden_size_4, lambd = lambd_4)
solver_4 = Solver(model = autoencoder_4, data = data_feed)
solver_4.train()
autoencoder_4.evaluate()
autoencoder_4.display_weights()
preds_4 = autoencoder_4.forward(data_feed)
autoencoder_4.display_outputs(preds_4['A2'],data_feed)
# %%
hidden_size_5 = 50
lambd_5 = 1e-3
autoencoder_5 = Autoencoder(input_size = input_size, hidden_size = hidden_size_5, lambd = lambd_5)
solver_5 = Solver(model = autoencoder_5, data = data_feed)
solver_5.train()
autoencoder_5.evaluate()
autoencoder_5.display_weights()
preds_5 = autoencoder_5.forward(data_feed)
autoencoder_5.display_outputs(preds_5['A2'],data_feed)
# %%
hidden_size_6 = 50
lambd_6 = 1e-5
autoencoder_6 = Autoencoder(input_size = input_size, hidden_size = hidden_size_6, lambd = lambd_6)
solver_6 = Solver(model = autoencoder_6, data = data_feed)
solver_6.train()
autoencoder_6.evaluate()
autoencoder_6.display_weights()
preds_6 = autoencoder_6.forward(data_feed)
# %%
autoencoder_6.display_outputs(preds_6['A2'],data_feed)
# %%
hidden_size_7 = 100
lambd_7 = 0
autoencoder_7 = Autoencoder(input_size = input_size, hidden_size = hidden_size_7, lambd = lambd_7)
solver_7 = Solver(model = autoencoder_7, data = data_feed)
solver_7.train()
autoencoder_7.evaluate()
#autoencoder_7.display_weights()
preds_7 = autoencoder_7.forward(data_feed)
autoencoder_7.display_outputs(preds_7['A2'],data_feed)
# %%
W1 = autoencoder_7.W_e['W1']
num_disp = W1.shape[1]
fig = plt.figure(figsize = (9,8))
for i in range(num_disp):
plt.subplot(10,10,i+1)
plt.imshow(W1.T[i].reshape(16,16),cmap = 'gray')
plt.axis('off')
fig.suptitle('Hidden Layer Feature Representation')
plt.show()
# %%
hidden_size_8 = 100
lambd_8 = 1e-3
autoencoder_8 = Autoencoder(input_size = input_size, hidden_size = hidden_size_8, lambd = lambd_8)
solver_8 = Solver(model = autoencoder_8, data = data_feed)
solver_8.train()
autoencoder_8.evaluate()
W1 = autoencoder_8.W_e['W1']
num_disp = W1.shape[1]
fig = plt.figure(figsize = (9,8))
for i in range(num_disp):
plt.subplot(10,10,i+1)
plt.imshow(W1.T[i].reshape(16,16),cmap = 'gray')
plt.axis('off')
fig.suptitle('Hidden Layer Feature Representation')
plt.show()
preds_8 = autoencoder_8.forward(data_feed)
autoencoder_8.display_outputs(preds_8['A2'],data_feed)
# %%
hidden_size_9 = 100
lambd_9 = 1e-5
autoencoder_9 = Autoencoder(input_size = input_size, hidden_size = hidden_size_9, lambd = lambd_9)
solver_9 = Solver(model = autoencoder_9, data = data_feed)
solver_9.train()
autoencoder_9.evaluate()
W1 = autoencoder_9.W_e['W1']
num_disp = W1.shape[1]
fig = plt.figure(figsize = (9,8))
for i in range(num_disp):
plt.subplot(10,10,i+1)
plt.imshow(W1.T[i].reshape(16,16),cmap = 'gray')
plt.axis('off')
fig.suptitle('Hidden Layer Feature Representation')
plt.show()
preds_9 = autoencoder_9.forward(data_feed)
# %%
autoencoder_9.display_outputs(preds_9['A2'],data_feed)
# %%
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow.keras.backend as K
# %%
if tf.test.gpu_device_name():
print('Default GPU Device:{}'.format(tf.test.gpu_device_name()))
else:
print("No GPU found, that's okay")
# %%
def MeanSquaredError():
def customMeanSquaredError(pred,label):
return 1/2 * K.sum((pred - label)**2)/pred.shape[0]
return customMeanSquaredError
def KL_divergence(rho, beta):
def customKL(out):
kl1 = rho*K.log(rho/K.mean(out, axis=0))
kl2 = (1-rho)*K.log((1-rho)/(1-K.mean(out, axis=0)))
return beta*K.sum(kl1+kl2)
return customKL
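For reference, the activity regularizer above can be cross-checked outside the TF graph. This is a hedged NumPy mirror of `customKL` (names local to this example), where `K.mean(out, axis=0)` corresponds to the per-unit mean activation:

```python
import numpy as np

def kl_numpy(out, rho, beta):
    rho_hat = np.mean(out, axis=0)   # mean activation per hidden unit
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return beta * np.sum(kl)

acts = np.full((4, 3), 0.5)                  # every unit already at rho
print(kl_numpy(acts, rho=0.5, beta=0.1))     # 0.0
print(kl_numpy(np.full((4, 3), 0.9), rho=0.5, beta=0.1) > 0)  # True
```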
# %%
def create_model(hidden_size,lambd):
tf_weights = tf_weight_initializer(inp_dim = inp_dim, hidden_dim = hidden_size)
input_img = keras.Input(shape=(inp_dim,))
encoded = layers.Dense(hidden_size,activation='sigmoid',
kernel_regularizer=tf.keras.regularizers.l2(lambd),
activity_regularizer=KL_divergence(rho,beta),
kernel_initializer = tf_weights['W1'],
bias_initializer = tf_weights['B1'])(input_img)
decoded = layers.Dense(inp_dim,activation='sigmoid',
activity_regularizer=tf.keras.regularizers.l2(lambd),
kernel_initializer = tf_weights['W2'],
bias_initializer = tf_weights['B2'])(encoded)
tf_autoencoder = keras.Model(input_img,decoded)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.9,momentum=0,nesterov=False)
tf_autoencoder.compile(optimizer=optimizer,loss=MeanSquaredError())
return tf_autoencoder
def plot_tf_weights(W1):
num_disp = W1.shape[1]
fig = plt.figure(figsize = (9,8))
for i in range(num_disp):
plt.subplot(10,10,i+1)
plt.imshow(W1.T[i].reshape(16,16),cmap = 'gray')
plt.axis('off')
fig.suptitle('Hidden Layer Feature Representation')
plt.show()
# %%
tf_model_1 = create_model(hidden_size = 10,lambd = 0)
tf_model_1.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_1 = tf_model_1.history.history
plt.plot(tf_history_1['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
#plt.legend([f"Loss : {tf_history_1['loss'][-1]}"])
tf_preds_1 = tf_model_1.predict(data_feed)
autoencoder.display_outputs(tf_preds_1,data_feed)
tf_weights_1 = tf_model_1.get_weights()
plot_tf_weights(tf_weights_1[0])
# %%
tf_model_2 = create_model(hidden_size = 10,lambd = 1e-3)
tf_model_2.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_2 = tf_model_2.history.history
plt.plot(tf_history_2['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_2 = tf_model_2.predict(data_feed)
autoencoder.display_outputs(tf_preds_2,data_feed)
tf_weights_2 = tf_model_2.get_weights()
plot_tf_weights(tf_weights_2[0])
# %%
tf_model_3 = create_model(hidden_size = 10,lambd = 1e-5)
tf_model_3.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_3 = tf_model_3.history.history
plt.plot(tf_history_3['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_3 = tf_model_3.predict(data_feed)
autoencoder.display_outputs(tf_preds_3,data_feed)
tf_weights_3 = tf_model_3.get_weights()
plot_tf_weights(tf_weights_3[0])
# %%
tf_model_4 = create_model(hidden_size = 50,lambd = 0)
tf_model_4.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_4 = tf_model_4.history.history
plt.plot(tf_history_4['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_4 = tf_model_4.predict(data_feed)
autoencoder.display_outputs(tf_preds_4,data_feed)
tf_weights_4 = tf_model_4.get_weights()
plot_tf_weights(tf_weights_4[0])
# %%
tf_model_5 = create_model(hidden_size = 50,lambd = 1e-3)
tf_model_5.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_5 = tf_model_5.history.history
plt.plot(tf_history_5['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_5 = tf_model_5.predict(data_feed)
autoencoder.display_outputs(tf_preds_5,data_feed)
tf_weights_5 = tf_model_5.get_weights()
plot_tf_weights(tf_weights_5[0])
# %%
tf_model_6 = create_model(hidden_size = 50,lambd = 1e-5)
tf_model_6.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_6 = tf_model_6.history.history
plt.plot(tf_history_6['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_6 = tf_model_6.predict(data_feed)
autoencoder.display_outputs(tf_preds_6,data_feed)
tf_weights_6 = tf_model_6.get_weights()
plot_tf_weights(tf_weights_6[0])
# %%
tf_model_7 = create_model(hidden_size = 100,lambd = 0)
tf_model_7.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_7 = tf_model_7.history.history
plt.plot(tf_history_7['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_7 = tf_model_7.predict(data_feed)
autoencoder.display_outputs(tf_preds_7,data_feed)
tf_weights_7 = tf_model_7.get_weights()
plot_tf_weights(tf_weights_7[0])
# %%
tf_model_8 = create_model(hidden_size = 100,lambd = 1e-3)
tf_model_8.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_8 = tf_model_8.history.history
plt.plot(tf_history_8['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_8 = tf_model_8.predict(data_feed)
autoencoder.display_outputs(tf_preds_8,data_feed)
tf_weights_8 = tf_model_8.get_weights()
plot_tf_weights(tf_weights_8[0])
# %%
tf_model_9 = create_model(hidden_size = 100,lambd = 1e-5)
tf_model_9.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
tf_history_9 = tf_model_9.history.history
plt.plot(tf_history_9['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
tf_preds_9 = tf_model_9.predict(data_feed)
autoencoder.display_outputs(tf_preds_9,data_feed)
tf_weights_9 = tf_model_9.get_weights()
plot_tf_weights(tf_weights_9[0])
# %%
#encoding_dim = 10
rho,beta = 5e-1,1e-1
inp_dim = 256
#lamb = 0
W_scaler = lambda L_pre,L_post : np.sqrt(6/(L_pre + L_post))
def tf_weight_initializer(inp_dim,hidden_dim):
"""
Builds Glorot-uniform initializers for the encoder/decoder weights and biases.
"""
initializer_1 = tf.keras.initializers.RandomUniform(minval=-W_scaler(inp_dim,hidden_dim), maxval=W_scaler(inp_dim,hidden_dim))
initializer_2 = tf.keras.initializers.RandomUniform(minval=-W_scaler(hidden_dim,inp_dim), maxval=W_scaler(hidden_dim,inp_dim))
initializer_3 = tf.keras.initializers.RandomUniform(minval=-W_scaler(inp_dim,hidden_dim), maxval=W_scaler(inp_dim,hidden_dim))
initializer_4 = tf.keras.initializers.RandomUniform(minval=-W_scaler(hidden_dim,inp_dim), maxval=W_scaler(hidden_dim,inp_dim))
return {'W1':initializer_1,
'W2':initializer_2,
'B1':initializer_3,
'B2':initializer_4}
tf_weights = tf_weight_initializer(inp_dim = inp_dim, hidden_dim = encoding_dim)
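`W_scaler` above is the Glorot/Xavier uniform limit sqrt(6 / (fan_in + fan_out)). A quick hedged standalone check (local names) of the bound used for a 256 → 64 layer:

```python
import numpy as np

def glorot_limit(fan_in, fan_out):
    return np.sqrt(6 / (fan_in + fan_out))

lim = glorot_limit(256, 64)
sample = np.random.uniform(-lim, lim, size=(256, 64))
print(round(lim, 4))                                       # 0.1369
print(bool(sample.min() >= -lim and sample.max() <= lim))  # True
```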
# %%
input_img = keras.Input(shape=(inp_dim,))
encoded = layers.Dense(encoding_dim,activation='sigmoid',
kernel_regularizer=tf.keras.regularizers.l2(lamb),
activity_regularizer=KL_divergence(rho,beta),
kernel_initializer = tf_weights['W1'],
bias_initializer = tf_weights['B1'])(input_img)
decoded = layers.Dense(inp_dim,activation='sigmoid',
activity_regularizer=tf.keras.regularizers.l2(lamb),
kernel_initializer = tf_weights['W2'],
bias_initializer = tf_weights['B2'])(encoded)
tf_autoencoder = keras.Model(input_img,decoded)
# %%
tf_autoencoder.summary()
# %%
optimizer = tf.keras.optimizers.SGD(learning_rate=0.9,momentum=0,nesterov=False)
tf_autoencoder.compile(optimizer=optimizer,loss=MeanSquaredError())
# %%
tf_autoencoder.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
# %%
tf_history = tf_autoencoder.history.history
plt.plot(tf_history['loss'])
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
# %%
tf_preds = tf_autoencoder.predict(data_feed)
# %%
autoencoder.display_outputs(tf_preds,data_feed)
# %%
tf_weights = tf_autoencoder.get_weights()
W1 = tf_weights[0]
num_disp = W1.shape[1]
fig = plt.figure(figsize = (9,8))
for i in range(num_disp):
plt.subplot(8,8,i+1)
plt.imshow(W1.T[i].reshape(16,16),cmap = 'gray')
plt.axis('off')
fig.suptitle('Hidden Layer Feature Representation')
plt.show()
# %%
input_img_optim = keras.Input(shape=(inp_dim,))
encoded_optim = layers.Dense(encoding_dim,activation='sigmoid',
kernel_regularizer=tf.keras.regularizers.l2(5e-4),
activity_regularizer=KL_divergence(rho,beta))(input_img_optim)
decoded_optim = layers.Dense(inp_dim,activation='sigmoid',
activity_regularizer=tf.keras.regularizers.l2(5e-4))(encoded_optim)
tf_autoencoder_optim = keras.Model(input_img_optim,decoded_optim)
tf_autoencoder_optim.compile(optimizer='adam',loss=MeanSquaredError())
tf_autoencoder_optim.summary()
# %%
tf_autoencoder_optim.fit(data_feed, data_feed,
epochs=5000,
batch_size=data_feed.shape[0])
# %%
tf_history_optim = tf_autoencoder_optim.history.history
plt.plot(tf_history_optim['loss'],color = 'green')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss versus Epoch')
# %%
tf_preds_optim = tf_autoencoder_optim.predict(data_feed)
autoencoder.display_outputs(tf_preds_optim,data_feed)
# %%
tf_weights_optim = tf_autoencoder_optim.get_weights()
W1 = tf_weights_optim[0]
num_disp = W1.shape[1]
fig = plt.figure(figsize = (9,8))
for i in range(num_disp):
plt.subplot(8,8,i+1)
plt.imshow(W1.T[i].reshape(16,16),cmap = 'gray')
plt.axis('off')
fig.suptitle('Hidden Layer Feature Representation')
plt.show()
# %%
elif question == '3' :
# To add a new cell, type '# %%'
# To add a new markdown cell, type '# %% [markdown]'
# %%
def sigmoid(x):
c = np.clip(x,-700,700)
return 1 / (1 + np.exp(-c))
def dsigmoid(y):
return y * (1 - y)
def tanh(x):
return np.tanh(x)
def dtanh(y):
return 1 - y * y
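Note that `dsigmoid` and `dtanh` above take the activation *output* y rather than the pre-activation x, i.e. σ'(x) = y(1 − y) with y = σ(x). A hedged finite-difference check of that convention (names local to this example):

```python
import numpy as np

def _sig(x):
    return 1 / (1 + np.exp(-x))

x = 0.7
y = _sig(x)
analytic = y * (1 - y)                           # dsigmoid applied to the output
h = 1e-5
numeric = (_sig(x + h) - _sig(x - h)) / (2 * h)  # central difference
print(bool(abs(analytic - numeric) < 1e-8))      # True
```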
# %%
with h5py.File('assign3_data3.h5','r') as F:
# Names variable contains the names of training and testing file
names = list(F.keys())
X_train = np.array(F[names[0]][()])
y_train = np.array(F[names[1]][()])
X_test = np.array(F[names[2]][()])
y_test = np.array(F[names[3]][()])
# %%
class Metrics:
"""
Necessary metrics to evaluate the model.
Functions(labels,preds):
--- confusion_matrix
--- accuracy_score
"""
def confusion_matrix(self,labels,preds):
"""
Takes desired labels and softmax predictions,
returns a confusion matrix.
"""
label = pd.Series(labels,name='Actual')
pred = pd.Series(preds,name='Predicted')
return pd.crosstab(label,pred)
def accuracy_score(self,labels,preds):
"""
Takes desired labels and softmax predictions,
returns the accuracy score in percent.
"""
count = 0
size = labels.shape[0]
for i in range(size):
if preds[i] == labels[i]:
count +=1
return 100 * (count/size)
def accuracy(self,labels,preds):
"""
Takes desired labels and softmax predictions,
returns the accuracy in percent (vectorized).
"""
return 100 * (labels == preds).mean()
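`accuracy` is simply the vectorized form of the counting loop in `accuracy_score`; both return 100 · (#matches / N). A hedged standalone check with local re-implementations:

```python
import numpy as np

labels = np.array([0, 1, 2, 2, 1])
preds = np.array([0, 1, 1, 2, 1])

# Loop form, as in accuracy_score:
loop_acc = 100 * sum(int(p == l) for p, l in zip(preds, labels)) / len(labels)
# Vectorized form, as in accuracy:
vec_acc = 100 * (labels == preds).mean()
print(loop_acc, vec_acc)   # 80.0 80.0
```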
# %%
class Activations:
"""
Necessary activation functions for recurrent neural networks (RNN, LSTM, GRU).
"""
def relu_alternative(self,X):
"""
Rectified linear unit activation(ReLU).
"""
return np.maximum(X, 0)
def ReLU(self,X):
"""
Rectified linear unit activation(ReLU).
Most time efficient version.
"""
return (abs(X) + X) / 2
def relu_another(self,X):
"""
Rectified linear unit activation(ReLU).
"""
return X * (X > 0)
def tanh(self,X):
return np.tanh(X)
def tanh_manuel(self,X):
"""
Hyperbolic tangent activation(tanh).
"""
return (np.exp(X) - np.exp(-X))/(np.exp(X) + np.exp(-X))
def sigmoid(self,X):
"""
Sigmoidal activation.
"""
c = np.clip(X,-700,700)
return 1/(1 + np.exp(-c))
def softmax(self,X):
"""
Stable version of the softmax classifier; note that each row sums to 1.
"""
e_x = np.exp(X - np.max(X, axis=-1, keepdims=True))
return e_x / np.sum(e_x, axis=-1, keepdims=True)
def softmax_stable(self,X):
"""
Softmax normalized over the whole array (global max subtracted); suited to a single sample rather than a batch.
"""
e_x = np.exp(X - np.max(X))
return e_x / np.sum(e_x)
def ReLUDerivative(self,X):
"""
The derivative of the ReLU function w.r.t. given input.
"""
return 1 * (X > 0)
def ReLU_grad(self,X):
"""
The derivative of the ReLU function w.r.t. given input.
"""
X[X<=0] = 0
X[X>1] = 1
return X
def dReLU(self,X):
"""
The derivative of the ReLU function w.r.t. given input.
"""
return np.where(X <= 0, 0, 1)
def dtanh(self,X):
"""
The derivative of the tanh function w.r.t. given input.
"""
return 1-(np.tanh(X)**2)
def dsigmoid(self,X):
"""
The derivative of the sigmoid function w.r.t. given input.
"""
return self.sigmoid(X) * (1-self.sigmoid(X))
def softmax_stable_gradient(self,soft_out):
return soft_out * (1 - soft_out)
def softmax_grad(self,softmax):
s = softmax.reshape(-1,1)
return np.diagflat(s) - np.dot(s, s.T)
def softmax_gradient(self,Sz):
"""Computes the gradient of the softmax function.
z: (T, 1) array of input values where the gradient is computed. T is the
number of output classes.
Returns D (T, T) the Jacobian matrix of softmax(z) at the given z. D[i, j]
is DjSi - the partial derivative of Si w.r.t. input j.
"""
# -SjSi can be computed using an outer product between Sz and itself. Then
# we add back Si for the i=j cases by adding a diagonal matrix with the
# values of Si on its diagonal.
D = -np.outer(Sz, Sz) + np.diag(Sz.flatten())
return D
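The row-wise `softmax` above relies on the max-subtraction trick: subtracting each row's maximum leaves the result unchanged but prevents overflow in `exp`. A hedged standalone check (names local to this example):

```python
import numpy as np

def softmax_rows(X):
    e_x = np.exp(X - np.max(X, axis=-1, keepdims=True))  # shift for stability
    return e_x / np.sum(e_x, axis=-1, keepdims=True)

logits = np.array([[1.0, 2.0, 3.0],
                   [1000.0, 1000.0, 1000.0]])  # naive exp would overflow here
probs = softmax_rows(logits)
print(bool(np.allclose(probs.sum(axis=1), 1.0)))  # True: rows sum to 1
print(bool(np.allclose(probs[1], 1 / 3)))         # True: uniform over tied logits
```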
# %%
class RNN(object):
"""
Recurrent Neural Network for classifying human activity.
RNN encapsulates all necessary logic for training the network.
"""
def __init__(self,input_dim = 3,hidden_dim = 128, seq_len = 150, learning_rate = 1e-1, mom_coeff = 0.85, batch_size = 32, output_class = 6):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(150)
self.input_dim = input_dim
self.hidden_dim = hidden_dim
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim_inp2hid = Xavier(self.input_dim,self.hidden_dim)
self.W1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(self.input_dim,self.hidden_dim))
self.B1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(1,self.hidden_dim))
lim_hid2hid = Xavier(self.hidden_dim,self.hidden_dim)
self.W1_rec= np.random.uniform(-lim_hid2hid,lim_hid2hid,(self.hidden_dim,self.hidden_dim))
lim_hid2out = Xavier(self.hidden_dim,self.output_class)
self.W2 = np.random.uniform(-lim_hid2out,lim_hid2out,(self.hidden_dim,self.output_class))
self.B2 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(1,self.output_class))
# To keep track loss and accuracy score :
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# Storing previous momentum updates :
self.prev_updates = {'W1' : 0,
'B1' : 0,
'W1_rec' : 0,
'W2' : 0,
'B2' : 0}
def forward(self,X) -> tuple:
""" Forward propagation of the RNN through time.
Inputs:
--- X is the batch.
--- h_prev_state is the previous state of the hidden layer.
Returns:
--- (X_state,hidden_state,probs) as a tuple.
------ 1) X_state is the input across all time steps
------ 2) hidden_state is the hidden states across time
------ 3) probs is the probabilities of each output, i.e. the outputs of softmax
"""
X_state = dict()
hidden_state = dict()
output_state = dict()
probs = dict()
self.h_prev_state = np.zeros((1,self.hidden_dim))
hidden_state[-1] = np.copy(self.h_prev_state)
# Loop over time T = 150 :
for t in range(self.seq_len):
# Selecting the time step t across the batch, dimension = (batch_size, input_dim)
X_state[t] = X[:,t]
# Recurrent hidden layer :
hidden_state[t] = np.tanh(np.dot(X_state[t],self.W1) + np.dot(hidden_state[t-1],self.W1_rec) + self.B1)
output_state[t] = np.dot(hidden_state[t],self.W2) + self.B2
# Per class probabilites :
probs[t] = activations.softmax(output_state[t])
return (X_state,hidden_state,probs)
def BPTT(self,cache,Y):
"""
Back propagation through time algorithm.
Inputs:
-- Cache = (X_state,hidden_state,probs)
-- Y = desired output
Returns:
-- Gradients w.r.t. all configurable elements
"""
X_state,hidden_state,probs = cache
# backward pass: compute gradients going backwards
dW1, dW1_rec, dW2 = np.zeros_like(self.W1), np.zeros_like(self.W1_rec), np.zeros_like(self.W2)
dB1, dB2 = np.zeros_like(self.B1), np.zeros_like(self.B2)
dhnext = np.zeros_like(hidden_state[0])
dy = np.copy(probs[self.seq_len - 1])
dy[np.arange(len(Y)),np.argmax(Y,1)] -= 1
dB2 += np.sum(dy,axis = 0, keepdims = True)
dW2 += np.dot(hidden_state[self.seq_len - 1].T,dy)
for t in reversed(range(1,self.seq_len)):
dh = np.dot(dy,self.W2.T) + dhnext
dhrec = (1 - (hidden_state[t] * hidden_state[t])) * dh
dB1 += np.sum(dhrec,axis = 0, keepdims = True)
dW1 += np.dot(X_state[t].T,dhrec)
dW1_rec += np.dot(hidden_state[t-1].T,dhrec)
dhnext = np.dot(dhrec,self.W1_rec.T)
for grad in [dW1,dB1,dW1_rec,dW2,dB2]:
np.clip(grad, -10, 10, out = grad)
return [dW1,dB1,dW1_rec,dW2,dB2]
def earlyStopping(self,ce_train,ce_val,ce_threshold,acc_train,acc_val,acc_threshold):
if ce_train - ce_val < ce_threshold or acc_train - acc_val > acc_threshold:
return True
else:
return False
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes the cross entropy between the labels and the model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
def step(self,grads,momentum = True):
"""
SGD update on mini-batches, optionally with momentum
"""
if momentum:
delta_W1 = -self.learning_rate * grads[0] + self.mom_coeff * self.prev_updates['W1']
delta_B1 = -self.learning_rate * grads[1] + self.mom_coeff * self.prev_updates['B1']
delta_W1_rec = -self.learning_rate * grads[2] + self.mom_coeff * self.prev_updates['W1_rec']
delta_W2 = -self.learning_rate * grads[3] + self.mom_coeff * self.prev_updates['W2']
delta_B2 = -self.learning_rate * grads[4] + self.mom_coeff * self.prev_updates['B2']
self.W1 += delta_W1
self.W1_rec += delta_W1_rec
self.W2 += delta_W2
self.B1 += delta_B1
self.B2 += delta_B2
self.prev_updates['W1'] = delta_W1
self.prev_updates['W1_rec'] = delta_W1_rec
self.prev_updates['W2'] = delta_W2
self.prev_updates['B1'] = delta_B1
self.prev_updates['B2'] = delta_B2
self.learning_rate *= 0.9999
def fit(self,X,Y,X_val,y_val,epochs = 50 ,verbose = True, earlystopping = False):
"""
Given the training dataset, its labels, and the number of epochs,
fits the model and measures performance on the validation data
each epoch.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
for i in range(round(X.shape[0]/self.batch_size)):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
X_feed = X[index]
y_feed = Y[index]
cache_train = self.forward(X_feed)
grads = self.BPTT(cache_train,y_feed)
self.step(grads)
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[2][self.seq_len - 1])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
_,__,probs_test = self.forward(X_val)
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[self.seq_len - 1])
predictions_val = np.argmax(probs_test[self.seq_len - 1],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if earlystopping:
if self.earlyStopping(ce_train = cross_loss_train,ce_val = cross_loss_val,ce_threshold = 3.0,acc_train = acc_train,acc_val = acc_val,acc_threshold = 15):
break
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def predict(self,X):
_,__,probs = self.forward(X)
return np.argmax(probs[self.seq_len - 1],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
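The shape bookkeeping inside `RNN.forward` can be sketched standalone: a minimal tanh recurrence with the same weight layout (input→hidden, hidden→hidden, hidden→output). The dimensions here are illustrative, not the ones used above:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, T, n_in, n_hid, n_out = 4, 5, 3, 8, 6
X = rng.normal(size=(batch, T, n_in))
W1 = rng.normal(size=(n_in, n_hid))        # input -> hidden
W_rec = rng.normal(size=(n_hid, n_hid))    # hidden -> hidden (recurrent)
W2 = rng.normal(size=(n_hid, n_out))       # hidden -> output

h = np.zeros((batch, n_hid))
for t in range(T):
    h = np.tanh(X[:, t] @ W1 + h @ W_rec)  # recurrent hidden update
logits = h @ W2                            # output from the last hidden state
print(logits.shape)   # (4, 6)
```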
# %%
input_dim = 3
activations = Activations()
metrics = Metrics()
model = RNN(input_dim = input_dim,learning_rate = 1e-4, mom_coeff = 0.0, hidden_dim = 128)
# %%
model.fit(X_train,y_train,X_test,y_test,epochs = 35)
# %%
history = model.history()
# %%
plt.figure()
plt.plot(history['TestLoss'],'-o')
plt.plot(history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Categorical Cross Entropy over epochs')
plt.legend(['Test Loss','Train Loss'])
plt.show()
# %%
plt.figure()
plt.plot(history['TestAcc'],'-o')
plt.plot(history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Accuracy over epochs')
plt.legend(['Test Acc','Train Acc'])
plt.show()
# %%
train_preds = model.predict(X_train)
test_preds = model.predict(X_test)
# %%
confusion_mat_train = metrics.confusion_matrix(np.argmax(y_train,1),train_preds)
confusion_mat_test = metrics.confusion_matrix(np.argmax(y_test,1),test_preds)
# %%
body_movements = ['downstairs','jogging','sitting','standing','upstairs','walking']
confusion_mat_train_,confusion_mat_test_ = confusion_mat_train,confusion_mat_test
confusion_mat_train.columns = body_movements
confusion_mat_train.index = body_movements
print(confusion_mat_train)
# %%
confusion_mat_test.columns = body_movements
confusion_mat_test.index = body_movements
print(confusion_mat_test)
# %%
sns.heatmap(confusion_mat_train/np.sum(confusion_mat_train), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
print(confusion_mat_test)
# %%
sns.heatmap(confusion_mat_test/np.sum(confusion_mat_test), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
plt.matshow(confusion_mat_test, cmap=plt.cm.gray_r)
plt.title('Testing Confusion Matrix')
plt.colorbar()
tick_marks = np.arange(len(confusion_mat_test.columns))
plt.xticks(tick_marks, confusion_mat_test.columns, rotation=45)
plt.yticks(tick_marks, confusion_mat_test.index)
plt.tight_layout()
plt.ylabel(confusion_mat_test.index.name)
plt.xlabel(confusion_mat_test.columns.name)
plt.show()
# %%
plt.matshow(confusion_mat_train, cmap=plt.cm.gray_r)
plt.title('Training Confusion Matrix')
plt.colorbar()
tick_marks = np.arange(len(confusion_mat_train.columns))
plt.xticks(tick_marks, confusion_mat_train.columns, rotation=45)
plt.yticks(tick_marks, confusion_mat_train.index)
plt.tight_layout()
plt.ylabel(confusion_mat_train.index.name)
plt.xlabel(confusion_mat_train.columns.name)
plt.show()
# %%
sns.heatmap(confusion_mat_test/np.sum(confusion_mat_test), annot=True,
fmt='.2%',cmap = 'Greens')
plt.show()
# %%
class Multi_Layer_RNN(object):
"""
Multi-layer recurrent neural network (recurrent layer plus an MLP hidden layer) for classifying human activity.
Encapsulates all necessary logic for training the network.
"""
def __init__(self,input_dim = 3,hidden_dim_1 = 128, hidden_dim_2 = 64, seq_len = 150, learning_rate = 1e-1, mom_coeff = 0.85, batch_size = 32, output_class = 6):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(150)
self.input_dim = input_dim
self.hidden_dim_1 = hidden_dim_1
self.hidden_dim_2 = hidden_dim_2
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim_inp2hid = Xavier(self.input_dim,self.hidden_dim_1)
self.W1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(self.input_dim,self.hidden_dim_1))
self.B1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(1,self.hidden_dim_1))
lim_hid2hid = Xavier(self.hidden_dim_1,self.hidden_dim_1)
self.W1_rec= np.random.uniform(-lim_hid2hid,lim_hid2hid,(self.hidden_dim_1,self.hidden_dim_1))
lim_hid2hid2 = Xavier(self.hidden_dim_1,self.hidden_dim_2)
self.W2 = np.random.uniform(-lim_hid2hid2,lim_hid2hid2,(self.hidden_dim_1,self.hidden_dim_2))
self.B2 = np.random.uniform(-lim_hid2hid2,lim_hid2hid2,(1,self.hidden_dim_2))
lim_hid2out = Xavier(self.hidden_dim_2,self.output_class)
self.W3 = np.random.uniform(-lim_hid2out,lim_hid2out,(self.hidden_dim_2,self.output_class))
self.B3 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(1,self.output_class))
# To keep track loss and accuracy score :
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# Storing previous momentum updates :
self.prev_updates = {'W1' : 0,
'B1' : 0,
'W1_rec' : 0,
'W2' : 0,
'B2' : 0,
'W3' : 0,
'B3' : 0}
def forward(self,X) -> tuple:
"""
Forward propagation of the RNN through time.
__________________________________________________________
Inputs:
--- X is the batch.
--- the initial hidden state is zero-initialized internally.
__________________________________________________________
Returns:
--- (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,probs) as a tuple.
------ 1) X_state is the input across all time steps
------ 2) hidden_state_1 is the recurrent hidden state across time
------ 3) mlp_linear / hidden_state_mlp are the MLP pre-activations and activations
------ 4) probs is the per-class probabilities at each step, i.e. the softmax outputs
__________________________________________________________
"""
X_state = dict()
hidden_state_1 = dict()
hidden_state_mlp = dict()
output_state = dict()
probs = dict()
mlp_linear = dict()
self.h_prev_state = np.zeros((1,self.hidden_dim_1))
hidden_state_1[-1] = np.copy(self.h_prev_state)
# Loop over time T = 150 :
for t in range(self.seq_len):
# Selecting first record with 3 inputs, dimension = (batch_size,input_size)
X_state[t] = X[:,t]
# Recurrent hidden layer :
hidden_state_1[t] = np.tanh(np.dot(X_state[t],self.W1) + np.dot(hidden_state_1[t-1],self.W1_rec) + self.B1)
mlp_linear[t] = np.dot(hidden_state_1[t],self.W2) + self.B2
hidden_state_mlp[t] = activations.ReLU(mlp_linear[t])
output_state[t] = np.dot(hidden_state_mlp[t],self.W3) + self.B3
# Per class probabilites :
probs[t] = activations.softmax(output_state[t])
return (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,probs)
def BPTT(self,cache,Y):
"""
Back propagation through time algorithm.
Inputs:
-- cache = (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,probs), as returned by forward()
-- Y = desired output
Returns:
-- Gradients w.r.t. all configurable elements
"""
X_state,hidden_state_1,mlp_linear,hidden_state_mlp,probs = cache
# backward pass: compute gradients going backwards
dW1, dW1_rec, dW2, dW3 = np.zeros_like(self.W1), np.zeros_like(self.W1_rec), np.zeros_like(self.W2),np.zeros_like(self.W3)
dB1, dB2,dB3 = np.zeros_like(self.B1), np.zeros_like(self.B2),np.zeros_like(self.B3)
dhnext = np.zeros_like(hidden_state_1[0])
dy = np.copy(probs[149])
dy[np.arange(len(Y)),np.argmax(Y,1)] -= 1
dW3 += np.dot(hidden_state_mlp[149].T,dy)
dB3 += np.sum(dy,axis = 0, keepdims = True)
dy1 = np.dot(dy,self.W3.T) * activations.ReLU_grad(mlp_linear[149])
dB2 += np.sum(dy1,axis = 0, keepdims = True)
dW2 += np.dot(hidden_state_1[149].T,dy1)
for t in reversed(range(1,self.seq_len)):
dh = np.dot(dy1,self.W2.T) + dhnext
dhrec = (1 - (hidden_state_1[t] * hidden_state_1[t])) * dh
dB1 += np.sum(dhrec,axis = 0, keepdims = True)
dW1 += np.dot(X_state[t].T,dhrec)
dW1_rec += np.dot(hidden_state_1[t-1].T,dhrec)
dhnext = np.dot(dhrec,self.W1_rec.T)
for grad in [dW1,dB1,dW1_rec,dW2,dB2,dW3,dB3]:
np.clip(grad, -10, 10, out = grad)
return [dW1,dB1,dW1_rec,dW2,dB2,dW3,dB3]
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes cross entropy between labels and model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
def step(self,grads,momentum = True):
if momentum:
# Classical momentum: the previous update is added, as in the deeper variants below.
delta_W1 = -self.learning_rate * grads[0] + self.mom_coeff * self.prev_updates['W1']
delta_B1 = -self.learning_rate * grads[1] + self.mom_coeff * self.prev_updates['B1']
delta_W1_rec = -self.learning_rate * grads[2] + self.mom_coeff * self.prev_updates['W1_rec']
delta_W2 = -self.learning_rate * grads[3] + self.mom_coeff * self.prev_updates['W2']
delta_B2 = -self.learning_rate * grads[4] + self.mom_coeff * self.prev_updates['B2']
delta_W3 = -self.learning_rate * grads[5] + self.mom_coeff * self.prev_updates['W3']
delta_B3 = -self.learning_rate * grads[6] + self.mom_coeff * self.prev_updates['B3']
self.W1 += delta_W1
self.W1_rec += delta_W1_rec
self.W2 += delta_W2
self.B1 += delta_B1
self.B2 += delta_B2
self.W3 += delta_W3
self.B3 += delta_B3
self.prev_updates['W1'] = delta_W1
self.prev_updates['W1_rec'] = delta_W1_rec
self.prev_updates['W2'] = delta_W2
self.prev_updates['B1'] = delta_B1
self.prev_updates['B2'] = delta_B2
self.prev_updates['W3'] = delta_W3
self.prev_updates['B3'] = delta_B3
self.learning_rate *= 0.9999
def fit(self,X,Y,X_val,y_val,epochs = 50 ,verbose = True, crossVal = False):
"""
Given the training dataset, its labels and the number of epochs,
fit the model and measure its performance
on the validation set after every epoch.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
for i in range(round(X.shape[0]/self.batch_size)):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
X_feed = X[index]
y_feed = Y[index]
cache_train = self.forward(X_feed)
grads = self.BPTT(cache_train,y_feed)
self.step(grads)
if crossVal:
stop = self.cross_validation(X,X_val,Y,y_val,threshold = 5)
if stop:
break
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[4][149])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
*_, probs_test = self.forward(X_val)
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[149])
predictions_val = np.argmax(probs_test[149],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def predict(self,X):
*_, probs = self.forward(X)
return np.argmax(probs[149],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
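# %%
# Sanity check (toy numbers) for the gradient trick in BPTT above: copying
# probs and subtracting 1 at the label positions equals the closed-form
# softmax + cross-entropy gradient, probs - Y, for one-hot Y.
import numpy as np
demo_probs = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.3, 0.6]])
demo_Y = np.array([[1, 0, 0],
                   [0, 0, 1]])  # one-hot labels
demo_dy = np.copy(demo_probs)
demo_dy[np.arange(len(demo_Y)), np.argmax(demo_Y, 1)] -= 1
print(np.allclose(demo_dy, demo_probs - demo_Y))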
# %%
multilayer_rnn = Multi_Layer_RNN(learning_rate=1e-4,mom_coeff=0.0,hidden_dim_1 = 128, hidden_dim_2 = 64)
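# %%
# The class above draws weights from Uniform(-a, a) with the Xavier limit
# a = sqrt(6 / (fan_in + fan_out)); Uniform(-a, a) has variance a**2 / 3,
# which here works out to 2 / (fan_in + fan_out). Quick check with the
# default input-to-hidden dims (3 -> 128):
import math
demo_fan_in, demo_fan_out = 3, 128
demo_lim = math.sqrt(6 / (demo_fan_in + demo_fan_out))
demo_var = demo_lim ** 2 / 3
print(demo_lim, demo_var)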
# %%
multilayer_rnn.fit(X_train,y_train,X_test,y_test,epochs = 35)
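# %%
# The batching inside fit() permutes the sample indices once per epoch and
# slices fixed-size windows. A toy illustration (10 samples, batch size 4);
# note that with floor division the trailing 10 % 4 = 2 samples are skipped
# for the epoch, whereas round() as used in fit() can instead over-run:
import numpy as np
demo_perm = np.random.permutation(10)
demo_batches = [demo_perm[i * 4:(i + 1) * 4] for i in range(10 // 4)]
print(demo_batches)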
# %%
multilayer_rnn_history = multilayer_rnn.history()
# %%
plt.figure()
plt.plot(multilayer_rnn_history['TestLoss'],'-o')
plt.plot(multilayer_rnn_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Categorical Cross Entropy over epochs')
plt.legend(['Test Loss','Train Loss'])
plt.show()
# %%
plt.figure()
plt.plot(multilayer_rnn_history['TestAcc'],'-o')
plt.plot(multilayer_rnn_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Accuracy over epochs')
plt.legend(['Test Acc','Train Acc'])
plt.show()
# %%
plt.figure()
plt.plot(multilayer_rnn_history['TrainAcc'],'-o')
plt.plot(history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['Multi Layer RNN','Vanilla RNN'])
plt.show()
# %%
plt.plot(multilayer_rnn_history['TestAcc'],'-o')
plt.plot(history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['Multi Layer RNN','Vanilla RNN'])
plt.show()
# %%
train_preds_multilayer_rnn = multilayer_rnn.predict(X_train)
test_preds_multilayer_rnn = multilayer_rnn.predict(X_test)
confusion_mat_train_multilayer_rnn = metrics.confusion_matrix(np.argmax(y_train,1),train_preds_multilayer_rnn)
confusion_mat_test_multilayer_rnn = metrics.confusion_matrix(np.argmax(y_test,1),test_preds_multilayer_rnn)
body_movements = ['downstairs','jogging','sitting','standing','upstairs','walking']
confusion_mat_train_multilayer_rnn.columns = body_movements
confusion_mat_train_multilayer_rnn.index = body_movements
confusion_mat_test_multilayer_rnn.columns = body_movements
confusion_mat_test_multilayer_rnn.index = body_movements
print(confusion_mat_train_multilayer_rnn)
# %%
print(confusion_mat_test_multilayer_rnn)
# %%
sns.heatmap(confusion_mat_test_multilayer_rnn/np.sum(confusion_mat_test_multilayer_rnn), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
sns.heatmap(confusion_mat_train_multilayer_rnn/np.sum(confusion_mat_train_multilayer_rnn), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
class Three_Hidden_Layer_RNN(object):
"""
Recurrent Neural Network for classifying human activity.
RNN encapsulates all necessary logic for training the network.
"""
def __init__(self,input_dim = 3,hidden_dim_1 = 128, hidden_dim_2 = 64,hidden_dim_3 = 32, seq_len = 150, learning_rate = 1e-1, mom_coeff = 0.85, batch_size = 32, output_class = 6):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(150)
self.input_dim = input_dim
self.hidden_dim_1 = hidden_dim_1
self.hidden_dim_2 = hidden_dim_2
self.hidden_dim_3 = hidden_dim_3
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim_inp2hid = Xavier(self.input_dim,self.hidden_dim_1)
self.W1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(self.input_dim,self.hidden_dim_1))
self.B1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(1,self.hidden_dim_1))
lim_hid2hid = Xavier(self.hidden_dim_1,self.hidden_dim_1)
self.W1_rec= np.random.uniform(-lim_hid2hid,lim_hid2hid,(self.hidden_dim_1,self.hidden_dim_1))
lim_hid2hid2 = Xavier(self.hidden_dim_1,self.hidden_dim_2)
self.W2 = np.random.uniform(-lim_hid2hid2,lim_hid2hid2,(self.hidden_dim_1,self.hidden_dim_2))
self.B2 = np.random.uniform(-lim_hid2hid2,lim_hid2hid2,(1,self.hidden_dim_2))
lim_hid2hid3 = Xavier(self.hidden_dim_2,self.hidden_dim_3)
self.W3 = np.random.uniform(-lim_hid2hid3,lim_hid2hid3,(self.hidden_dim_2,self.hidden_dim_3))
self.B3 = np.random.uniform(-lim_hid2hid3,lim_hid2hid3,(1,self.hidden_dim_3))
lim_hid2out = Xavier(self.hidden_dim_3,self.output_class)
self.W4 = np.random.uniform(-lim_hid2out,lim_hid2out,(self.hidden_dim_3,self.output_class))
self.B4 = np.random.uniform(-lim_hid2out,lim_hid2out,(1,self.output_class))
# To keep track loss and accuracy score :
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# Storing previous momentum updates :
self.prev_updates = {'W1' : 0,
'B1' : 0,
'W1_rec' : 0,
'W2' : 0,
'B2' : 0,
'W3' : 0,
'W4' : 0,
'B3' : 0,
'B4' : 0}
def forward(self,X) -> tuple:
"""
Forward propagation of the RNN through time.
__________________________________________________________
Inputs:
--- X is the batch.
--- the initial hidden state is zero-initialized internally.
__________________________________________________________
Returns:
--- (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,probs) as a tuple.
------ 1) X_state is the input across all time steps
------ 2) hidden_state_1 is the recurrent hidden state across time
------ 3) the mlp_linear_* / hidden_state_mlp_* entries are the MLP pre-activations and activations
------ 4) probs is the per-class probabilities at each step, i.e. the softmax outputs
__________________________________________________________
"""
X_state = dict()
hidden_state_1 = dict()
hidden_state_mlp = dict()
hidden_state_mlp_2 = dict()
output_state = dict()
probs = dict()
mlp_linear = dict()
mlp_linear_2 = dict()
self.h_prev_state = np.zeros((1,self.hidden_dim_1))
hidden_state_1[-1] = np.copy(self.h_prev_state)
# Loop over time T = 150 :
for t in range(self.seq_len):
# Selecting first record with 3 inputs, dimension = (batch_size,input_size)
X_state[t] = X[:,t]
# Recurrent hidden layer :
hidden_state_1[t] = np.tanh(np.dot(X_state[t],self.W1) + np.dot(hidden_state_1[t-1],self.W1_rec) + self.B1)
mlp_linear[t] = np.dot(hidden_state_1[t],self.W2) + self.B2
hidden_state_mlp[t] = activations.ReLU(mlp_linear[t])
mlp_linear_2[t] = np.dot(hidden_state_mlp[t],self.W3) + self.B3
hidden_state_mlp_2[t] = activations.ReLU(mlp_linear_2[t])
output_state[t] = np.dot(hidden_state_mlp_2[t],self.W4) + self.B4
# Per class probabilites :
probs[t] = activations.softmax(output_state[t])
return (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,probs)
def BPTT(self,cache,Y):
"""
Back propagation through time algorithm.
Inputs:
-- cache = (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,probs), as returned by forward()
-- Y = desired output
Returns:
-- Gradients w.r.t. all configurable elements
"""
X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,probs = cache
# backward pass: compute gradients going backwards
dW1, dW1_rec, dW2, dW3, dW4 = np.zeros_like(self.W1), np.zeros_like(self.W1_rec), np.zeros_like(self.W2),np.zeros_like(self.W3),np.zeros_like(self.W4)
dB1, dB2,dB3,dB4 = np.zeros_like(self.B1), np.zeros_like(self.B2),np.zeros_like(self.B3),np.zeros_like(self.B4)
dhnext = np.zeros_like(hidden_state_1[0])
for t in reversed(range(1,self.seq_len)):
dy = np.copy(probs[t])
dy[np.arange(len(Y)),np.argmax(Y,1)] -= 1
dW4 += np.dot(hidden_state_mlp_2[t].T,dy)
dB4 += np.sum(dy,axis = 0, keepdims = True)
dy1 = np.dot(dy,self.W4.T) * activations.ReLU_grad(mlp_linear_2[t])
dW3 += np.dot(hidden_state_mlp[t].T,dy1)
dB3 += np.sum(dy1,axis = 0, keepdims = True)
dy2 = np.dot(dy1,self.W3.T) * activations.ReLU_grad(mlp_linear[t])
dB2 += np.sum(dy2,axis = 0, keepdims = True)
dW2 += np.dot(hidden_state_1[t].T,dy2)
dh = np.dot(dy2,self.W2.T) + dhnext
dhrec = (1 - (hidden_state_1[t] * hidden_state_1[t])) * dh
dB1 += np.sum(dhrec,axis = 0, keepdims = True)
dW1 += np.dot(X_state[t].T,dhrec)
dW1_rec += np.dot(hidden_state_1[t-1].T,dhrec)
dhnext = np.dot(dhrec,self.W1_rec.T)
for grad in [dW1,dB1,dW1_rec,dW2,dB2,dW3,dB3,dW4,dB4]:
np.clip(grad, -10, 10, out = grad)
return [dW1,dB1,dW1_rec,dW2,dB2,dW3,dB3,dW4,dB4]
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes cross entropy between labels and model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
def step(self,grads,momentum = True):
if momentum:
delta_W1 = -self.learning_rate * grads[0] + self.mom_coeff * self.prev_updates['W1']
delta_B1 = -self.learning_rate * grads[1] + self.mom_coeff * self.prev_updates['B1']
delta_W1_rec = -self.learning_rate * grads[2] + self.mom_coeff * self.prev_updates['W1_rec']
delta_W2 = -self.learning_rate * grads[3] + self.mom_coeff * self.prev_updates['W2']
delta_B2 = -self.learning_rate * grads[4] + self.mom_coeff * self.prev_updates['B2']
delta_W3 = -self.learning_rate * grads[5] + self.mom_coeff * self.prev_updates['W3']
delta_B3 = -self.learning_rate * grads[6] + self.mom_coeff * self.prev_updates['B3']
delta_W4 = -self.learning_rate * grads[7] + self.mom_coeff * self.prev_updates['W4']
delta_B4 = -self.learning_rate * grads[8] + self.mom_coeff * self.prev_updates['B4']
self.W1 += delta_W1
self.W1_rec += delta_W1_rec
self.W2 += delta_W2
self.B1 += delta_B1
self.B2 += delta_B2
self.W3 += delta_W3
self.B3 += delta_B3
self.W4 += delta_W4
self.B4 += delta_B4
self.prev_updates['W1'] = delta_W1
self.prev_updates['W1_rec'] = delta_W1_rec
self.prev_updates['W2'] = delta_W2
self.prev_updates['B1'] = delta_B1
self.prev_updates['B2'] = delta_B2
self.prev_updates['W3'] = delta_W3
self.prev_updates['B3'] = delta_B3
self.prev_updates['W4'] = delta_W4
self.prev_updates['B4'] = delta_B4
self.learning_rate *= 0.9999
def fit(self,X,Y,X_val,y_val,epochs = 50 ,verbose = True, crossVal = False):
"""
Given the training dataset, its labels and the number of epochs,
fit the model and measure its performance
on the validation set after every epoch.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
for i in range(round(X.shape[0]/self.batch_size)):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
X_feed = X[index]
y_feed = Y[index]
cache_train = self.forward(X_feed)
grads = self.BPTT(cache_train,y_feed)
self.step(grads)
if crossVal:
stop = self.cross_validation(X,X_val,Y,y_val,threshold = 5)
if stop:
break
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[6][149])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
*_, probs_test = self.forward(X_val)
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[149])
predictions_val = np.argmax(probs_test[149],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def predict(self,X):
*_, probs = self.forward(X)
return np.argmax(probs[149],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
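# %%
# The loop at the end of BPTT above clips every gradient in place with
# np.clip(..., out=grad); a minimal check on a toy gradient:
import numpy as np
demo_grad = np.array([-25.0, -3.0, 0.5, 42.0])
np.clip(demo_grad, -10, 10, out=demo_grad)
print(demo_grad)  # -> [-10.  -3.   0.5  10.]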
# %%
three_layer_rnn = Three_Hidden_Layer_RNN(hidden_dim_1 = 128, hidden_dim_2 = 64,hidden_dim_3 = 32, learning_rate = 1e-4, mom_coeff = 0.0, batch_size = 32, output_class = 6)
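# %%
# step() applies classical momentum, delta = mom_coeff * prev_delta
# - learning_rate * grad, followed by a multiplicative learning-rate decay.
# Sketched on a scalar with an illustrative mom_coeff of 0.9 (the model just
# built uses mom_coeff = 0.0, which makes the momentum term vanish):
demo_lr, demo_mom = 0.1, 0.9
demo_prev, demo_grad_s, demo_w = 0.05, 2.0, 1.0
demo_delta = -demo_lr * demo_grad_s + demo_mom * demo_prev
demo_w += demo_delta
demo_prev = demo_delta
demo_lr *= 0.9999
print(demo_w, demo_lr)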
# %%
three_layer_rnn.fit(X_train,y_train,X_test,y_test,epochs=15)
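# %%
# The CategoricalCrossEntropy method averages -sum(y * log p) over the batch.
# On a toy 2-sample batch the loss is the mean of -log p(correct class):
import numpy as np
demo_labels = np.array([[1.0, 0.0], [0.0, 1.0]])
demo_preds = np.clip(np.array([[0.9, 0.1], [0.2, 0.8]]), 1e-12, 1.0 - 1e-12)
demo_loss = -np.sum(demo_labels * np.log(demo_preds + 1e-9)) / demo_preds.shape[0]
print(demo_loss)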
# %%
three_layer_rnn_v1 = Three_Hidden_Layer_RNN(hidden_dim_1 = 128, hidden_dim_2 = 64,hidden_dim_3 = 32, learning_rate = 5e-5, mom_coeff = 0.0, batch_size = 32, output_class = 6)
three_layer_rnn_v1.fit(X_train,y_train,X_test,y_test,epochs=15)
# %%
three_layer_rnn_v2 = Three_Hidden_Layer_RNN(hidden_dim_1 = 128, hidden_dim_2 = 64,hidden_dim_3 = 32, learning_rate = 1e-4, mom_coeff = 0.0, batch_size = 32, output_class = 6)
three_layer_rnn_v2.fit(X_train,y_train,X_test,y_test,epochs=15)
# %%
three_layer_rnn_history = three_layer_rnn.history()
plt.figure()
plt.plot(three_layer_rnn_history['TestLoss'],'-o')
plt.plot(three_layer_rnn_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Categorical Cross Entropy over epochs')
plt.legend(['Test Loss','Train Loss'])
plt.show()
# %%
plt.figure()
plt.plot(three_layer_rnn_history['TestAcc'],'-o')
plt.plot(three_layer_rnn_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Accuracy over epochs')
plt.legend(['Test Acc','Train Acc'])
plt.show()
# %%
plt.figure()
plt.plot(three_layer_rnn_history['TrainAcc'],'-o')
plt.plot(multilayer_rnn_history['TrainAcc'],'-o')
plt.plot(history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['3 Hidden Layer RNN','Multi Layer RNN','Vanilla RNN'])
plt.show()
# %%
plt.figure()
plt.plot(three_layer_rnn_history['TestAcc'],'-o')
plt.plot(multilayer_rnn_history['TestAcc'],'-o')
plt.plot(history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['3 Hidden Layer RNN','Multi Layer RNN','Vanilla RNN'])
plt.show()
# %%
train_preds_three_layer_rnn_history = three_layer_rnn.predict(X_train)
test_preds_three_layer_rnn_history = three_layer_rnn.predict(X_test)
confusion_mat_train_three_layer_rnn_history = metrics.confusion_matrix(np.argmax(y_train,1),train_preds_three_layer_rnn_history)
confusion_mat_test_three_layer_rnn_history = metrics.confusion_matrix(np.argmax(y_test,1),test_preds_three_layer_rnn_history)
body_movements = ['downstairs','jogging','sitting','standing','upstairs','walking']
confusion_mat_train_three_layer_rnn_history.columns = body_movements
confusion_mat_train_three_layer_rnn_history.index = body_movements
confusion_mat_test_three_layer_rnn_history.columns = body_movements
confusion_mat_test_three_layer_rnn_history.index = body_movements
print(confusion_mat_train_three_layer_rnn_history)
# %%
sns.heatmap(confusion_mat_test_three_layer_rnn_history/np.sum(confusion_mat_test_three_layer_rnn_history), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
sns.heatmap(confusion_mat_train_three_layer_rnn_history/np.sum(confusion_mat_train_three_layer_rnn_history), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
class Five_Hidden_Layer_RNN(object):
"""
Recurrent Neural Network for classifying human activity.
RNN encapsulates all necessary logic for training the network.
"""
def __init__(self,input_dim = 3,hidden_dim_1 = 128, hidden_dim_2 = 64,hidden_dim_3 = 32,hidden_dim_4 = 16 ,hidden_dim_5 = 8, seq_len = 150, learning_rate = 1e-1, mom_coeff = 0.85, batch_size = 32, output_class = 6):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(150)
self.input_dim = input_dim
self.hidden_dim_1 = hidden_dim_1
self.hidden_dim_2 = hidden_dim_2
self.hidden_dim_3 = hidden_dim_3
self.hidden_dim_4 = hidden_dim_4
self.hidden_dim_5 = hidden_dim_5
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim_inp2hid = Xavier(self.input_dim,self.hidden_dim_1)
self.W1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(self.input_dim,self.hidden_dim_1))
self.B1 = np.random.uniform(-lim_inp2hid,lim_inp2hid,(1,self.hidden_dim_1))
lim_hid2hid = Xavier(self.hidden_dim_1,self.hidden_dim_1)
self.W1_rec= np.random.uniform(-lim_hid2hid,lim_hid2hid,(self.hidden_dim_1,self.hidden_dim_1))
lim_hid2hid2 = Xavier(self.hidden_dim_1,self.hidden_dim_2)
self.W2 = np.random.uniform(-lim_hid2hid2,lim_hid2hid2,(self.hidden_dim_1,self.hidden_dim_2))
self.B2 = np.random.uniform(-lim_hid2hid2,lim_hid2hid2,(1,self.hidden_dim_2))
lim_hid2hid3 = Xavier(self.hidden_dim_2,self.hidden_dim_3)
self.W3 = np.random.uniform(-lim_hid2hid3,lim_hid2hid3,(self.hidden_dim_2,self.hidden_dim_3))
self.B3 = np.random.uniform(-lim_hid2hid3,lim_hid2hid3,(1,self.hidden_dim_3))
lim_hid2hid4 = Xavier(self.hidden_dim_3,self.hidden_dim_4)
self.W4 = np.random.uniform(-lim_hid2hid4,lim_hid2hid4,(self.hidden_dim_3,self.hidden_dim_4))
self.B4 = np.random.uniform(-lim_hid2hid4,lim_hid2hid4,(1,self.hidden_dim_4))
lim_hid2hid5 = Xavier(self.hidden_dim_4,self.hidden_dim_5)
self.W5 = np.random.uniform(-lim_hid2hid5,lim_hid2hid5,(self.hidden_dim_4,self.hidden_dim_5))
self.B5 = np.random.uniform(-lim_hid2hid5,lim_hid2hid5,(1,self.hidden_dim_5))
lim_hid2out = Xavier(self.hidden_dim_5,self.output_class)
self.W6 = np.random.uniform(-lim_hid2out,lim_hid2out,(self.hidden_dim_5,self.output_class))
self.B6 = np.random.uniform(-lim_hid2out,lim_hid2out,(1,self.output_class))
# To keep track loss and accuracy score :
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# Storing previous momentum updates :
self.prev_updates = {'W1' : 0,
'B1' : 0,
'W1_rec' : 0,
'W2' : 0,
'B2' : 0,
'W3' : 0,
'W4' : 0,
'B3' : 0,
'B4' : 0,
'W5' : 0,
'W6' : 0,
'B5' : 0,
'B6' : 0}
def forward(self,X) -> tuple:
"""
Forward propagation of the RNN through time.
__________________________________________________________
Inputs:
--- X is the batch.
--- the initial hidden state is zero-initialized internally.
__________________________________________________________
Returns:
--- (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,mlp_linear_3,hidden_state_mlp_3,mlp_linear_4,hidden_state_mlp_4,probs) as a tuple.
------ 1) X_state is the input across all time steps
------ 2) hidden_state_1 is the recurrent hidden state across time
------ 3) the mlp_linear_* / hidden_state_mlp_* entries are the MLP pre-activations and activations
------ 4) probs is the per-class probabilities at each step, i.e. the softmax outputs
__________________________________________________________
"""
X_state = dict()
hidden_state_1 = dict()
hidden_state_mlp = dict()
hidden_state_mlp_2 = dict()
hidden_state_mlp_3 = dict()
hidden_state_mlp_4 = dict()
output_state = dict()
probs = dict()
mlp_linear = dict()
mlp_linear_2 = dict()
mlp_linear_3 = dict()
mlp_linear_4 = dict()
self.h_prev_state = np.zeros((1,self.hidden_dim_1))
hidden_state_1[-1] = np.copy(self.h_prev_state)
# Loop over time T = 150 :
for t in range(self.seq_len):
# Selecting first record with 3 inputs, dimension = (batch_size,input_size)
X_state[t] = X[:,t]
# Recurrent hidden layer :
hidden_state_1[t] = np.tanh(np.dot(X_state[t],self.W1) + np.dot(hidden_state_1[t-1],self.W1_rec) + self.B1)
mlp_linear[t] = np.dot(hidden_state_1[t],self.W2) + self.B2
hidden_state_mlp[t] = activations.ReLU(mlp_linear[t])
mlp_linear_2[t] = np.dot(hidden_state_mlp[t],self.W3) + self.B3
hidden_state_mlp_2[t] = activations.ReLU(mlp_linear_2[t])
mlp_linear_3[t] = np.dot(hidden_state_mlp_2[t],self.W4) + self.B4
hidden_state_mlp_3[t] = activations.ReLU(mlp_linear_3[t])
mlp_linear_4[t] = np.dot(hidden_state_mlp_3[t],self.W5) + self.B5
hidden_state_mlp_4[t] = activations.ReLU(mlp_linear_4[t])
output_state[t] = np.dot(hidden_state_mlp_4[t],self.W6) + self.B6
# Per class probabilites :
probs[t] = activations.softmax(output_state[t])
return (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,mlp_linear_3,hidden_state_mlp_3,mlp_linear_4,hidden_state_mlp_4,probs)
def BPTT(self,cache,Y):
"""
Back propagation through time algorithm.
Inputs:
-- Cache = (X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,mlp_linear_3,hidden_state_mlp_3,mlp_linear_4,hidden_state_mlp_4,probs)
-- Y = desired output
Returns:
-- Gradients w.r.t. all configurable elements
"""
X_state,hidden_state_1,mlp_linear,hidden_state_mlp,mlp_linear_2,hidden_state_mlp_2,mlp_linear_3,hidden_state_mlp_3,mlp_linear_4,hidden_state_mlp_4,probs = cache
# backward pass: compute gradients going backwards
dW1, dW1_rec, dW2, dW3, dW4, dW5, dW6 = np.zeros_like(self.W1), np.zeros_like(self.W1_rec), np.zeros_like(self.W2),np.zeros_like(self.W3),np.zeros_like(self.W4),np.zeros_like(self.W5),np.zeros_like(self.W6)
dB1, dB2,dB3,dB4,dB5,dB6 = np.zeros_like(self.B1), np.zeros_like(self.B2),np.zeros_like(self.B3),np.zeros_like(self.B4),np.zeros_like(self.B5),np.zeros_like(self.B6)
dhnext = np.zeros_like(hidden_state_1[0])
for t in reversed(range(1,self.seq_len)):
dy = np.copy(probs[t])  # per-step error, matching Three_Hidden_Layer_RNN
dy[np.arange(len(Y)),np.argmax(Y,1)] -= 1
dW6 += np.dot(hidden_state_mlp_4[t].T,dy)
dB6 += np.sum(dy,axis = 0, keepdims = True)
dy1 = np.dot(dy,self.W6.T) * activations.ReLU_grad(mlp_linear_4[t])
dW5 += np.dot(hidden_state_mlp_3[t].T,dy1)
dB5 += np.sum(dy1,axis = 0, keepdims = True)
dy2 = np.dot(dy1,self.W5.T) * activations.ReLU_grad(mlp_linear_3[t])
dW4 += np.dot(hidden_state_mlp_2[t].T,dy2)
dB4 += np.sum(dy2,axis = 0, keepdims = True)
dy3 = np.dot(dy2,self.W4.T) * activations.ReLU_grad(mlp_linear_2[t])
dW3 += np.dot(hidden_state_mlp[t].T,dy3)
dB3 += np.sum(dy3,axis = 0, keepdims = True)
dy4 = np.dot(dy3,self.W3.T) * activations.ReLU_grad(mlp_linear[t])
dB2 += np.sum(dy4,axis = 0, keepdims = True)
dW2 += np.dot(hidden_state_1[t].T,dy4)
dh = np.dot(dy4,self.W2.T) + dhnext
dhrec = (1 - (hidden_state_1[t] * hidden_state_1[t])) * dh
dB1 += np.sum(dhrec,axis = 0, keepdims = True)
dW1 += np.dot(X_state[t].T,dhrec)
dW1_rec += np.dot(hidden_state_1[t-1].T,dhrec)
dhnext = np.dot(dhrec,self.W1_rec.T)
for grad in [dW1,dB1,dW1_rec,dW2,dB2,dW3,dB3,dW4,dB4,dW5,dB5,dW6,dB6]:
np.clip(grad, -10, 10, out = grad)
return [dW1,dB1,dW1_rec,dW2,dB2,dW3,dB3,dW4,dB4,dW5,dB5,dW6,dB6]
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes cross entropy between labels and model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
def step(self,grads,momentum = True):
if momentum:
delta_W1 = -self.learning_rate * grads[0] + self.mom_coeff * self.prev_updates['W1']
delta_B1 = -self.learning_rate * grads[1] + self.mom_coeff * self.prev_updates['B1']
delta_W1_rec = -self.learning_rate * grads[2] + self.mom_coeff * self.prev_updates['W1_rec']
delta_W2 = -self.learning_rate * grads[3] + self.mom_coeff * self.prev_updates['W2']
delta_B2 = -self.learning_rate * grads[4] + self.mom_coeff * self.prev_updates['B2']
delta_W3 = -self.learning_rate * grads[5] + self.mom_coeff * self.prev_updates['W3']
delta_B3 = -self.learning_rate * grads[6] + self.mom_coeff * self.prev_updates['B3']
delta_W4 = -self.learning_rate * grads[7] + self.mom_coeff * self.prev_updates['W4']
delta_B4 = -self.learning_rate * grads[8] + self.mom_coeff * self.prev_updates['B4']
delta_W5 = -self.learning_rate * grads[9] + self.mom_coeff * self.prev_updates['W5']
delta_B5 = -self.learning_rate * grads[10] + self.mom_coeff * self.prev_updates['B5']
delta_W6 = -self.learning_rate * grads[11] + self.mom_coeff * self.prev_updates['W6']
delta_B6 = -self.learning_rate * grads[12] + self.mom_coeff * self.prev_updates['B6']
self.W1 += delta_W1
self.W1_rec += delta_W1_rec
self.W2 += delta_W2
self.B1 += delta_B1
self.B2 += delta_B2
self.W3 += delta_W3
self.B3 += delta_B3
self.W4 += delta_W4
self.B4 += delta_B4
self.W5 += delta_W5
self.B5 += delta_B5
self.W6 += delta_W6
self.B6 += delta_B6
self.prev_updates['W1'] = delta_W1
self.prev_updates['W1_rec'] = delta_W1_rec
self.prev_updates['W2'] = delta_W2
self.prev_updates['B1'] = delta_B1
self.prev_updates['B2'] = delta_B2
self.prev_updates['W3'] = delta_W3
self.prev_updates['B3'] = delta_B3
self.prev_updates['W4'] = delta_W4
self.prev_updates['B4'] = delta_B4
self.prev_updates['W5'] = delta_W5
self.prev_updates['B5'] = delta_B5
self.prev_updates['W6'] = delta_W6
self.prev_updates['B6'] = delta_B6
self.learning_rate *= 0.9999
def fit(self,X,Y,X_val,y_val,epochs = 50 ,verbose = True, crossVal = False):
"""
Given the training dataset, its labels and the number of epochs,
fit the model and measure its performance
on the validation set after every epoch.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
for i in range(round(X.shape[0]/self.batch_size)):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
X_feed = X[index]
y_feed = Y[index]
cache_train = self.forward(X_feed)
grads = self.BPTT(cache_train,y_feed)
self.step(grads)
if crossVal:
stop = self.cross_validation(X,X_val,Y,y_val,threshold = 5)
if stop:
break
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[10][149])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
*_, probs_test = self.forward(X_val)
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[149])
predictions_val = np.argmax(probs_test[149],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def predict(self,X):
*_, probs = self.forward(X)
return np.argmax(probs[self.seq_len - 1],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
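The `step` method above combines momentum with a slow exponential learning-rate decay (`self.learning_rate *= 0.9999`). A minimal sketch of that same update rule on a single toy parameter (all numbers hypothetical):

```python
import numpy as np

# Momentum + decay update as used in `step`:
# delta = -lr * grad + mom * prev_delta;  param += delta;  lr *= 0.9999
lr, mom = 1e-2, 0.9
param = np.array([1.0])
prev_delta = np.zeros_like(param)
for grad in (np.array([0.5]), np.array([0.4]), np.array([0.3])):
    delta = -lr * grad + mom * prev_delta   # momentum accumulates past steps
    param += delta
    prev_delta = delta
    lr *= 0.9999                            # exponential step-size decay
```

With a positive gradient throughout, each step moves the parameter further downhill than plain SGD would, because the momentum term keeps adding the previous (negative) update.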
# %%
five_hidden_layer_rnn = Five_Hidden_Layer_RNN(hidden_dim_1 = 128, hidden_dim_2 = 64,hidden_dim_3 = 32,hidden_dim_4 = 16 ,hidden_dim_5 = 8, learning_rate = 1e-4, mom_coeff = 0.0)
# %%
five_hidden_layer_rnn.fit(X_train,y_train,X_test,y_test,epochs = 35)
# %%
five_hidden_layer_rnn_history = five_hidden_layer_rnn.history()
plt.figure()
plt.plot(five_hidden_layer_rnn_history['TestLoss'],'-o')
plt.plot(five_hidden_layer_rnn_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Categorical Cross Entropy over epochs')
plt.legend(['Test Loss','Train Loss'])
plt.show()
# %%
plt.figure()
plt.plot(five_hidden_layer_rnn_history['TestAcc'],'-o')
plt.plot(five_hidden_layer_rnn_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Accuracy over epochs')
plt.legend(['Test Acc','Train Acc'])
plt.show()
# %%
plt.figure()
plt.plot(five_hidden_layer_rnn_history['TrainAcc'],'-o')
plt.plot(three_layer_rnn_history['TrainAcc'],'-o')
plt.plot(multilayer_rnn_history['TrainAcc'],'-o')
plt.plot(history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['Five hidden layer RNN','3 hidden layer RNN','Multi Layer RNN','Vanilla RNN'])
plt.show()
# %%
plt.figure()
plt.plot(five_hidden_layer_rnn_history['TestAcc'],'-o')
plt.plot(three_layer_rnn_history['TestAcc'],'-o')
plt.plot(multilayer_rnn_history['TestAcc'],'-o')
plt.plot(history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['Five hidden layer RNN','3 hidden layer RNN','Multi Layer RNN','Vanilla RNN'])
plt.show()
# %%
train_preds_five_hidden_layer_rnn = five_hidden_layer_rnn.predict(X_train)
test_preds_five_hidden_layer_rnn = five_hidden_layer_rnn.predict(X_test)
confusion_mat_train_five_hidden_layer_rnn = metrics.confusion_matrix(np.argmax(y_train,1),train_preds_five_hidden_layer_rnn)
confusion_mat_test_five_hidden_layer_rnn = metrics.confusion_matrix(np.argmax(y_test,1),test_preds_five_hidden_layer_rnn)
body_movements = ['downstairs','jogging','sitting','standing','upstairs','walking']
confusion_mat_train_five_hidden_layer_rnn.columns = body_movements
confusion_mat_train_five_hidden_layer_rnn.index = body_movements
confusion_mat_test_five_hidden_layer_rnn.columns = body_movements
confusion_mat_test_five_hidden_layer_rnn.index = body_movements
print(confusion_mat_test_five_hidden_layer_rnn)
# %%
sns.heatmap(confusion_mat_test_five_hidden_layer_rnn/np.sum(confusion_mat_test_five_hidden_layer_rnn), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
sns.heatmap(confusion_mat_train_five_hidden_layer_rnn/np.sum(confusion_mat_train_five_hidden_layer_rnn), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %% [markdown]
# LSTM
# %%
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def dsigmoid(y):
return y * (1 - y)
def tanh(x):
return np.tanh(x)
def dtanh(y):
return 1 - y * y
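Note that `dsigmoid` and `dtanh` take the already-activated value `y`, not the pre-activation `x`. A quick finite-difference sanity check of both derivatives:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Central finite differences at x = 0.3 against the analytic forms,
# which are evaluated on the *activated* values:
x, eps = 0.3, 1e-6
num_dsig = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
ana_dsig = sigmoid(x) * (1 - sigmoid(x))          # dsigmoid(sigmoid(x))
num_dtanh = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
ana_dtanh = 1 - np.tanh(x) ** 2                   # dtanh(tanh(x))
assert abs(num_dsig - ana_dsig) < 1e-8
assert abs(num_dtanh - ana_dtanh) < 1e-8
```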
# %%
class LSTM(object):
"""
Long Short-Term Memory (LSTM) recurrent neural network. Encapsulates the network's hyperparameters, architecture, and all logic needed for training.
"""
def __init__(self,input_dim = 3,hidden_dim = 100,output_class = 6,seq_len = 150,batch_size = 30,learning_rate = 1e-1,mom_coeff = 0.85):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(150)
self.input_dim = input_dim
self.hidden_dim = hidden_dim
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
self.input_stack_dim = self.input_dim + self.hidden_dim
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim1 = Xavier(self.input_dim,self.hidden_dim)
self.W_f = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim))
self.B_f = np.random.uniform(-lim1,lim1,(1,self.hidden_dim))
self.W_i = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim))
self.B_i = np.random.uniform(-lim1,lim1,(1,self.hidden_dim))
self.W_c = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim))
self.B_c = np.random.uniform(-lim1,lim1,(1,self.hidden_dim))
self.W_o = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim))
self.B_o = np.random.uniform(-lim1,lim1,(1,self.hidden_dim))
lim2 = Xavier(self.hidden_dim,self.output_class)
self.W = np.random.uniform(-lim2,lim2,(self.hidden_dim,self.output_class))
self.B = np.random.uniform(-lim2,lim2,(1,self.output_class))
# To keep track of loss and accuracy scores:
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# To keep previous updates in momentum :
self.previous_updates = [0] * 10
# For AdaGrad:
self.cache = [0] * 10
self.cache_rmsprop = [0] * 10
self.m = [0] * 10
self.v = [0] * 10
self.t = 1
def cell_forward(self,X,h_prev,C_prev):
"""
Takes the input, previous hidden state, and previous cell state;
computes the forget gate, input gate, candidate, new cell state,
output gate, and hidden state, then classifies with softmax.
"""
# Stacking previous hidden state vector with inputs:
stack = np.column_stack([X,h_prev])
# Forget gate:
forget_gate = activations.sigmoid(np.dot(stack,self.W_f) + self.B_f)
# Input gate:
input_gate = activations.sigmoid(np.dot(stack,self.W_i) + self.B_i)
# New candidate:
cell_bar = np.tanh(np.dot(stack,self.W_c) + self.B_c)
# New Cell state:
cell_state = forget_gate * C_prev + input_gate * cell_bar
# Output gate:
output_gate = activations.sigmoid(np.dot(stack,self.W_o) + self.B_o)
# Hidden state:
hidden_state = output_gate * np.tanh(cell_state)
# Classifiers (Softmax) :
dense = np.dot(hidden_state,self.W) + self.B
probs = activations.softmax(dense)
return (stack,forget_gate,input_gate,cell_bar,cell_state,output_gate,hidden_state,dense,probs)
def forward(self,X,h_prev,C_prev):
x_s,z_s,f_s,i_s = {},{},{},{}
C_bar_s,C_s,o_s,h_s = {},{},{},{}
v_s,y_s = {},{}
h_s[-1] = np.copy(h_prev)
C_s[-1] = np.copy(C_prev)
for t in range(self.seq_len):
x_s[t] = X[:,t,:]
z_s[t], f_s[t], i_s[t], C_bar_s[t], C_s[t], o_s[t], h_s[t],v_s[t], y_s[t] = self.cell_forward(x_s[t],h_s[t-1],C_s[t-1])
return (z_s, f_s, i_s, C_bar_s, C_s, o_s, h_s,v_s, y_s)
def BPTT(self,outs,Y):
z_s, f_s, i_s, C_bar_s, C_s, o_s, h_s,v_s, y_s = outs
dW_f, dW_i,dW_c, dW_o,dW = np.zeros_like(self.W_f), np.zeros_like(self.W_i), np.zeros_like(self.W_c),np.zeros_like(self.W_o),np.zeros_like(self.W)
dB_f, dB_i,dB_c,dB_o,dB = np.zeros_like(self.B_f), np.zeros_like(self.B_i),np.zeros_like(self.B_c),np.zeros_like(self.B_o),np.zeros_like(self.B)
dh_next = np.zeros_like(h_s[0])
dC_next = np.zeros_like(C_s[0])
# Gradient w.r.t. the softmax input: probs - one-hot(Y)
ddense = np.copy(y_s[self.seq_len - 1])
ddense[np.arange(len(Y)),np.argmax(Y,1)] -= 1
# Softmax classifier's gradients:
dW = np.dot(h_s[self.seq_len - 1].T,ddense)
dB = np.sum(ddense,axis = 0, keepdims = True)
# Backprop through time:
for t in reversed(range(1,self.seq_len)):
# Unpack with more meaningful names:
stack,forget_gate,input_gate,cell_bar,cell_state,output_gate,hidden_state,dense,probs = z_s[t], f_s[t], i_s[t], C_bar_s[t], C_s[t], o_s[t], h_s[t],v_s[t], y_s[t]
C_prev = C_s[t-1]
# Output gate :
dh = np.dot(ddense,self.W.T) + dh_next
do = dh * np.tanh(cell_state)
do = do * dsigmoid(output_gate)
dW_o += np.dot(stack.T,do)
dB_o += np.sum(do,axis = 0, keepdims = True)
# Cell state:
dC = np.copy(dC_next)
dC += dh * output_gate * activations.dtanh(cell_state)
dC_bar = dC * input_gate
dC_bar = dC_bar * dtanh(cell_bar)
dW_c += np.dot(stack.T,dC_bar)
dB_c += np.sum(dC_bar,axis = 0, keepdims = True)
# Input gate:
di = dC * cell_bar
di = dsigmoid(input_gate) * di
dW_i += np.dot(stack.T,di)
dB_i += np.sum(di,axis = 0,keepdims = True)
# Forget gate:
df = dC * C_prev
df = df * dsigmoid(forget_gate)
dW_f += np.dot(stack.T,df)
dB_f += np.sum(df,axis = 0, keepdims = True)
dz = np.dot(df,self.W_f.T) + np.dot(di,self.W_i.T) + np.dot(dC_bar,self.W_c.T) + np.dot(do,self.W_o.T)
dh_next = dz[:,-self.hidden_dim:]
dC_next = forget_gate * dC
# List of gradients :
grads = [dW,dB,dW_o,dB_o,dW_c,dB_c,dW_i,dB_i,dW_f,dB_f]
# Clipping gradients anyway
for grad in grads:
np.clip(grad, -15, 15, out = grad)
return h_s[self.seq_len - 1],C_s[self.seq_len -1 ],grads
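`np.clip(..., out=grad)` in the loop above clips each gradient array in place, so the entries of `grads` themselves end up bounded to [-15, 15]. A small sketch of that in-place behavior on toy arrays:

```python
import numpy as np

# np.clip with out=grad modifies the array in place, so the list
# entries themselves are clipped -- same pattern as the end of BPTT.
grads = [np.array([-40.0, 3.0, 99.0]), np.array([14.9, -15.1])]
for grad in grads:
    np.clip(grad, -15, 15, out=grad)
# grads[0] is now [-15.0, 3.0, 15.0]; grads[1] is [14.9, -15.0]
```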
def fit(self,X,Y,X_val,y_val,epochs = 50 ,optimizer = 'SGD',verbose = True, crossVal = False):
"""
Given the training dataset, its labels, and the number of epochs,
fit the model and measure performance by evaluating on the
training and validation sets.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
h_prev,C_prev = np.zeros((self.batch_size,self.hidden_dim)),np.zeros((self.batch_size,self.hidden_dim))
for i in range(round(X.shape[0]/self.batch_size) - 1):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
# Feeding random indexes:
X_feed = X[index]
y_feed = Y[index]
# Forward + BPTT + SGD:
cache_train = self.forward(X_feed,h_prev,C_prev)
h,c,grads = self.BPTT(cache_train,y_feed)
if optimizer == 'SGD':
self.SGD(grads)
elif optimizer == 'AdaGrad' :
self.AdaGrad(grads)
elif optimizer == 'RMSprop':
self.RMSprop(grads)
elif optimizer == 'VanillaAdam':
self.VanillaAdam(grads)
else:
self.Adam(grads)
# Hidden state -------> Previous hidden state
# Cell state ---------> Previous cell state
h_prev,C_prev = h,c
# Training metrics calculations:
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[8][self.seq_len - 1])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
# Validation metrics calculations:
test_prevs = np.zeros((X_val.shape[0],self.hidden_dim))
*_, probs_test = self.forward(X_val,test_prevs,test_prevs)
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[self.seq_len - 1])
predictions_val = np.argmax(probs_test[self.seq_len - 1],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def params(self):
"""
Return all weights and biases as a list, ordered from the output layer backwards.
"""
return [self.W,self.B,self.W_o,self.B_o,self.W_c,self.B_c,self.W_i,self.B_i,self.W_f,self.B_f]
def SGD(self,grads):
"""
Stochastic gradient descent with momentum on mini-batches.
"""
prevs = []
for param,grad,prev_update in zip(self.params(),grads,self.previous_updates):
# Classical momentum: v_t = mom * v_{t-1} + lr * grad, applied as param -= v_t.
delta = self.learning_rate * grad + self.mom_coeff * prev_update
param -= delta
prevs.append(delta)
self.previous_updates = prevs
self.learning_rate *= 0.99999
def AdaGrad(self,grads):
"""
AdaGrad adaptive optimization algorithm.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache[i] += grad **2
param += -self.learning_rate * grad / (np.sqrt(self.cache[i]) + 1e-6)
i += 1
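AdaGrad divides each step by the square root of the accumulated squared gradients, so with a constant gradient the effective step size decays like 1/sqrt(k) over updates. A toy scalar illustration (hypothetical numbers):

```python
import numpy as np

# With a constant gradient g, AdaGrad's effective step after k updates is
# lr * g / sqrt(k * g**2) = lr / sqrt(k): it shrinks like 1/sqrt(k).
lr, g = 0.1, 2.0
cache, steps = 0.0, []
for _ in range(3):
    cache += g ** 2                              # accumulate squared gradients
    steps.append(lr * g / (np.sqrt(cache) + 1e-6))
# steps decay: ~0.1, ~0.1/sqrt(2), ~0.1/sqrt(3)
```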
def RMSprop(self,grads,decay_rate = 0.9):
"""
RMSprop adaptive optimization algorithm
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache_rmsprop[i] = decay_rate * self.cache_rmsprop[i] + (1-decay_rate) * grad **2
param += - self.learning_rate * grad / (np.sqrt(self.cache_rmsprop[i])+ 1e-6)
i += 1
def VanillaAdam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, but bias correction is not implemented
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
param += -self.learning_rate * self.m[i] / (np.sqrt(self.v[i]) + 1e-8)
i += 1
def Adam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, bias correction is implemented.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
m_corrected = self.m[i] / (1-beta1**self.t)
v_corrected = self.v[i] / (1-beta2**self.t)
param += -self.learning_rate * m_corrected / (np.sqrt(v_corrected) + 1e-8)
i += 1
self.t +=1
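The bias correction matters most early in training: on the very first step, `m_hat = g` and `v_hat = g**2`, so the update magnitude is roughly `learning_rate` regardless of the gradient's raw scale. A scalar sketch (hypothetical values):

```python
import numpy as np

# First Adam step with bias correction, for a tiny and a large gradient:
lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8
first_steps = []
for g in (0.001, 10.0):
    m = (1 - b1) * g                 # first-moment estimate at t = 1
    v = (1 - b2) * g ** 2            # second-moment estimate at t = 1
    m_hat = m / (1 - b1 ** 1)        # bias correction undoes the (1 - b) factor
    v_hat = v / (1 - b2 ** 1)
    first_steps.append(lr * m_hat / (np.sqrt(v_hat) + eps))
# both first steps are ~lr, independent of |g|
```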
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes cross entropy between labels and model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
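A quick sanity check of `CategoricalCrossEntropy` on hypothetical one-hot labels, using the same clipping and epsilon as the method: for one-hot labels the loss reduces to the mean of `-log(p_true)` over the batch.

```python
import numpy as np

labels = np.array([[0., 1., 0.], [1., 0., 0.]])          # one-hot targets
preds = np.array([[0.2, 0.7, 0.1], [0.5, 0.25, 0.25]])   # softmax outputs
clipped = np.clip(preds, 1e-12, 1. - 1e-12)
loss = -np.sum(labels * np.log(clipped + 1e-9)) / preds.shape[0]
# equals (-log(0.7) - log(0.5)) / 2 up to the tiny epsilon
```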
def predict(self,X):
"""
Return class predictions (not one-hot encoded).
"""
# Give zeros to hidden/cell states:
pasts = np.zeros((X.shape[0],self.hidden_dim))
*_, probs = self.forward(X,pasts,pasts)
return np.argmax(probs[self.seq_len - 1],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
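The `ddense` computation in `BPTT` uses the standard softmax-plus-cross-entropy shortcut: copying the probabilities and subtracting 1 at the true-class column is exactly `probs - Y` when the labels are one-hot. A minimal check:

```python
import numpy as np

probs = np.array([[0.2, 0.5, 0.3], [0.6, 0.1, 0.3]])  # softmax outputs
Y = np.array([[0., 1., 0.], [1., 0., 0.]])            # one-hot labels
ddense = np.copy(probs)
ddense[np.arange(len(Y)), np.argmax(Y, 1)] -= 1       # subtract 1 at true class
assert np.allclose(ddense, probs - Y)
```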
# %%
lstm = LSTM(learning_rate = 5e-4,mom_coeff = 0.0,batch_size = 32,hidden_dim=128)
# %%
lstm.fit(X_train,y_train,X_test,y_test,epochs = 15,optimizer='SGD')
# %%
lstm_history = lstm.history()
# %%
train_preds_lstm = lstm.predict(X_train)
test_preds_lstm = lstm.predict(X_test)
confusion_mat_train_lstm = metrics.confusion_matrix(np.argmax(y_train,1),train_preds_lstm)
confusion_mat_test_lstm = metrics.confusion_matrix(np.argmax(y_test,1),test_preds_lstm)
body_movements = ['downstairs','jogging','sitting','standing','upstairs','walking']
confusion_mat_train_lstm.columns = body_movements
confusion_mat_train_lstm.index = body_movements
confusion_mat_test_lstm.columns = body_movements
confusion_mat_test_lstm.index = body_movements
sns.heatmap(confusion_mat_train_lstm/np.sum(confusion_mat_train_lstm), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
sns.heatmap(confusion_mat_test_lstm/np.sum(confusion_mat_test_lstm), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
lstm2 = LSTM(learning_rate = 2e-3,mom_coeff = 0.0,batch_size = 32,hidden_dim=128)
lstm2.fit(X_train,y_train,X_test,y_test,epochs = 15,optimizer='RMSprop')
# %%
lstm2_history = lstm2.history()
# %%
lstm3 = LSTM(learning_rate = 3e-3,mom_coeff = 0.0,batch_size = 32,hidden_dim=128)
lstm3.fit(X_train,y_train,X_test,y_test,epochs = 15,optimizer='Adam')
# %%
lstm4 = LSTM(learning_rate = 1e-3,mom_coeff = 0.0,batch_size = 32,hidden_dim=128)
lstm4.fit(X_train,y_train,X_test,y_test,epochs = 15,optimizer='AdaGrad')
# %%
lstm5 = LSTM(learning_rate = 1e-3,mom_coeff = 0.0,batch_size = 32,hidden_dim=128)
lstm5.fit(X_train,y_train,X_test,y_test,epochs = 15,optimizer='VanillaAdam')
# %%
lstm3_history = lstm3.history()
lstm4_history = lstm4.history()
lstm5_history = lstm5.history()
plt.figure()
plt.plot(lstm_history['TrainAcc'],'-o')
plt.plot(lstm2_history['TrainAcc'],'-o')
plt.plot(lstm3_history['TrainAcc'],'-o')
plt.plot(lstm4_history['TrainAcc'],'-o')
plt.plot(lstm5_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['SGD','RMSprop','Adam','AdaGrad','Vanilla Adam'])
plt.show()
plt.figure()
plt.plot(lstm_history['TestAcc'],'-o')
plt.plot(lstm2_history['TestAcc'],'-o')
plt.plot(lstm3_history['TestAcc'],'-o')
plt.plot(lstm4_history['TestAcc'],'-o')
plt.plot(lstm5_history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['SGD','RMSprop','Adam','AdaGrad','Vanilla Adam'])
plt.show()
plt.figure()
plt.plot(lstm_history['TrainLoss'],'-o')
plt.plot(lstm2_history['TrainLoss'],'-o')
plt.plot(lstm3_history['TrainLoss'],'-o')
plt.plot(lstm4_history['TrainLoss'],'-o')
plt.plot(lstm5_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Training Loss over epochs')
plt.legend(['SGD','RMSprop','Adam','AdaGrad','Vanilla Adam'])
plt.show()
plt.figure()
plt.plot(lstm_history['TestLoss'],'-o')
plt.plot(lstm2_history['TestLoss'],'-o')
plt.plot(lstm3_history['TestLoss'],'-o')
plt.plot(lstm4_history['TestLoss'],'-o')
plt.plot(lstm5_history['TestLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.legend(['SGD','RMSprop','Adam','AdaGrad','Vanilla Adam'])
plt.show()
# %%
three_layer_rnn_v2_history = three_layer_rnn_v2.history()
plt.figure()
plt.plot(three_layer_rnn_v2_history['TrainAcc'],'-o')
plt.plot(lstm_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['Best RNN','Best LSTM'])
plt.show()
plt.figure()
plt.plot(three_layer_rnn_v2_history['TestAcc'],'-o')
plt.plot(lstm_history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['Best RNN','Best LSTM'])
plt.show()
plt.figure()
plt.plot(three_layer_rnn_v2_history['TrainLoss'],'-o')
plt.plot(lstm_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Training Loss over epochs')
plt.legend(['Best RNN','Best LSTM'])
plt.show()
plt.figure()
plt.plot(three_layer_rnn_v2_history['TestLoss'],'-o')
plt.plot(lstm_history['TestLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.legend(['Best RNN','Best LSTM'])
plt.show()
# %%
train_preds_lstm = lstm3.predict(X_train)
test_preds_lstm = lstm3.predict(X_test)
confusion_mat_train_lstm = metrics.confusion_matrix(np.argmax(y_train,1),train_preds_lstm)
confusion_mat_test_lstm = metrics.confusion_matrix(np.argmax(y_test,1),test_preds_lstm)
body_movements = ['downstairs','jogging','sitting','standing','upstairs','walking']
confusion_mat_train_lstm.columns = body_movements
confusion_mat_train_lstm.index = body_movements
confusion_mat_test_lstm.columns = body_movements
confusion_mat_test_lstm.index = body_movements
sns.heatmap(confusion_mat_train_lstm/np.sum(confusion_mat_train_lstm), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
sns.heatmap(confusion_mat_test_lstm/np.sum(confusion_mat_test_lstm), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
# %%
class Multi_Layer_LSTM(object):
"""
Long Short-Term Memory (LSTM) recurrent neural network with an extra dense hidden layer before the classifier. Encapsulates the network's hyperparameters, architecture, and all logic needed for training.
"""
def __init__(self,input_dim = 3,hidden_dim_1 = 128,hidden_dim_2 =64,output_class = 6,seq_len = 150,batch_size = 30,learning_rate = 1e-1,mom_coeff = 0.85):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(150)
self.input_dim = input_dim
self.hidden_dim_1 = hidden_dim_1
self.hidden_dim_2 = hidden_dim_2
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
self.input_stack_dim = self.input_dim + self.hidden_dim_1
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim1 = Xavier(self.input_dim,self.hidden_dim_1)
self.W_f = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim_1))
self.B_f = np.random.uniform(-lim1,lim1,(1,self.hidden_dim_1))
self.W_i = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim_1))
self.B_i = np.random.uniform(-lim1,lim1,(1,self.hidden_dim_1))
self.W_c = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim_1))
self.B_c = np.random.uniform(-lim1,lim1,(1,self.hidden_dim_1))
self.W_o = np.random.uniform(-lim1,lim1,(self.input_stack_dim,self.hidden_dim_1))
self.B_o = np.random.uniform(-lim1,lim1,(1,self.hidden_dim_1))
lim2 = Xavier(self.hidden_dim_1,self.hidden_dim_2)
self.W_hid = np.random.uniform(-lim2,lim2,(self.hidden_dim_1,self.hidden_dim_2))
self.B_hid = np.random.uniform(-lim2,lim2,(1,self.hidden_dim_2))
lim3 = Xavier(self.hidden_dim_2,self.output_class)
self.W = np.random.uniform(-lim3,lim3,(self.hidden_dim_2,self.output_class))
self.B = np.random.uniform(-lim3,lim3,(1,self.output_class))
# To keep track of loss and accuracy scores:
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# To keep previous updates in momentum :
self.previous_updates = [0] * 13
# For AdaGrad:
self.cache = [0] * 13
self.cache_rmsprop = [0] * 13
self.m = [0] * 13
self.v = [0] * 13
self.t = 1
def cell_forward(self,X,h_prev,C_prev):
"""
Takes the input, previous hidden state, and previous cell state;
computes the forget gate, input gate, candidate, new cell state,
output gate, and hidden state, then classifies with softmax.
"""
# Stacking previous hidden state vector with inputs:
stack = np.column_stack([X,h_prev])
# Forget gate:
forget_gate = activations.sigmoid(np.dot(stack,self.W_f) + self.B_f)
# Input gate:
input_gate = activations.sigmoid(np.dot(stack,self.W_i) + self.B_i)
# New candidate:
cell_bar = np.tanh(np.dot(stack,self.W_c) + self.B_c)
# New Cell state:
cell_state = forget_gate * C_prev + input_gate * cell_bar
# Output gate:
output_gate = activations.sigmoid(np.dot(stack,self.W_o) + self.B_o)
# Hidden state:
hidden_state = output_gate * np.tanh(cell_state)
# Classifiers (Softmax) :
dense_hid = np.dot(hidden_state,self.W_hid) + self.B_hid
act = activations.ReLU(dense_hid)
dense = np.dot(act,self.W) + self.B
probs = activations.softmax(dense)
return (stack,forget_gate,input_gate,cell_bar,cell_state,output_gate,hidden_state,dense,probs,dense_hid,act)
def forward(self,X,h_prev,C_prev):
x_s,z_s,f_s,i_s = {},{},{},{}
C_bar_s,C_s,o_s,h_s = {},{},{},{}
v_s,y_s,v_1s,y_1s = {},{},{},{}
h_s[-1] = np.copy(h_prev)
C_s[-1] = np.copy(C_prev)
for t in range(self.seq_len):
x_s[t] = X[:,t,:]
z_s[t], f_s[t], i_s[t], C_bar_s[t], C_s[t], o_s[t], h_s[t],v_s[t], y_s[t],v_1s[t],y_1s[t] = self.cell_forward(x_s[t],h_s[t-1],C_s[t-1])
return (z_s, f_s, i_s, C_bar_s, C_s, o_s, h_s,v_s, y_s,v_1s,y_1s)
def BPTT(self,outs,Y):
z_s, f_s, i_s, C_bar_s, C_s, o_s, h_s,v_s, y_s,v_1s,y_1s = outs
dW_f, dW_i,dW_c, dW_o,dW,dW_hid = np.zeros_like(self.W_f), np.zeros_like(self.W_i), np.zeros_like(self.W_c),np.zeros_like(self.W_o),np.zeros_like(self.W),np.zeros_like(self.W_hid)
dB_f, dB_i,dB_c,dB_o,dB,dB_hid = np.zeros_like(self.B_f), np.zeros_like(self.B_i),np.zeros_like(self.B_c),np.zeros_like(self.B_o),np.zeros_like(self.B),np.zeros_like(self.B_hid)
dh_next = np.zeros_like(h_s[0])
dC_next = np.zeros_like(C_s[0])
# Gradient w.r.t. the softmax input: probs - one-hot(Y)
ddense = np.copy(y_s[self.seq_len - 1])
ddense[np.arange(len(Y)),np.argmax(Y,1)] -= 1
# Softmax classifier's gradients. dW uses the ReLU activations y_1s
# (the layer's input), not the pre-activations v_1s:
dW = np.dot(y_1s[self.seq_len - 1].T,ddense)
dB = np.sum(ddense,axis = 0, keepdims = True)
ddense_hid = np.dot(ddense,self.W.T) * activations.dReLU(v_1s[self.seq_len - 1])
dW_hid = np.dot(h_s[self.seq_len - 1].T,ddense_hid)
dB_hid = np.sum(ddense_hid,axis = 0, keepdims = True)
# Backprop through time:
for t in reversed(range(1,self.seq_len)):
# Unpack with more meaningful names:
stack,forget_gate,input_gate,cell_bar,cell_state,output_gate,hidden_state,dense,probs = z_s[t], f_s[t], i_s[t], C_bar_s[t], C_s[t], o_s[t], h_s[t],v_s[t], y_s[t]
C_prev = C_s[t-1]
# Output gate :
dh = np.dot(ddense_hid,self.W_hid.T) + dh_next
do = dh * np.tanh(cell_state)
do = do * dsigmoid(output_gate)
dW_o += np.dot(stack.T,do)
dB_o += np.sum(do,axis = 0, keepdims = True)
# Cell state:
dC = np.copy(dC_next)
dC += dh * output_gate * activations.dtanh(cell_state)
dC_bar = dC * input_gate
dC_bar = dC_bar * dtanh(cell_bar)
dW_c += np.dot(stack.T,dC_bar)
dB_c += np.sum(dC_bar,axis = 0, keepdims = True)
# Input gate:
di = dC * cell_bar
di = dsigmoid(input_gate) * di
dW_i += np.dot(stack.T,di)
dB_i += np.sum(di,axis = 0,keepdims = True)
# Forget gate:
df = dC * C_prev
df = df * dsigmoid(forget_gate)
dW_f += np.dot(stack.T,df)
dB_f += np.sum(df,axis = 0, keepdims = True)
dz = np.dot(df,self.W_f.T) + np.dot(di,self.W_i.T) + np.dot(dC_bar,self.W_c.T) + np.dot(do,self.W_o.T)
dh_next = dz[:,-self.hidden_dim_1:]
dC_next = forget_gate * dC
# List of gradients :
grads = [dW,dB,dW_hid,dB_hid,dW_o,dB_o,dW_c,dB_c,dW_i,dB_i,dW_f,dB_f]
# Clipping gradients anyway
for grad in grads:
np.clip(grad, -15, 15, out = grad)
return h_s[self.seq_len - 1],C_s[self.seq_len -1 ],grads
def fit(self,X,Y,X_val,y_val,epochs = 50 ,optimizer = 'SGD',verbose = True, crossVal = False):
"""
Given the training dataset, its labels, and the number of epochs,
fit the model and measure performance by evaluating on the
training and validation sets.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
h_prev,C_prev = np.zeros((self.batch_size,self.hidden_dim_1)),np.zeros((self.batch_size,self.hidden_dim_1))
for i in range(round(X.shape[0]/self.batch_size) - 1):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
# Feeding random indexes:
X_feed = X[index]
y_feed = Y[index]
# Forward + BPTT + SGD:
cache_train = self.forward(X_feed,h_prev,C_prev)
h,c,grads = self.BPTT(cache_train,y_feed)
if optimizer == 'SGD':
self.SGD(grads)
elif optimizer == 'AdaGrad' :
self.AdaGrad(grads)
elif optimizer == 'RMSprop':
self.RMSprop(grads)
elif optimizer == 'VanillaAdam':
self.VanillaAdam(grads)
else:
self.Adam(grads)
# Hidden state -------> Previous hidden state
# Cell state ---------> Previous cell state
h_prev,C_prev = h,c
# Training metrics calculations:
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[8][self.seq_len - 1])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
# Validation metrics calculations:
test_prevs = np.zeros((X_val.shape[0],self.hidden_dim_1))
probs_test = self.forward(X_val,test_prevs,test_prevs)[8]  # y_s: per-timestep softmax outputs
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[self.seq_len - 1])
predictions_val = np.argmax(probs_test[self.seq_len - 1],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def params(self):
"""
Return all weights and biases as a list, ordered from the output layer backwards.
"""
return [self.W,self.B,self.W_hid,self.B_hid,self.W_o,self.B_o,self.W_c,self.B_c,self.W_i,self.B_i,self.W_f,self.B_f]
def SGD(self,grads):
"""
Stochastic gradient descent with momentum on mini-batches.
"""
prevs = []
for param,grad,prev_update in zip(self.params(),grads,self.previous_updates):
# Classical momentum: v_t = mom * v_{t-1} + lr * grad, applied as param -= v_t.
delta = self.learning_rate * grad + self.mom_coeff * prev_update
param -= delta
prevs.append(delta)
self.previous_updates = prevs
self.learning_rate *= 0.99999
def AdaGrad(self,grads):
"""
AdaGrad adaptive optimization algorithm.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache[i] += grad **2
param += -self.learning_rate * grad / (np.sqrt(self.cache[i]) + 1e-6)
i += 1
def RMSprop(self,grads,decay_rate = 0.9):
"""
RMSprop adaptive optimization algorithm
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache_rmsprop[i] = decay_rate * self.cache_rmsprop[i] + (1-decay_rate) * grad **2
param += - self.learning_rate * grad / (np.sqrt(self.cache_rmsprop[i])+ 1e-6)
i += 1
def VanillaAdam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, but bias correction is not implemented
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
param += -self.learning_rate * self.m[i] / (np.sqrt(self.v[i]) + 1e-8)
i += 1
def Adam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, bias correction is implemented.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
m_corrected = self.m[i] / (1-beta1**self.t)
v_corrected = self.v[i] / (1-beta2**self.t)
param += -self.learning_rate * m_corrected / (np.sqrt(v_corrected) + 1e-8)
i += 1
self.t +=1
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes cross entropy between labels and model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
def predict(self,X):
"""
Return class predictions (not one-hot encoded).
"""
# Give zeros to hidden/cell states:
pasts = np.zeros((X.shape[0],self.hidden_dim_1))
probs = self.forward(X,pasts,pasts)[8]  # y_s: per-timestep softmax outputs
return np.argmax(probs[self.seq_len - 1],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
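A standalone shape check of the gating pattern used by `cell_forward`, with hypothetical dimensions and freshly drawn weights (not the class's parameters): input and previous hidden state are stacked once, and every gate is a matrix product against that stack.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, input_dim, hidden = 4, 3, 8              # hypothetical sizes
sigmoid = lambda x: 1 / (1 + np.exp(-x))

X = rng.standard_normal((batch, input_dim))
h_prev = np.zeros((batch, hidden))
C_prev = np.zeros((batch, hidden))

stack = np.column_stack([X, h_prev])            # (batch, input_dim + hidden)
shape = (input_dim + hidden, hidden)
W_f, W_i, W_c, W_o = (rng.standard_normal(shape) * 0.1 for _ in range(4))

f = sigmoid(stack @ W_f)                        # forget gate
i = sigmoid(stack @ W_i)                        # input gate
c_bar = np.tanh(stack @ W_c)                    # candidate cell values
C = f * C_prev + i * c_bar                      # new cell state
h = sigmoid(stack @ W_o) * np.tanh(C)           # new hidden state, |h| < 1
```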
# %%
mutl_layer_lstm = Multi_Layer_LSTM(learning_rate=1e-3,batch_size=32,hidden_dim_1 = 128,hidden_dim_2=64,mom_coeff=0.0)
mutl_layer_lstm.fit(X_train,y_train,X_test,y_test,epochs=15,optimizer='Adam')
# %%
mutl_layer_lstm_history = mutl_layer_lstm.history()
plt.figure()
plt.plot(mutl_layer_lstm_history['TrainAcc'],'-o')
plt.plot(lstm_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['Multi Layer LSTM','LSTM'])
plt.show()
plt.figure()
plt.plot(mutl_layer_lstm_history['TestAcc'],'-o')
plt.plot(lstm_history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['Multi Layer LSTM','LSTM'])
plt.show()
plt.figure()
plt.plot(mutl_layer_lstm_history['TrainLoss'],'-o')
plt.plot(lstm_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Training Loss over epochs')
plt.legend(['Multi Layer LSTM','LSTM'])
plt.show()
plt.figure()
plt.plot(mutl_layer_lstm_history['TestLoss'],'-o')
plt.plot(lstm_history['TestLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.legend(['Multi Layer LSTM','LSTM'])
plt.show()
# %%
mutl_layer_lstm.fit(X_train,y_train,X_test,y_test,epochs=15,optimizer = 'VanillaAdam')
# %%
mutl_layer_lstm_history = mutl_layer_lstm.history()
# %%
plt.figure()
plt.plot(mutl_layer_lstm_history['TestLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.show()
plt.figure()
plt.plot(mutl_layer_lstm_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.show()
plt.figure()
plt.plot(mutl_layer_lstm_history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.show()
plt.figure()
plt.plot(mutl_layer_lstm_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.show()
# %%
class GRU(object):
"""
Gated recurrent unit; encapsulates the hyperparameters, architecture and all logic required to build and train the network.
"""
def __init__(self,input_dim = 3,hidden_dim = 128,output_class = 6,seq_len = 150,batch_size = 32,learning_rate = 1e-1,mom_coeff = 0.85):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(32)
self.input_dim = input_dim
self.hidden_dim = hidden_dim
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim1 = Xavier(self.input_dim,self.hidden_dim)
lim1_hid = Xavier(self.hidden_dim,self.hidden_dim)
self.W_z = np.random.uniform(-lim1,lim1,(self.input_dim,self.hidden_dim))
self.U_z = np.random.uniform(-lim1_hid,lim1_hid,(self.hidden_dim,self.hidden_dim))
self.B_z = np.random.uniform(-lim1,lim1,(1,self.hidden_dim))
self.W_r = np.random.uniform(-lim1,lim1,(self.input_dim,self.hidden_dim))
self.U_r = np.random.uniform(-lim1_hid,lim1_hid,(self.hidden_dim,self.hidden_dim))
self.B_r = np.random.uniform(-lim1,lim1,(1,self.hidden_dim))
self.W_h = np.random.uniform(-lim1,lim1,(self.input_dim,self.hidden_dim))
self.U_h = np.random.uniform(-lim1_hid,lim1_hid,(self.hidden_dim,self.hidden_dim))
self.B_h = np.random.uniform(-lim1,lim1,(1,self.hidden_dim))
lim2 = Xavier(self.hidden_dim,self.output_class)
self.W = np.random.uniform(-lim2,lim2,(self.hidden_dim,self.output_class))
self.B = np.random.uniform(-lim2,lim2,(1,self.output_class))
# To keep track of loss and accuracy scores:
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# To keep previous updates in momentum :
self.previous_updates = [0] * 11
# For AdaGrad:
self.cache = [0] * 11
self.cache_rmsprop = [0] * 11
self.m = [0] * 11
self.v = [0] * 11
self.t = 1
def cell_forward(self,X,h_prev):
"""
Takes input, previous hidden state and previous cell state, compute:
--- Forget gate + Input gate + New candidate input + New cell state +
output gate + hidden state. Then, classify by softmax.
"""
# Update gate:
update_gate = activations.sigmoid(np.dot(X,self.W_z) + np.dot(h_prev,self.U_z) + self.B_z)
# Reset gate:
reset_gate = activations.sigmoid(np.dot(X,self.W_r) + np.dot(h_prev,self.U_r) + self.B_r)
# Current memory content:
h_hat = np.tanh(np.dot(X,self.W_h) + np.dot(np.multiply(reset_gate,h_prev),self.U_h) + self.B_h)
# Hidden state:
hidden_state = np.multiply(update_gate,h_prev) + np.multiply((1-update_gate),h_hat)
# Classifiers (Softmax) :
dense = np.dot(hidden_state,self.W) + self.B
probs = activations.softmax(dense)
return (update_gate,reset_gate,h_hat,hidden_state,dense,probs)
def forward(self,X,h_prev):
x_s,z_s,r_s,h_hat = {},{},{},{}
h_s = {}
y_s,p_s = {},{}
h_s[-1] = np.copy(h_prev)
for t in range(self.seq_len):
x_s[t] = X[:,t,:]
z_s[t], r_s[t], h_hat[t], h_s[t], y_s[t], p_s[t] = self.cell_forward(x_s[t],h_s[t-1])
return (x_s,z_s, r_s, h_hat, h_s, y_s, p_s)
def BPTT(self,outs,Y):
x_s,z_s, r_s, h_hat, h_s, y_s, p_s = outs
dW_z, dW_r,dW_h, dW = np.zeros_like(self.W_z), np.zeros_like(self.W_r), np.zeros_like(self.W_h),np.zeros_like(self.W)
dU_z, dU_r,dU_h, = np.zeros_like(self.U_z), np.zeros_like(self.U_r), np.zeros_like(self.U_h)
dB_z, dB_r,dB_h,dB = np.zeros_like(self.B_z), np.zeros_like(self.B_r),np.zeros_like(self.B_h),np.zeros_like(self.B)
dh_next = np.zeros_like(h_s[0])
# Gradient w.r.t. the softmax input at the last time step:
ddense = np.copy(p_s[self.seq_len - 1])
ddense[np.arange(len(Y)),np.argmax(Y,1)] -= 1
# Softmax classifier's weights/bias:
dW = np.dot(h_s[self.seq_len - 1].T,ddense)
dB = np.sum(ddense,axis = 0, keepdims = True)
# Backprop through time:
for t in reversed(range(1,self.seq_len)):
# Current memory state:
dh = np.dot(ddense,self.W.T) + dh_next
dh_hat = dh * (1-z_s[t])
dh_hat = dh_hat * dtanh(h_hat[t])
dW_h += np.dot(x_s[t].T,dh_hat)
dU_h += np.dot((r_s[t] * h_s[t-1]).T,dh_hat)
dB_h += np.sum(dh_hat,axis = 0, keepdims = True)
# Reset gate:
dr_1 = np.dot(dh_hat,self.U_h.T)
dr = dr_1 * h_s[t-1]
dr = dr * dsigmoid(r_s[t])
dW_r += np.dot(x_s[t].T,dr)
dU_r += np.dot(h_s[t-1].T,dr)
dB_r += np.sum(dr,axis = 0, keepdims = True)
# Update gate:
dz = dh * (h_s[t-1] - h_hat[t])
dz = dz * dsigmoid(z_s[t])
dW_z += np.dot(x_s[t].T,dz)
dU_z += np.dot(h_s[t-1].T,dz)
dB_z += np.sum(dz,axis = 0, keepdims = True)
# Nexts:
dh_next = np.dot(dz,self.U_z.T) + (dh * z_s[t]) + (dr_1 * r_s[t]) + np.dot(dr,self.U_r.T)
# List of gradients :
grads = [dW,dB,dW_z,dU_z,dB_z,dW_r,dU_r,dB_r,dW_h,dU_h,dB_h]
# Clipping gradients anyway
for grad in grads:
np.clip(grad, -15, 15, out = grad)
return h_s[self.seq_len - 1],grads
def fit(self,X,Y,X_val,y_val,epochs = 50 ,optimizer = 'SGD',verbose = True, crossVal = False):
"""
Given the training dataset, its labels and the number of epochs,
fits the model and measures performance on the
validation dataset after every epoch.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
h_prev = np.zeros((self.batch_size,self.hidden_dim))
for i in range(round(X.shape[0]/self.batch_size) - 1):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
# Feeding random indexes:
X_feed = X[index]
y_feed = Y[index]
# Forward + BPTT + SGD:
cache_train = self.forward(X_feed,h_prev)
h,grads = self.BPTT(cache_train,y_feed)
if optimizer == 'SGD':
self.SGD(grads)
elif optimizer == 'AdaGrad' :
self.AdaGrad(grads)
elif optimizer == 'RMSprop':
self.RMSprop(grads)
elif optimizer == 'VanillaAdam':
self.VanillaAdam(grads)
else:
self.Adam(grads)
# Hidden state -------> Previous hidden state
h_prev= h
# Training metrics calculations:
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[6][self.seq_len - 1])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
# Validation metrics calculations:
test_prevs = np.zeros((X_val.shape[0],self.hidden_dim))
*_, probs_test = self.forward(X_val,test_prevs)
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[self.seq_len - 1])
predictions_val = np.argmax(probs_test[self.seq_len - 1],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def params(self):
"""
Return all weights/biases as a list, ordered from the output layer backwards.
"""
return [self.W,self.B,self.W_z,self.U_z,self.B_z,self.W_r,self.U_r,self.B_r,self.W_h,self.U_h,self.B_h]
def SGD(self,grads):
"""
Stochastic gradient descent with momentum on mini-batches.
"""
prevs = []
for param,grad,prev_update in zip(self.params(),grads,self.previous_updates):
delta = self.learning_rate * grad + self.mom_coeff * prev_update
param -= delta
prevs.append(delta)
self.previous_updates = prevs
self.learning_rate *= 0.99999
def AdaGrad(self,grads):
"""
AdaGrad adaptive optimization algorithm.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache[i] += grad **2
param += -self.learning_rate * grad / (np.sqrt(self.cache[i]) + 1e-6)
i += 1
def RMSprop(self,grads,decay_rate = 0.9):
"""
RMSprop adaptive optimization algorithm
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache_rmsprop[i] = decay_rate * self.cache_rmsprop[i] + (1-decay_rate) * grad **2
param += - self.learning_rate * grad / (np.sqrt(self.cache_rmsprop[i])+ 1e-6)
i += 1
def VanillaAdam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, but bias correction is not implemented
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
param += -self.learning_rate * self.m[i] / (np.sqrt(self.v[i]) + 1e-8)
i += 1
def Adam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, bias correction is implemented.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
m_corrected = self.m[i] / (1-beta1**self.t)
v_corrected = self.v[i] / (1-beta2**self.t)
param += -self.learning_rate * m_corrected / (np.sqrt(v_corrected) + 1e-8)
i += 1
self.t += 1
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes cross entropy between labels and model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
def predict(self,X):
"""
Return class predictions (not in one-hot encoded format).
"""
# Give zeros to hidden/cell states:
pasts = np.zeros((X.shape[0],self.hidden_dim))
*_, probs = self.forward(X,pasts)
return np.argmax(probs[self.seq_len - 1],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
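# %%
# Standalone sketch (not part of the original assignment): the GRU update
# h = z * h_prev + (1 - z) * h_hat used in cell_forward is an elementwise convex
# combination, so the new hidden state always lies between h_prev and h_hat --
# one reason GRUs keep activations (and gradients) better behaved than plain RNNs.
import numpy as np
gru_rng = np.random.default_rng(0)
demo_h_prev = np.tanh(gru_rng.standard_normal((4, 8)))
demo_h_hat = np.tanh(gru_rng.standard_normal((4, 8)))
demo_z = 1. / (1. + np.exp(-gru_rng.standard_normal((4, 8))))  # sigmoid gate in (0, 1)
demo_h = demo_z * demo_h_prev + (1 - demo_z) * demo_h_hat
assert np.all(demo_h <= np.maximum(demo_h_prev, demo_h_hat) + 1e-12)
assert np.all(demo_h >= np.minimum(demo_h_prev, demo_h_hat) - 1e-12)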
# %%
gru = GRU(hidden_dim=128,learning_rate=1e-3,batch_size=32,mom_coeff=0.0)
# %%
gru.fit(X_train,y_train,X_test,y_test,epochs = 15,optimizer = 'RMSprop')
# %%
gru_history = gru.history()
# %%
# For figure 97:
plt.figure()
plt.plot(gru_history['TrainLoss'],'-o')
plt.plot(gru_history['TestLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Loss over epochs')
plt.legend(['Train Loss','Test Loss'])
plt.show()
plt.figure()
plt.plot(gru_history['TrainAcc'],'-o')
plt.plot(gru_history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Accuracy over epochs')
plt.legend(['Train Acc','Test Acc'])
plt.show()
# %%
# For figure 98:
multi_layer_gru_history = multi_layer_gru.history()
plt.figure()
plt.plot(multi_layer_gru_history['TrainAcc'],'-o')
plt.plot(gru_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['Multi Layer GRU','GRU'])
plt.show()
plt.figure()
plt.plot(multi_layer_gru_history['TestAcc'],'-o')
plt.plot(gru_history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['Multi Layer GRU','GRU'])
plt.show()
plt.figure()
plt.plot(multi_layer_gru_history['TrainLoss'],'-o')
plt.plot(gru_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Training Loss over epochs')
plt.legend(['Multi Layer GRU','GRU'])
plt.show()
plt.figure()
plt.plot(multi_layer_gru_history['TestLoss'],'-o')
plt.plot(gru_history['TestLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.legend(['Multi Layer GRU','GRU'])
plt.show()
# %%
# For figure 99:
three_layer_rnn_history = three_layer_rnn.history()
plt.figure()
plt.plot(gru_history['TrainAcc'],'-o')
plt.plot(lstm_history['TrainAcc'],'-o')
plt.plot(three_layer_rnn_history['TrainAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Training Accuracy over epochs')
plt.legend(['GRU','LSTM','RNN'])
plt.show()
plt.figure()
plt.plot(gru_history['TestAcc'],'-o')
plt.plot(lstm_history['TestAcc'],'-o')
plt.plot(three_layer_rnn_history['TestAcc'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Accuracy')
plt.title('Testing Accuracy over epochs')
plt.legend(['GRU','LSTM','RNN'])
plt.show()
plt.figure()
plt.plot(gru_history['TrainLoss'],'-o')
plt.plot(lstm_history['TrainLoss'],'-o')
plt.plot(three_layer_rnn_history['TrainLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Training Loss over epochs')
plt.legend(['GRU','LSTM','RNN'])
plt.show()
plt.figure()
plt.plot(gru_history['TestLoss'],'-o')
plt.plot(lstm_history['TestLoss'],'-o')
plt.plot(three_layer_rnn_history['TestLoss'],'-o')
plt.xlabel('# of epochs')
plt.ylabel('Loss')
plt.title('Testing Loss over epochs')
plt.legend(['GRU','LSTM','RNN'])
plt.show()
# %%
train_preds_gru = gru.predict(X_train)
test_preds_gru = gru.predict(X_test)
confusion_mat_train_gru = metrics.confusion_matrix(np.argmax(y_train,1),train_preds_gru)
confusion_mat_test_gru = metrics.confusion_matrix(np.argmax(y_test,1),test_preds_gru)
body_movements = ['downstairs','jogging','sitting','standing','upstairs','walking']
confusion_mat_train_gru.columns = body_movements
confusion_mat_train_gru.index = body_movements
confusion_mat_test_gru.columns = body_movements
confusion_mat_test_gru.index = body_movements
sns.heatmap(confusion_mat_train_gru/np.sum(confusion_mat_train_gru), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
sns.heatmap(confusion_mat_test_gru/np.sum(confusion_mat_test_gru), annot=True,
fmt='.2%',cmap = 'Blues')
plt.show()
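# %%
# Standalone sketch (not part of the original assignment): besides the
# percentage heatmaps above, per-class recall can be read off a confusion
# matrix as diagonal / row sums. Illustrated on a toy 3x3 matrix rather than
# the GRU results:
import numpy as np
toy_cm = np.array([[8, 1, 1], [2, 7, 1], [0, 0, 10]])
toy_recall = np.diag(toy_cm) / toy_cm.sum(axis=1)
assert np.allclose(toy_recall, [0.8, 0.7, 1.0])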
# %%
class Multi_layer_GRU(object):
"""
Gated recurrent unit; encapsulates the hyperparameters, architecture and all logic required to build and train the network.
"""
def __init__(self,input_dim = 3,hidden_dim_1 = 128,hidden_dim_2 = 64,output_class = 6,seq_len = 150,batch_size = 32,learning_rate = 1e-1,mom_coeff = 0.85):
"""
Initialization of weights/biases and other configurable parameters.
"""
np.random.seed(150)
self.input_dim = input_dim
self.hidden_dim_1 = hidden_dim_1
self.hidden_dim_2 = hidden_dim_2
# Unfold case T = 150 :
self.seq_len = seq_len
self.output_class = output_class
self.learning_rate = learning_rate
self.batch_size = batch_size
self.mom_coeff = mom_coeff
# Xavier uniform scaler :
Xavier = lambda fan_in,fan_out : math.sqrt(6/(fan_in + fan_out))
lim1 = Xavier(self.input_dim,self.hidden_dim_1)
lim1_hid = Xavier(self.hidden_dim_1,self.hidden_dim_1)
self.W_z = np.random.uniform(-lim1,lim1,(self.input_dim,self.hidden_dim_1))
self.U_z = np.random.uniform(-lim1_hid,lim1_hid,(self.hidden_dim_1,self.hidden_dim_1))
self.B_z = np.random.uniform(-lim1,lim1,(1,self.hidden_dim_1))
self.W_r = np.random.uniform(-lim1,lim1,(self.input_dim,self.hidden_dim_1))
self.U_r = np.random.uniform(-lim1_hid,lim1_hid,(self.hidden_dim_1,self.hidden_dim_1))
self.B_r = np.random.uniform(-lim1,lim1,(1,self.hidden_dim_1))
self.W_h = np.random.uniform(-lim1,lim1,(self.input_dim,self.hidden_dim_1))
self.U_h = np.random.uniform(-lim1_hid,lim1_hid,(self.hidden_dim_1,self.hidden_dim_1))
self.B_h = np.random.uniform(-lim1,lim1,(1,self.hidden_dim_1))
lim2_hid = Xavier(self.hidden_dim_1,self.hidden_dim_2)
self.W_hid = np.random.uniform(-lim2_hid,lim2_hid,(self.hidden_dim_1,self.hidden_dim_2))
self.B_hid = np.random.uniform(-lim2_hid,lim2_hid,(1,self.hidden_dim_2))
lim2 = Xavier(self.hidden_dim_2,self.output_class)
self.W = np.random.uniform(-lim2,lim2,(self.hidden_dim_2,self.output_class))
self.B = np.random.uniform(-lim2,lim2,(1,self.output_class))
# To keep track of loss and accuracy scores:
self.train_loss,self.test_loss,self.train_acc,self.test_acc = [],[],[],[]
# To keep previous updates in momentum :
self.previous_updates = [0] * 13
# For AdaGrad:
self.cache = [0] * 13
self.cache_rmsprop = [0] * 13
self.m = [0] * 13
self.v = [0] * 13
self.t = 1
def cell_forward(self,X,h_prev):
# Update gate:
update_gate = activations.sigmoid(np.dot(X,self.W_z) + np.dot(h_prev,self.U_z) + self.B_z)
# Reset gate:
reset_gate = activations.sigmoid(np.dot(X,self.W_r) + np.dot(h_prev,self.U_r) + self.B_r)
# Current memory content:
h_hat = np.tanh(np.dot(X,self.W_h) + np.dot(np.multiply(reset_gate,h_prev),self.U_h) + self.B_h)
# Hidden state:
hidden_state = np.multiply(update_gate,h_prev) + np.multiply((1-update_gate),h_hat)
# Hidden MLP:
hid_dense = np.dot(hidden_state,self.W_hid) + self.B_hid
relu = activations.ReLU(hid_dense)
# Classifiers (Softmax) :
dense = np.dot(relu,self.W) + self.B
probs = activations.softmax(dense)
return (update_gate,reset_gate,h_hat,hidden_state,hid_dense,relu,dense,probs)
def forward(self,X,h_prev):
x_s,z_s,r_s,h_hat = {},{},{},{}
h_s = {}
hd_s,relu_s = {},{}
y_s,p_s = {},{}
h_s[-1] = np.copy(h_prev)
for t in range(self.seq_len):
x_s[t] = X[:,t,:]
z_s[t], r_s[t], h_hat[t], h_s[t],hd_s[t],relu_s[t], y_s[t], p_s[t] = self.cell_forward(x_s[t],h_s[t-1])
return (x_s,z_s, r_s, h_hat, h_s, hd_s,relu_s, y_s, p_s)
def BPTT(self,outs,Y):
x_s,z_s, r_s, h_hat, h_s, hd_s,relu_s, y_s, p_s = outs
dW_z, dW_r,dW_h, dW = np.zeros_like(self.W_z), np.zeros_like(self.W_r), np.zeros_like(self.W_h),np.zeros_like(self.W)
dW_hid = np.zeros_like(self.W_hid)
dU_z, dU_r,dU_h = np.zeros_like(self.U_z), np.zeros_like(self.U_r), np.zeros_like(self.U_h)
dB_z, dB_r,dB_h,dB = np.zeros_like(self.B_z), np.zeros_like(self.B_r),np.zeros_like(self.B_h),np.zeros_like(self.B)
dB_hid = np.zeros_like(self.B_hid)
dh_next = np.zeros_like(h_s[0])
# Gradient w.r.t. the softmax input at the last time step:
ddense = np.copy(p_s[self.seq_len - 1])
ddense[np.arange(len(Y)),np.argmax(Y,1)] -= 1
# Softmax classifier's weights/bias:
dW = np.dot(relu_s[self.seq_len - 1].T,ddense)
dB = np.sum(ddense,axis = 0, keepdims = True)
# Backprop through the hidden MLP layer:
ddense_hid = np.dot(ddense,self.W.T) * activations.dReLU(hd_s[self.seq_len - 1])
dW_hid = np.dot(h_s[self.seq_len - 1].T,ddense_hid)
dB_hid = np.sum(ddense_hid,axis = 0, keepdims = True)
# Backprop through time:
for t in reversed(range(1,self.seq_len)):
# Current memory state:
dh = np.dot(ddense_hid,self.W_hid.T) + dh_next
dh_hat = dh * (1-z_s[t])
dh_hat = dh_hat * dtanh(h_hat[t])
dW_h += np.dot(x_s[t].T,dh_hat)
dU_h += np.dot((r_s[t] * h_s[t-1]).T,dh_hat)
dB_h += np.sum(dh_hat,axis = 0, keepdims = True)
# Reset gate:
dr_1 = np.dot(dh_hat,self.U_h.T)
dr = dr_1 * h_s[t-1]
dr = dr * dsigmoid(r_s[t])
dW_r += np.dot(x_s[t].T,dr)
dU_r += np.dot(h_s[t-1].T,dr)
dB_r += np.sum(dr,axis = 0, keepdims = True)
# Update gate:
dz = dh * (h_s[t-1] - h_hat[t])
dz = dz * dsigmoid(z_s[t])
dW_z += np.dot(x_s[t].T,dz)
dU_z += np.dot(h_s[t-1].T,dz)
dB_z += np.sum(dz,axis = 0, keepdims = True)
# Nexts:
dh_next = np.dot(dz,self.U_z.T) + (dh * z_s[t]) + (dr_1 * r_s[t]) + np.dot(dr,self.U_r.T)
# List of gradients :
grads = [dW,dB,dW_hid,dB_hid,dW_z,dU_z,dB_z,dW_r,dU_r,dB_r,dW_h,dU_h,dB_h]
# Clipping gradients anyway
for grad in grads:
np.clip(grad, -15, 15, out = grad)
return h_s[self.seq_len - 1],grads
def fit(self,X,Y,X_val,y_val,epochs = 50 ,optimizer = 'SGD',verbose = True, crossVal = False):
"""
Given the training dataset, its labels and the number of epochs,
fits the model and measures performance on the
validation dataset after every epoch.
"""
for epoch in range(epochs):
print(f'Epoch : {epoch + 1}')
perm = np.random.permutation(X.shape[0])
# Equate 0 in every epoch:
h_prev = np.zeros((self.batch_size,self.hidden_dim_1))
for i in range(round(X.shape[0]/self.batch_size) - 1):
batch_start = i * self.batch_size
batch_finish = (i+1) * self.batch_size
index = perm[batch_start:batch_finish]
# Feeding random indexes:
X_feed = X[index]
y_feed = Y[index]
# Forward + BPTT + Optimization:
cache_train = self.forward(X_feed,h_prev)
h,grads = self.BPTT(cache_train,y_feed)
if optimizer == 'SGD':
self.SGD(grads)
elif optimizer == 'AdaGrad' :
self.AdaGrad(grads)
elif optimizer == 'RMSprop':
self.RMSprop(grads)
elif optimizer == 'VanillaAdam':
self.VanillaAdam(grads)
else:
self.Adam(grads)
# Hidden state -------> Previous hidden state
h_prev = h
# Training metrics calculations:
cross_loss_train = self.CategoricalCrossEntropy(y_feed,cache_train[8][self.seq_len - 1])
predictions_train = self.predict(X)
acc_train = metrics.accuracy(np.argmax(Y,1),predictions_train)
# Validation metrics calculations:
test_prevs = np.zeros((X_val.shape[0],self.hidden_dim_1))
*_, probs_test = self.forward(X_val,test_prevs)
cross_loss_val = self.CategoricalCrossEntropy(y_val,probs_test[self.seq_len - 1])
predictions_val = np.argmax(probs_test[self.seq_len - 1],1)
acc_val = metrics.accuracy(np.argmax(y_val,1),predictions_val)
if verbose:
print(f"[{epoch + 1}/{epochs}] ------> Training : Accuracy : {acc_train}")
print(f"[{epoch + 1}/{epochs}] ------> Training : Loss : {cross_loss_train}")
print('______________________________________________________________________________________\n')
print(f"[{epoch + 1}/{epochs}] ------> Testing : Accuracy : {acc_val}")
print(f"[{epoch + 1}/{epochs}] ------> Testing : Loss : {cross_loss_val}")
print('______________________________________________________________________________________\n')
self.train_loss.append(cross_loss_train)
self.test_loss.append(cross_loss_val)
self.train_acc.append(acc_train)
self.test_acc.append(acc_val)
def params(self):
"""
Return all weights/biases as a list, ordered from the output layer backwards.
"""
return [self.W,self.B,self.W_hid,self.B_hid,self.W_z,self.U_z,self.B_z,self.W_r,self.U_r,self.B_r,self.W_h,self.U_h,self.B_h]
def SGD(self,grads):
"""
Stochastic gradient descent with momentum on mini-batches.
"""
prevs = []
for param,grad,prev_update in zip(self.params(),grads,self.previous_updates):
delta = self.learning_rate * grad + self.mom_coeff * prev_update
param -= delta
prevs.append(delta)
self.previous_updates = prevs
self.learning_rate *= 0.99999
def AdaGrad(self,grads):
"""
AdaGrad adaptive optimization algorithm.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache[i] += grad **2
param += -self.learning_rate * grad / (np.sqrt(self.cache[i]) + 1e-6)
i += 1
def RMSprop(self,grads,decay_rate = 0.9):
"""
RMSprop adaptive optimization algorithm
"""
i = 0
for param,grad in zip(self.params(),grads):
self.cache_rmsprop[i] = decay_rate * self.cache_rmsprop[i] + (1-decay_rate) * grad **2
param += - self.learning_rate * grad / (np.sqrt(self.cache_rmsprop[i])+ 1e-6)
i += 1
def VanillaAdam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, but bias correction is not implemented
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
param += -self.learning_rate * self.m[i] / (np.sqrt(self.v[i]) + 1e-8)
i += 1
def Adam(self,grads,beta1 = 0.9,beta2 = 0.999):
"""
Adam optimizer, bias correction is implemented.
"""
i = 0
for param,grad in zip(self.params(),grads):
self.m[i] = beta1 * self.m[i] + (1-beta1) * grad
self.v[i] = beta2 * self.v[i] + (1-beta2) * grad **2
m_corrected = self.m[i] / (1-beta1**self.t)
v_corrected = self.v[i] / (1-beta2**self.t)
param += -self.learning_rate * m_corrected / (np.sqrt(v_corrected) + 1e-8)
i += 1
self.t += 1
def CategoricalCrossEntropy(self,labels,preds):
"""
Computes cross entropy between labels and model's predictions
"""
predictions = np.clip(preds, 1e-12, 1. - 1e-12)
N = predictions.shape[0]
return -np.sum(labels * np.log(predictions + 1e-9)) / N
def predict(self,X):
"""
Return class predictions (not in one-hot encoded format).
"""
# Give zeros to hidden states:
pasts = np.zeros((X.shape[0],self.hidden_dim_1))
*_, probs = self.forward(X,pasts)
return np.argmax(probs[self.seq_len - 1],axis=1)
def history(self):
return {'TrainLoss' : self.train_loss,
'TrainAcc' : self.train_acc,
'TestLoss' : self.test_loss,
'TestAcc' : self.test_acc}
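# %%
# Standalone sketch (not part of the original assignment): why the (1 - beta**t)
# bias correction in the Adam method above matters. With m initialised to zero,
# the raw first-moment estimate after one step is only (1 - beta1) * grad;
# dividing by (1 - beta1**t) recovers the full gradient exactly at t = 1.
import numpy as np
demo_beta1 = 0.9
demo_grad = np.array([1.0, -2.0])
demo_m = demo_beta1 * np.zeros(2) + (1 - demo_beta1) * demo_grad  # biased towards 0
assert np.allclose(demo_m, 0.1 * demo_grad)
assert np.allclose(demo_m / (1 - demo_beta1 ** 1), demo_grad)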
# %%
multi_layer_gru = Multi_layer_GRU(hidden_dim_1=128,hidden_dim_2=64,learning_rate=1e-3,mom_coeff=0.0,batch_size=32)
# %%
multi_layer_gru.fit(X_train,y_train,X_test,y_test,epochs = 15,optimizer = 'RMSprop')
can_kocagil_21602218_hw3(question)
| 39.916454 | 227 | 0.506292 | 21,892 | 187,767 | 4.011374 | 0.033574 | 0.025724 | 0.026054 | 0.013665 | 0.857852 | 0.830853 | 0.806734 | 0.784598 | 0.763542 | 0.744355 | 0 | 0.030262 | 0.381585 | 187,767 | 4,703 | 228 | 39.924942 | 0.725994 | 0.103069 | 0 | 0.648596 | 0 | 0 | 0.061021 | 0.009124 | 0 | 0 | 0 | 0 | 0.00039 | 1 | 0.059672 | false | 0.00039 | 0.00429 | 0.0078 | 0.108814 | 0.027301 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ef20c9aed3c2c0868a834ac8582a0f0979304880 | 84 | py | Python | mitmirror/main/adapters/__init__.py | Claayton/mitmirror-api | a78ec3aa84aa3685a26bfaf5e1ba2a3f0f8405d1 | [
"MIT"
] | null | null | null | mitmirror/main/adapters/__init__.py | Claayton/mitmirror-api | a78ec3aa84aa3685a26bfaf5e1ba2a3f0f8405d1 | [
"MIT"
] | 1 | 2021-10-09T20:42:03.000Z | 2021-10-09T20:42:03.000Z | mitmirror/main/adapters/__init__.py | Claayton/mitmirror-api | a78ec3aa84aa3685a26bfaf5e1ba2a3f0f8405d1 | [
"MIT"
] | null | null | null | """Inicializaçao do modulo adapters"""
from .request_adapter import request_adapter
| 28 | 44 | 0.821429 | 10 | 84 | 6.7 | 0.8 | 0.41791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 84 | 2 | 45 | 42 | 0.881579 | 0.380952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ef4444b275c26f51262d5a3957f513cb96865446 | 70 | py | Python | ns/mac/__init__.py | serjkazhura/network-simulator | 7542ef8c56b0fd7e488852891deef8606571fce9 | [
"MIT"
] | null | null | null | ns/mac/__init__.py | serjkazhura/network-simulator | 7542ef8c56b0fd7e488852891deef8606571fce9 | [
"MIT"
] | null | null | null | ns/mac/__init__.py | serjkazhura/network-simulator | 7542ef8c56b0fd7e488852891deef8606571fce9 | [
"MIT"
] | null | null | null | from ns.mac.factory import mac_address_factory, BROADCAST_MAC_ADDRESS
| 35 | 69 | 0.885714 | 11 | 70 | 5.272727 | 0.636364 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 70 | 1 | 70 | 70 | 0.892308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ef57c37d718e22187949a8c5bc94a53bf365d9f6 | 377 | py | Python | PythonExercicios/ex108/teste.py | MatheusTG/python | 5ec8701ffcdc5ac5a3e6e75dcd789bdec84612ad | [
"MIT"
] | null | null | null | PythonExercicios/ex108/teste.py | MatheusTG/python | 5ec8701ffcdc5ac5a3e6e75dcd789bdec84612ad | [
"MIT"
] | null | null | null | PythonExercicios/ex108/teste.py | MatheusTG/python | 5ec8701ffcdc5ac5a3e6e75dcd789bdec84612ad | [
"MIT"
] | null | null | null | from ex108 import moeda
num = float(input('Digite um número: R$'))
print(f'A metade de {moeda.moeda(num)} é {moeda.moeda(moeda.metade(num))}')
print(f'O dobro de {moeda.moeda(num)} é {moeda.moeda(moeda.dobro(num))}')
print(f'Aumentando 10% teremos o valor {moeda.moeda(moeda.aumentar(num, 10))}')
print(f'Diminuindo 15% teremos o valor {moeda.moeda(moeda.diminuir(num, 15))}')
| 53.857143 | 79 | 0.71618 | 65 | 377 | 4.153846 | 0.415385 | 0.37037 | 0.222222 | 0.111111 | 0.437037 | 0.437037 | 0.22963 | 0.22963 | 0 | 0 | 0 | 0.032258 | 0.095491 | 377 | 6 | 80 | 62.833333 | 0.759531 | 0 | 0 | 0 | 0 | 0.333333 | 0.758621 | 0.33687 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.666667 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
323c9afe241fda56a0095e5c2a5d360568718e7f | 31 | py | Python | clorm/util/__init__.py | florianfischer91/clorm | 3569a91daa1d691f0a7f5a9534db925e027cdbf9 | [
"MIT"
] | 21 | 2020-01-07T15:55:54.000Z | 2022-02-13T13:07:49.000Z | clorm/util/__init__.py | florianfischer91/clorm | 3569a91daa1d691f0a7f5a9534db925e027cdbf9 | [
"MIT"
] | 66 | 2020-01-07T16:08:08.000Z | 2022-03-31T07:51:35.000Z | clorm/util/__init__.py | florianfischer91/clorm | 3569a91daa1d691f0a7f5a9534db925e027cdbf9 | [
"MIT"
] | 5 | 2020-07-06T17:36:28.000Z | 2021-11-01T09:32:05.000Z | from .oset import OrderedSet
| 7.75 | 28 | 0.774194 | 4 | 31 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 31 | 3 | 29 | 10.333333 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32572e3d1af3884bcc56928d41a1cbcbec353569 | 108 | py | Python | layers/__init__.py | casperbh96/CNN-From-Scratch | f9553e2a0890620baf42225570a38e7b66a5cec8 | [
"MIT"
] | 2 | 2020-06-06T09:14:14.000Z | 2020-06-28T00:54:13.000Z | layers/__init__.py | casperbh96/CNN-From-Scratch | f9553e2a0890620baf42225570a38e7b66a5cec8 | [
"MIT"
] | null | null | null | layers/__init__.py | casperbh96/CNN-From-Scratch | f9553e2a0890620baf42225570a38e7b66a5cec8 | [
"MIT"
] | 2 | 2021-03-09T22:22:33.000Z | 2022-03-12T14:18:08.000Z | from layers.conv2d import Conv2D
from layers.dense import Dense
from layers.maxpooling2d import MaxPooling2D | 36 | 44 | 0.87037 | 15 | 108 | 6.266667 | 0.4 | 0.319149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041237 | 0.101852 | 108 | 3 | 44 | 36 | 0.927835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32c5cc8df25184eb5525d4600d8db0f9d0ccdb74 | 22 | py | Python | user00.py | balqui/nothing01234 | 3dae2ed2c9def2886c9fdae88b6ba8ddd061b5ac | [
"MIT"
] | null | null | null | user00.py | balqui/nothing01234 | 3dae2ed2c9def2886c9fdae88b6ba8ddd061b5ac | [
"MIT"
] | null | null | null | user00.py | balqui/nothing01234 | 3dae2ed2c9def2886c9fdae88b6ba8ddd061b5ac | [
"MIT"
] | null | null | null | from pckg0.a import A
| 11 | 21 | 0.772727 | 5 | 22 | 3.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
08dee6da443fafd12eea3b378765e8a409595d0d | 19 | py | Python | pythondata_cpu_minerva/sources/minerva/units/debug/__init__.py | litex-hub/pythondata-cpu-minerva | ef714b6fd68b73e71a26a71efa45c6d148ad9379 | [
"BSD-2-Clause"
] | 5 | 2021-12-17T03:09:34.000Z | 2022-03-23T20:50:41.000Z | thirdparty/minerva/units/debug/__init__.py | gatecat/cxxrtl-soc-demo | 40317d9406d235e5c54ff2c0dd7e13b5f02fb589 | [
"BSD-2-Clause"
] | 8 | 2021-12-10T22:05:32.000Z | 2021-12-29T13:36:05.000Z | thirdparty/minerva/units/debug/__init__.py | gatecat/cxxrtl-soc-demo | 40317d9406d235e5c54ff2c0dd7e13b5f02fb589 | [
"BSD-2-Clause"
] | 1 | 2021-12-18T16:52:06.000Z | 2021-12-18T16:52:06.000Z | from .top import *
| 9.5 | 18 | 0.684211 | 3 | 19 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# tests/integration_tests/test_rules.py (manuel-sommer/DrHeader, MIT)
import unittest2
from tests.integration_tests import utils
class TestDefaultRules(unittest2.TestCase):
def tearDown(self):
utils.reset_default_rules()
def test__should_validate_all_rules_for_valid_headers(self):
headers = utils.get_headers()
report = utils.process_test(headers=headers)
self.assertEqual(len(report), 0, msg=utils.build_error_message(report))
def test_cache_control__should_exist(self):
headers = utils.delete_headers('Cache-Control')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Cache-Control',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['no-store', 'max-age=0'],
'delimiter': ','
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Cache-Control'))
def test_cache_control__should_disable_caching(self):
headers = utils.add_or_modify_header('Cache-Control', 'no-cache')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Cache-Control',
'message': 'Value does not match security policy',
'severity': 'high',
'value': 'no-cache',
'expected': ['no-store', 'max-age=0'],
'delimiter': ','
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Cache-Control'))
def test_csp__should_exist(self):
headers = utils.delete_headers('Content-Security-Policy')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Content-Security-Policy',
'message': 'Header not included in response',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Content-Security-Policy'))
def test_csp__should_enforce_default_src(self):
headers = utils.add_or_modify_header('Content-Security-Policy', 'default-src https://example.com')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Content-Security-Policy - default-src',
'message': 'Value does not match security policy. Exactly one of the expected items was expected',
'severity': 'high',
'value': 'https://example.com',
'expected': ['none', 'self']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Content-Security-Policy'))
def test_coep__should_exist_when_cross_origin_isolated_is_true(self):
headers = utils.delete_headers('Cross-Origin-Embedder-Policy')
report = utils.process_test(headers=headers, cross_origin_isolated=True)
expected = {
'rule': 'Cross-Origin-Embedder-Policy',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['require-corp']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Cross-Origin-Embedder-Policy'))
def test_coep__should_enforce_require_corp_when_cross_origin_isolated_is_true(self):
headers = utils.add_or_modify_header('Cross-Origin-Embedder-Policy', 'unsafe-none')
report = utils.process_test(headers=headers, cross_origin_isolated=True)
expected = {
'rule': 'Cross-Origin-Embedder-Policy',
'message': 'Value does not match security policy',
'severity': 'high',
'value': 'unsafe-none',
'expected': ['require-corp']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Cross-Origin-Embedder-Policy'))
def test_coop__should_exist_when_cross_origin_isolated_is_true(self):
headers = utils.delete_headers('Cross-Origin-Opener-Policy')
report = utils.process_test(headers=headers, cross_origin_isolated=True)
expected = {
'rule': 'Cross-Origin-Opener-Policy',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['same-origin']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Cross-Origin-Opener-Policy'))
def test_coop__should_enforce_same_origin_when_cross_origin_isolated_is_true(self):
headers = utils.add_or_modify_header('Cross-Origin-Opener-Policy', 'same-origin-allow-popups')
report = utils.process_test(headers=headers, cross_origin_isolated=True)
expected = {
'rule': 'Cross-Origin-Opener-Policy',
'message': 'Value does not match security policy',
'severity': 'high',
'value': 'same-origin-allow-popups',
'expected': ['same-origin']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Cross-Origin-Opener-Policy'))
def test_pragma__should_exist(self):
headers = utils.delete_headers('Pragma')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Pragma',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['no-cache']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Pragma'))
def test_referrer_policy__should_exist(self):
headers = utils.delete_headers('Referrer-Policy')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Referrer-Policy',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['strict-origin', 'strict-origin-when-cross-origin', 'no-referrer']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Referrer-Policy'))
def test_referrer_policy__should_enforce_strict_policy(self):
headers = utils.add_or_modify_header('Referrer-Policy', 'same-origin')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Referrer-Policy',
'message': 'Value does not match security policy. Exactly one of the expected items was expected',
'severity': 'high',
'value': 'same-origin',
'expected': ['strict-origin', 'strict-origin-when-cross-origin', 'no-referrer']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Referrer-Policy'))
def test_server__should_not_exist(self):
headers = utils.add_or_modify_header('Server', 'Apache/2.4.1 (Unix)')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Server',
'message': 'Header should not be returned',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Server'))
def test_set_cookie__should_enforce_secure_for_all_cookies(self):
headers = utils.add_or_modify_header('Set-Cookie', ['session_id=585733723; HttpOnly; SameSite=Strict'])
report = utils.process_test(headers=headers)
expected = {
'rule': 'Set-Cookie - session_id',
'message': 'Must-Contain directive missed',
'severity': 'high',
'value': 'session_id=585733723; HttpOnly; SameSite=Strict',
'expected': ['HttpOnly', 'Secure'],
'delimiter': ';',
'anomalies': ['Secure']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Set-Cookie'))
def test_set_cookie__should_enforce_httponly_for_all_cookies(self):
headers = utils.add_or_modify_header('Set-Cookie', ['session_id=585733723; Secure; SameSite=Strict'])
report = utils.process_test(headers=headers)
expected = {
'rule': 'Set-Cookie - session_id',
'message': 'Must-Contain directive missed',
'severity': 'high',
'value': 'session_id=585733723; Secure; SameSite=Strict',
'expected': ['HttpOnly', 'Secure'],
'delimiter': ';',
'anomalies': ['HttpOnly']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Set-Cookie'))
def test_strict_transport_security__should_exist(self):
headers = utils.delete_headers('Strict-Transport-Security')
report = utils.process_test(headers=headers)
expected = {
'rule': 'Strict-Transport-Security',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['max-age=31536000', 'includeSubDomains'],
'delimiter': ';'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'Strict-Transport-Security'))
def test_user_agent__should_not_exist(self):
headers = utils.add_or_modify_header('User-Agent', 'Dalvik/2.1.0 (Linux; U; Android 6.0.1; Nexus Player)')
report = utils.process_test(headers=headers)
expected = {
'rule': 'User-Agent',
'message': 'Header should not be returned',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'User-Agent'))
def test_x_aspnet_version__should_not_exist(self):
headers = utils.add_or_modify_header('X-AspNet-Version', '2.0.50727')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-AspNet-Version',
'message': 'Header should not be returned',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-AspNet-Version'))
def test_x_client_ip__should_not_exist(self):
headers = utils.add_or_modify_header('X-Client-IP', '27.59.32.182')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-Client-IP',
'message': 'Header should not be returned',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-Client-IP'))
def test_x_content_type_options__should_exist(self):
headers = utils.delete_headers('X-Content-Type-Options')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-Content-Type-Options',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['nosniff']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-Content-Type-Options'))
def test_x_frame_options__should_exist(self):
headers = utils.delete_headers('X-Frame-Options')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-Frame-Options',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['DENY', 'SAMEORIGIN']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-Frame-Options'))
def test_x_frame_options__should_disable_allow_from(self):
        headers = utils.add_or_modify_header('X-Frame-Options', 'ALLOW-FROM https://example.com')
        report = utils.process_test(headers=headers)
        expected = {
            'rule': 'X-Frame-Options',
            'message': 'Value does not match security policy. Exactly one of the expected items was expected',
            'severity': 'high',
            'value': 'ALLOW-FROM https://example.com',
'expected': ['DENY', 'SAMEORIGIN']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-Frame-Options'))
def test_x_forwarded_for__should_not_exist(self):
headers = utils.add_or_modify_header('X-Forwarded-For', '2001:db8:85a3:8d3:1319:8a2e:370:7348')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-Forwarded-For',
'message': 'Header should not be returned',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-Forwarded-For'))
def test_x_generator__should_not_exist(self):
headers = utils.add_or_modify_header('X-Generator', 'Drupal 7 (http://drupal.org)')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-Generator',
'message': 'Header should not be returned',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-Generator'))
def test_x_powered_by__should_not_exist(self):
headers = utils.add_or_modify_header('X-Powered-By', 'ASP.NET')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-Powered-By',
'message': 'Header should not be returned',
'severity': 'high'
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-Powered-By'))
def test_x_xss_protection__should_exist(self):
headers = utils.delete_headers('X-XSS-Protection')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-XSS-Protection',
'message': 'Header not included in response',
'severity': 'high',
'expected': ['0']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-XSS-Protection'))
def test_x_xss_protection__should_disable_filter(self):
headers = utils.add_or_modify_header('X-XSS-Protection', '1; mode=block')
report = utils.process_test(headers=headers)
expected = {
'rule': 'X-XSS-Protection',
'message': 'Value does not match security policy',
'severity': 'high',
'value': '1; mode=block',
'expected': ['0']
}
self.assertIn(expected, report, msg=utils.build_error_message(report, expected, 'X-XSS-Protection'))
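Each test above follows the same pattern: mutate the headers, run the analysis, and assert that an expected finding dict appears in the report. `assertIn` works for this because Python compares dicts by full key/value equality — a minimal standalone illustration of that pattern, using hypothetical report data and no drHEADer dependency:

```python
import unittest

class ReportPatternTest(unittest.TestCase):
    def test_expected_finding_in_report(self):
        # A report is a list of finding dicts, like the ones built in the tests above.
        report = [
            {'rule': 'Server', 'message': 'Header should not be returned', 'severity': 'high'},
            {'rule': 'Pragma', 'message': 'Header not included in response', 'severity': 'high'},
        ]
        expected = {'rule': 'Server', 'message': 'Header should not be returned', 'severity': 'high'}
        # Membership uses ==, so the dict must match on every key and value.
        self.assertIn(expected, report)
        # A partial dict is a different object under == and does not match.
        self.assertNotIn({'rule': 'Server', 'severity': 'high'}, report)

result = ReportPatternTest('test_expected_finding_in_report').run()
print(result.wasSuccessful())  # True
```

This is why the tests must spell out every key (`severity`, `delimiter`, `anomalies`, …) of the expected finding: one missing or extra key makes the equality check fail.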
# AppPy/Classes.py (MarcosHCTavares/AppDnD, MIT)
import random
# random numbers v
modcon = 0
nivel = 0
# random numbers ^
classe = str(input('Classe: ')).upper().strip()

# Hit die size per class (D&D 5e); same values as the original if/elif chain.
DADO_DE_VIDA = {
    'BARBARO': 12,
    'GUERREIRO': 10, 'PALADINO': 10, 'PATRULHEIRO': 10,
    'BARDO': 8, 'BRUXO': 8, 'CLERIGO': 8, 'DRUIDA': 8, 'LADINO': 8, 'MONGE': 8,
    'FEITICEIRO': 6, 'MAGO': 6,
}

if classe in DADO_DE_VIDA:
    dado = DADO_DE_VIDA[classe]
    if nivel == 1:
        # Level 1: maximum of the hit die plus the Constitution modifier.
        vida = dado + modcon
    else:
        vida = (random.randint(1, dado) + modcon) * nivel
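The hit-point logic in this script uses one formula for every class, varying only the hit-die size. A standalone sketch of that formula (`calcula_vida` is a hypothetical helper name; `dado`, `modcon`, and `nivel` mirror the script's variables):

```python
import random

def calcula_vida(dado, modcon, nivel, rng=random):
    """Hit points using the same formula as the script above."""
    if nivel == 1:
        # Level 1: maximum of the hit die plus the Constitution modifier.
        return dado + modcon
    # Higher levels: one roll plus the modifier, multiplied by level.
    return (rng.randint(1, dado) + modcon) * nivel

print(calcula_vida(12, 2, 1))  # level-1 barbarian: 12 + 2 = 14
print(calcula_vida(8, 1, 3, random.Random(0)))  # level-3 bard, seeded roll
```

Passing a seeded `random.Random` instance makes higher-level results reproducible, which is useful when testing dice-based code.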
# apps/risk_rating/forms.py (fga-gpp-mds/2017.2-Grupo4, MIT)
from django import forms
from apps.risk_rating.models import ClinicalState_28d, ClinicalState_29d_2m, \
ClinicalState_2m_3y, ClinicalState_3y_10y, ClinicalState_10yMore, \
MachineLearning_28d, MachineLearning_29d_2m, MachineLearning_2m_3y, \
MachineLearning_3y_10y, MachineLearning_10yMore
class ClinicalState_28dForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients under 28 days old
"""
class Meta:
model = ClinicalState_28d
fields = ['patient', 'classifier_id', 'dispneia', 'ictericia',
'perdada_consciencia', 'cianose', 'febre', 'solucos',
'prostracao', 'vomitos', 'tosse', 'coriza',
'obstrucao_nasal', 'convulsao_no_momento', 'diarreia',
'choro_inconsolavel', 'dificuldade_evacuar', 'nao_suga_seio',
'manchas_na_pele', 'salivacao', 'chiado_no_peito',
'diminuicao_da_diurese', 'dor_abdominal',
'fontanela_abaulada', 'secrecao_no_umbigo',
'secrecao_ocular', 'sangue_nas_fezes', 'convulsao_hoje']
class ClinicalState_29d_2mForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 29 days to 2 months
"""
class Meta:
model = ClinicalState_29d_2m
fields = ['patient', 'classifier_id', 'dispneia', 'ictericia',
'perdada_consciencia', 'cianose', 'febre', 'solucos',
'prostracao', 'vomitos', 'tosse', 'coriza',
'obstrucao_nasal', 'convulsao_no_momento', 'diarreia',
'dificuldade_evacuar', 'nao_suga_seio', 'manchas_na_pele',
'salivacao', 'chiado_no_peito', 'diminuicao_da_diurese',
'dor_abdominal', 'fontanela_abaulada', 'secrecao_no_umbigo',
'secrecao_ocular', 'sangue_nas_fezes', 'convulsao_hoje']
class ClinicalState_2m_3yForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 2 months to 3 years
"""
class Meta:
model = ClinicalState_2m_3y
fields = ['patient', 'classifier_id', 'dispneia', 'ictericia',
'perdada_consciencia', 'cianose', 'febre', 'solucos',
'prostracao', 'vomitos', 'tosse', 'coriza',
'obstrucao_nasal', 'convulsao_no_momento', 'diarreia',
'dificuldade_evacuar', 'nao_suga_seio', 'manchas_na_pele',
'salivacao', 'chiado_no_peito', 'diminuicao_da_diurese',
'dor_abdominal', 'fontanela_abaulada', 'secrecao_no_umbigo',
'secrecao_ocular']
class ClinicalState_3y_10yForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 3 to 10 years
"""
class Meta:
model = ClinicalState_3y_10y
fields = ['patient', 'classifier_id', 'perdada_consciencia',
'febre_maior_72h', 'febre_menos_72h', 'odinofagia',
'fascies_de_dor', 'tontura', 'corpo_estranho', 'dor_dentes',
'disuria', 'urina_concentrada', 'dispneia', 'dor_toracica',
'choque_eletrico', 'quase_afogamento', 'artralgia',
'ictericia', 'perda_consciencia', 'palidez', 'cianose',
'solucos', 'prostracao', 'febre', 'vomitos', 'tosse',
'coriza', 'espirros', 'hiperemia_conjuntival',
'secrecao_ocular', 'obstrucao_nasal', 'convulsao',
'diarreia', 'manchas_na_pele', 'queda', 'hiporexia',
'salivacao', 'constipacao', 'chiado_no_peito',
'diminuicao_da_diurese', 'dor_abdominal', 'otalgia',
'epistaxe', 'otorreia', 'edema', 'adenomegalias',
'dor_articular', 'dificulade_de_marchar', 'sonolencia',
'dor_muscular', 'dor_retroorbitaria']
class ClinicalState_10yMoreForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 10 years or more
"""
class Meta:
model = ClinicalState_10yMore
fields = ['patient', 'classifier_id', 'mais_de_72h_febre',
'menos_de_72h_febre', 'tontura', 'corpo_estranho',
'dor_de_dente', 'disuria', 'urina_concentrada', 'dispneia',
'dor_toracica', 'choque_eletrico', 'quase_afogamento',
'artralgia', 'ictericia', 'perda_da_consciencia', 'palidez',
'cianose', 'solucos', 'prostracao', 'febre', 'vomitos',
'tosse', 'coriza', 'espirros', 'hiperemia_conjuntival',
'secrecao_ocular', 'obstrucao_nasal', 'convulsao',
'diarreia', 'dificuldade_evacuar', 'cefaleia',
                  'manchas_na_pele', 'salivacao', 'queda', 'hiporexia',
                  'constipacao', 'chiado_no_peito',
                  'diminuicao_da_diurese', 'dor_abdominal', 'otalgia',
                  'epistaxe', 'otorreia', 'edema', 'adenomegalias',
                  'dor_articular', 'dificuldade_de_marcha', 'sonolencia',
                  'dor_muscular', 'dor_retroorbitaria']
class MachineLearning_28dForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients under 28 days old
"""
class Meta:
model = MachineLearning_28d
fields = ['dispneia', 'ictericia', 'perdada_consciencia', 'cianose',
'febre', 'solucos', 'prostracao', 'vomitos', 'tosse',
'coriza', 'obstrucao_nasal', 'convulsao_no_momento',
'diarreia', 'choro_inconsolavel', 'dificuldade_evacuar',
'nao_suga_seio', 'manchas_na_pele', 'salivacao',
'chiado_no_peito', 'diminuicao_da_diurese', 'dor_abdominal',
'fontanela_abaulada', 'secrecao_no_umbigo',
'secrecao_ocular', 'sangue_nas_fezes', 'convulsao_hoje',
'classification']
class MachineLearning_29d_2mForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 29 days to 2 months
"""
class Meta:
model = MachineLearning_29d_2m
fields = ['dispneia', 'ictericia', 'perdada_consciencia', 'cianose',
'febre', 'solucos', 'prostracao', 'vomitos', 'tosse',
'coriza', 'obstrucao_nasal', 'convulsao_no_momento',
'diarreia', 'dificuldade_evacuar', 'nao_suga_seio',
'manchas_na_pele', 'salivacao', 'chiado_no_peito',
'diminuicao_da_diurese', 'dor_abdominal',
'fontanela_abaulada', 'secrecao_no_umbigo',
'secrecao_ocular', 'sangue_nas_fezes', 'convulsao_hoje',
'classification']
class MachineLearning_2m_3yForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 2 months to 3 years
"""
class Meta:
model = MachineLearning_2m_3y
fields = ['dispneia', 'ictericia', 'perdada_consciencia', 'cianose',
'febre', 'solucos', 'prostracao', 'vomitos', 'tosse',
'coriza', 'obstrucao_nasal', 'convulsao_no_momento',
'diarreia', 'dificuldade_evacuar', 'nao_suga_seio',
'manchas_na_pele', 'salivacao', 'chiado_no_peito',
'diminuicao_da_diurese', 'dor_abdominal',
'fontanela_abaulada', 'secrecao_no_umbigo',
'secrecao_ocular', 'classification']
class MachineLearning_3y_10yForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 3 to 10 years
"""
class Meta:
model = MachineLearning_3y_10y
fields = ['perdada_consciencia', 'febre_maior_72h', 'febre_menos_72h',
'odinofagia', 'fascies_de_dor', 'tontura', 'corpo_estranho',
'dor_dentes', 'disuria', 'urina_concentrada', 'dispneia',
'dor_toracica', 'choque_eletrico', 'quase_afogamento',
'artralgia', 'ictericia', 'perda_consciencia', 'palidez',
'cianose', 'solucos', 'prostracao', 'febre', 'vomitos',
'tosse', 'coriza', 'espirros', 'hiperemia_conjuntival',
'secrecao_ocular', 'obstrucao_nasal', 'convulsao',
'diarreia', 'manchas_na_pele', 'queda', 'hiporexia',
'salivacao', 'constipacao', 'chiado_no_peito',
'diminuicao_da_diurese', 'dor_abdominal', 'otalgia',
'epistaxe', 'otorreia', 'edema', 'adenomegalias',
'dor_articular', 'dificulade_de_marchar', 'sonolencia',
'dor_muscular', 'dor_retroorbitaria', 'classification']
class MachineLearning_10yMoreForm(forms.ModelForm):
"""
    Defining fields for the clinical state of patients aged 10 years or more
"""
class Meta:
model = MachineLearning_10yMore
fields = ['mais_de_72h_febre', 'menos_de_72h_febre', 'tontura',
'corpo_estranho', 'dor_de_dente', 'disuria',
'urina_concentrada', 'dispneia', 'dor_toracica',
'choque_eletrico', 'quase_afogamento', 'artralgia',
'ictericia', 'perda_da_consciencia', 'palidez', 'cianose',
'solucos', 'prostracao', 'febre', 'vomitos', 'tosse',
'coriza', 'espirros', 'hiperemia_conjuntival',
'secrecao_ocular', 'obstrucao_nasal', 'convulsao',
'diarreia', 'dificuldade_evacuar', 'cefaleia',
                  'manchas_na_pele', 'salivacao', 'queda', 'hiporexia',
                  'constipacao', 'chiado_no_peito',
                  'diminuicao_da_diurese', 'dor_abdominal', 'otalgia',
                  'epistaxe', 'otorreia', 'edema', 'adenomegalias',
                  'dor_articular', 'dificuldade_de_marcha', 'sonolencia',
                  'dor_muscular', 'dor_retroorbitaria',
                  'classification']
# TestPlugins.py (alexweav/ADAF, MIT)
from PluginSystem.PluginEngine import PluginEngine
pluginEngine = PluginEngine()
pluginEngine.ExecutePlugin("FrameStream", "the requested data passed in")
# app.py (EVOLVED-5G/ELCM, Apache-2.0)
from Scheduler import app, config
from Status import ExecutionQueue
@app.shell_context_processor
def make_shell_context():
return {'App': app, 'Config': config, 'Queue': ExecutionQueue}
# virtual/lib/python3.6/site-packages/pylint/test/functional/wrong_import_position14.py (drewheathens/The-Moringa-Tribune, MIT)
"""Checks import position rule"""
# pylint: disable=unused-import,undefined-variable,import-error
if x:
import os
import y # [wrong-import-position]
# origmacrm/interaction/migrations/0001_initial.py (eld120/origma-crm, BSD-3-Clause)
# Generated by Django 3.2.8 on 2021-11-30 17:55
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('customer', '0001_initial'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Initiative',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('campaign', models.CharField(choices=[('bundle', 'Bundle'), ('overstock', 'Overstock'), ('closeout', 'Closeout'), ('special price', 'Special Price')], max_length=50, verbose_name='Campaign')),
('start_date', models.DateField(verbose_name='Start Date')),
('end_date', models.DateField(verbose_name='End Date')),
('description', models.TextField(verbose_name='Description')),
('status', models.CharField(choices=[('open', 'Open'), ('closed', 'Closed'), ('negotiating', 'Negotiating'), ('won', 'Won'), ('lost', 'Lost')], max_length=50, verbose_name='Status')),
('origin', models.CharField(choices=[('promotion', 'Promotion'), ('email', 'Email'), ('outbound call', 'Outbound Call'), ('referral', 'Referral'), ('inbound call', 'Inbound Call'), ('event', 'Event')], max_length=50, verbose_name='Origin')),
('expected_sales', models.DecimalField(decimal_places=2, default=0.0, max_digits=9, verbose_name='Expected Sales')),
('realized_sales', models.DecimalField(decimal_places=2, default=0.0, max_digits=9, verbose_name='Realized Sales')),
],
),
migrations.CreateModel(
name='Task',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date', models.DateField(auto_now=True, verbose_name='Date')),
('notes', models.TextField(verbose_name='')),
('expected_sales', models.DecimalField(decimal_places=2, default=0.0, max_digits=9, verbose_name='Expected Sales')),
('realized_sales', models.DecimalField(decimal_places=2, default=0.0, max_digits=9, verbose_name='Realized Sales')),
('business', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='task_business_client', to='customer.customer', verbose_name='Business')),
('contact', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='task_direct_contact', to='customer.contact', verbose_name='Contact')),
('employee', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Employee')),
('initiative', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='interaction.initiative', verbose_name='Initiative')),
('involved_contacts', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='task_additional_contacts', to='customer.customer', verbose_name='')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Interaction',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date', models.DateField(auto_now=True, verbose_name='Date')),
('notes', models.TextField(verbose_name='')),
('expected_sales', models.DecimalField(decimal_places=2, default=0.0, max_digits=9, verbose_name='Expected Sales')),
('realized_sales', models.DecimalField(decimal_places=2, default=0.0, max_digits=9, verbose_name='Realized Sales')),
('business', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='interaction_business_client', to='customer.customer', verbose_name='Business')),
('contact', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='interaction_direct_contact', to='customer.contact', verbose_name='Contact')),
('employee', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL, verbose_name='Employee')),
('initiative', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='interaction.initiative', verbose_name='Initiative')),
                ('involved_contacts', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='interaction_additional_contacts', to='customer.customer', verbose_name='Involved Contacts')),
],
options={
'abstract': False,
},
),
]
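The sales fields above use `DecimalField(decimal_places=2, max_digits=9)`, i.e. at most nine significant digits with two after the decimal point (values up to 9999999.99). A minimal stdlib sketch of the equivalent bound check; `validate_decimal` is our illustrative helper, not a Django API:

```python
from decimal import Decimal, InvalidOperation

def validate_decimal(value, max_digits=9, decimal_places=2):
    """Mimic DecimalField's digit limits: quantize to the given number of
    decimal places and check the total significant-digit count."""
    try:
        quantized = Decimal(value).quantize(Decimal(1).scaleb(-decimal_places))
    except InvalidOperation:
        return False  # not a number at all
    _sign, digits, _exponent = quantized.as_tuple()
    return len(digits) <= max_digits

print(validate_decimal("9999999.99"))   # fits: 9 digits, 2 decimal places
print(validate_decimal("10000000.00"))  # rejected: 10 digits
```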
# models/experimental_networks.py (KingOnTheStar/pytorch-CycleGAN-and-pix2pix, BSD-3-Clause)
import random
import torch
import torch.nn as nn
from torch.nn import init
import functools
from torch.optim import lr_scheduler
from models.base_networks import *
class UnetAttentionMaskGenerator(nn.Module):
"""Create a Unet-based generator"""
def __init__(self, input_nc, output_nc, num_downs, net_branch_num=3, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
        num_downs (int) -- the number of downsamplings in UNet. For example, if |num_downs| == 7,
                           an image of size 128x128 will become of size 1x1 at the bottleneck
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
We construct the U-Net from the innermost layer to the outermost layer.
It is a recursive process.
"""
super(UnetAttentionMaskGenerator, self).__init__()
# construct unet structure
unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer
for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters
unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout)
# gradually reduce the number of filters from ngf * 8 to ngf
unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(output_nc, ngf, input_nc=ngf * net_branch_num, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost UNet layer
self.model = MultiCNNMaskBlock(output_nc, ngf, net_branch_num, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer
def forward(self, input, input_mask):
"""Standard forward"""
return self.model(input, input_mask)
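As the docstring above notes, each `UnetSkipConnectionBlock` halves the spatial resolution, so `num_downs == 7` takes a 128x128 input to 1x1 at the bottleneck. A quick stdlib check of that arithmetic (the helper name is ours):

```python
def bottleneck_size(image_size, num_downs):
    """Spatial size after num_downs stride-2 halvings (4x4 conv, padding 1)."""
    size = image_size
    for _ in range(num_downs):
        size //= 2
    return size

print(bottleneck_size(128, 7))  # 1, matching the docstring's 128x128 -> 1x1
```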
class UnetRandomAndMaskGenerator(nn.Module):
"""Create a Unet-based generator"""
def __init__(self, input_nc, output_nc, num_downs, net_branch_num=3, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
        num_downs (int) -- the number of downsamplings in UNet. For example, if |num_downs| == 7,
                           an image of size 128x128 will become of size 1x1 at the bottleneck
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
We construct the U-Net from the innermost layer to the outermost layer.
It is a recursive process.
"""
super(UnetRandomAndMaskGenerator, self).__init__()
# construct unet structure
unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer
for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters
unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout)
# gradually reduce the number of filters from ngf * 8 to ngf
unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(output_nc, ngf, input_nc=net_branch_num, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost UNet layer
self.model = MultiCNNMaskRandomBGBlock(output_nc, ngf, net_branch_num, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer
def forward(self, input, input_mask, input_random_bg):
"""Standard forward"""
return self.model(input, input_mask, input_random_bg)
class PostMaskUnetGenerator(nn.Module):
"""Create a Unet-based generator"""
def __init__(self, input_nc, output_nc, num_downs, net_branch_num=3, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
        num_downs (int) -- the number of downsamplings in UNet. For example, if |num_downs| == 7,
                           an image of size 128x128 will become of size 1x1 at the bottleneck
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
We construct the U-Net from the innermost layer to the outermost layer.
It is a recursive process.
"""
super(PostMaskUnetGenerator, self).__init__()
# construct unet structure
unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer
for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters
unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout)
# gradually reduce the number of filters from ngf * 8 to ngf
unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionBlock(output_nc, ngf, input_nc=net_branch_num, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost UNet layer
self.model = DisperseBlock(output_nc, ngf, net_branch_num, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer
def forward(self, input):
"""Standard forward"""
return self.model(input)
class MaskCollectionGenerator(nn.Module):
"""Create a Unet-based generator"""
def __init__(self, input_nc, output_nc, num_downs, net_branch_num=3, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
num_downs (int) -- the number of downsamplings in UNet. For example, # if |num_downs| == 7,
image of size 128x128 will become of size 1x1 # at the bottleneck
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
We construct the U-Net from the innermost layer to the outermost layer.
It is a recursive process.
"""
super(MaskCollectionGenerator, self).__init__()
self.model = MaskCollectionBlock(output_nc, ngf, net_branch_num, input_nc=input_nc, submodule=None, outermost=True, norm_layer=norm_layer) # add the outermost layer
def forward(self, input, input_mask, input_random_bg):
"""Standard forward"""
return self.model(input, input_mask, input_random_bg)
class UnetInnerRandomGenerator(nn.Module):
"""Create a Unet-based generator"""
def __init__(self, input_nc, output_nc, num_downs, ngf=64, inner_ap_nc=0, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
        num_downs (int) -- the number of downsamplings in UNet. For example, if |num_downs| == 7,
                           an image of size 128x128 will become of size 1x1 at the bottleneck
inner_ap_nc -- the number of channels in inner append vector
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
We construct the U-Net from the innermost layer to the outermost layer.
It is a recursive process.
"""
super(UnetInnerRandomGenerator, self).__init__()
# construct unet structure
unet_block = UnetSkipConnectionInnerRandomBlock(ngf * 8, ngf * 8, inner_ap_nc, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer
for i in range(num_downs - 5): # add intermediate layers with ngf * 8 filters
unet_block = UnetSkipConnectionInnerRandomBlock(ngf * 8, ngf * 8, inner_ap_nc, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout)
# gradually reduce the number of filters from ngf * 8 to ngf
unet_block = UnetSkipConnectionInnerRandomBlock(ngf * 4, ngf * 8, inner_ap_nc, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionInnerRandomBlock(ngf * 2, ngf * 4, inner_ap_nc, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
unet_block = UnetSkipConnectionInnerRandomBlock(ngf, ngf * 2, inner_ap_nc, input_nc=None, submodule=unet_block, norm_layer=norm_layer)
self.model = UnetSkipConnectionInnerRandomBlock(output_nc, ngf, inner_ap_nc, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer
def forward(self, input, inner_ap):
"""Standard forward"""
return self.model(input, inner_ap)
class MultiCNNMaskBlock(nn.Module):
"""Defines the Unet submodule with skip connection.
X -------------------identity----------------------
|-- downsampling -- |submodule| -- upsampling --|
"""
def __init__(self, outer_nc, inner_nc, branch_num, input_nc=None,
submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet submodule with skip connections.
Parameters:
outer_nc (int) -- the number of filters in the outer conv layer
inner_nc (int) -- the number of filters in the inner conv layer
input_nc (int) -- the number of channels in input images/features
submodule (UnetSkipConnectionBlock) -- previously defined submodules
outermost (bool) -- if this module is the outermost module
innermost (bool) -- if this module is the innermost module
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers.
"""
super(MultiCNNMaskBlock, self).__init__()
self.outermost = outermost
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
if input_nc is None:
input_nc = outer_nc
mask_models = []
for i in range(0, branch_num):
equalconv = nn.Conv2d(input_nc, inner_nc, kernel_size=3 + 2 * i,
stride=1, padding=1 + i, bias=use_bias)
equalrelu = nn.LeakyReLU(0.2, True)
equalnorm = norm_layer(inner_nc)
mask_model = [equalconv, equalnorm, equalrelu]
mask_models.append(nn.Sequential(*mask_model))
self.mask_models = nn.ModuleList(mask_models)
model = [submodule]
self.model = nn.Sequential(*model)
def forward(self, x, mask):
mask_y = None
for mask_model in self.mask_models:
mask_y_branch = mask_model(x) * mask
if mask_y is None:
mask_y = mask_y_branch
else:
mask_y = torch.cat([mask_y, mask_y_branch], 1)
return self.model(mask_y)
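Each branch in `MultiCNNMaskBlock` pairs `kernel_size=3 + 2*i` with `padding=1 + i`, i.e. the usual `padding = (kernel_size - 1) // 2` rule that keeps a stride-1 convolution size-preserving, so every masked branch can be concatenated with the others. A stdlib sketch of that bookkeeping (names are ours):

```python
def conv_out_size(n, kernel_size, stride=1, padding=0):
    """Standard convolution output-size formula."""
    return (n + 2 * padding - kernel_size) // stride + 1

# every branch i of MultiCNNMaskBlock preserves the spatial size
for i in range(3):
    k, p = 3 + 2 * i, 1 + i
    assert conv_out_size(64, k, stride=1, padding=p) == 64
```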
class MultiCNNMaskRandomBGBlock(nn.Module):
"""Defines the Unet submodule with skip connection.
X -------------------identity----------------------
|-- downsampling -- |submodule| -- upsampling --|
"""
def __init__(self, outer_nc, inner_nc, branch_num, input_nc=None,
submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet submodule with skip connections.
Parameters:
outer_nc (int) -- the number of filters in the outer conv layer
inner_nc (int) -- the number of filters in the inner conv layer
input_nc (int) -- the number of channels in input images/features
submodule (UnetSkipConnectionBlock) -- previously defined submodules
outermost (bool) -- if this module is the outermost module
innermost (bool) -- if this module is the innermost module
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers.
"""
super(MultiCNNMaskRandomBGBlock, self).__init__()
self.outermost = outermost
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
if input_nc is None:
input_nc = outer_nc
mask_models = []
for i in range(0, branch_num):
equalconv = nn.Conv2d(input_nc, inner_nc, kernel_size=7 + 2 * i,
stride=1, padding=3 + i, bias=use_bias)
# equalconv = nn.Conv2d(input_nc, inner_nc, kernel_size=3 + 2 * i,
# stride=1, padding=1 + i, bias=use_bias)
equalrelu = nn.LeakyReLU(0.2, True)
equalnorm = norm_layer(inner_nc)
mask_model = [equalconv, equalnorm, equalrelu]
mask_models.append(nn.Sequential(*mask_model))
self.mask_models = nn.ModuleList(mask_models)
shrinkconv = nn.Conv2d(inner_nc * branch_num, branch_num, kernel_size=1,
stride=1, padding=0, bias=use_bias)
shrinkrelu = nn.LeakyReLU(0.2, True)
shrinknorm = norm_layer(branch_num)
disperseconv = nn.Conv2d(branch_num, branch_num, kernel_size=11,
stride=1, padding=5, bias=use_bias)
disperserelu = nn.LeakyReLU(0.2, True)
dispersenorm = norm_layer(branch_num)
shrinkpart = [shrinkconv, shrinknorm, shrinkrelu]
dispersepart = [disperseconv, dispersenorm, disperserelu]
self.shrinkpart = nn.Sequential(*shrinkpart)
self.dispersepart = nn.Sequential(*dispersepart)
model = [submodule]
self.model = nn.Sequential(*model)
def forward(self, x, mask, random_bg):
mask_y = None
for mask_model in self.mask_models:
mask_y_branch = mask_model(x) * mask
if mask_y is None:
mask_y = mask_y_branch
else:
mask_y = torch.cat([mask_y, mask_y_branch], 1)
mask_y = mask_y + random_bg * (1 - mask)
processed_y = self.shrinkpart(mask_y)
processed_y = self.random_move_controlling_stick(processed_y, mask)
processed_y = self.dispersepart(processed_y)
return self.model(processed_y)
def random_move_controlling_stick(self, processed_y, mask):
controlling_stick_gap = 5
cut_width = 50
upper_bound = processed_y.shape[3] - 1 - cut_width
src_pos_x = random.randint(0, upper_bound)
src_pos_x = src_pos_x - src_pos_x % controlling_stick_gap
src_pos_y = random.randint(0, upper_bound)
tag_pos_x = random.randint(0, upper_bound - int(controlling_stick_gap / 2))
tag_pos_x = tag_pos_x - tag_pos_x % controlling_stick_gap + int(controlling_stick_gap / 2)
tag_pos_y = random.randint(0, upper_bound)
ret_y = processed_y.clone()
ret_y[:, :, tag_pos_y: tag_pos_y + cut_width, tag_pos_x: tag_pos_x + cut_width] += \
processed_y[:, :, src_pos_y: src_pos_y + cut_width, src_pos_x: src_pos_x + cut_width]
return ret_y
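`random_move_controlling_stick` snaps the source x-position onto a `controlling_stick_gap` grid and the target x-position onto the same grid offset by half a gap, then adds a `cut_width`-square patch from source to target. The position logic can be sketched without torch; `snapped_positions` is an illustrative helper:

```python
import random

def snapped_positions(width, gap=5, cut=50, rng=random):
    """Pick grid-snapped source/target x positions, mirroring the method above."""
    upper = width - 1 - cut
    src_x = rng.randint(0, upper)
    src_x -= src_x % gap                    # snap onto the gap grid
    tag_x = rng.randint(0, upper - gap // 2)
    tag_x = tag_x - tag_x % gap + gap // 2  # snap onto the half-offset grid
    return src_x, tag_x

random.seed(0)
src, tag = snapped_positions(256)
assert src % 5 == 0 and tag % 5 == 2
```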
class MaskCollectionBlock(nn.Module):
"""Defines the Unet submodule with skip connection.
X -------------------identity----------------------
|-- downsampling -- |submodule| -- upsampling --|
"""
def __init__(self, outer_nc, inner_nc, branch_num, input_nc=None,
submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet submodule with skip connections.
Parameters:
outer_nc (int) -- the number of filters in the outer conv layer
inner_nc (int) -- the number of filters in the inner conv layer
input_nc (int) -- the number of channels in input images/features
submodule (UnetSkipConnectionBlock) -- previously defined submodules
outermost (bool) -- if this module is the outermost module
innermost (bool) -- if this module is the innermost module
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers.
"""
super(MaskCollectionBlock, self).__init__()
self.outermost = outermost
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
if input_nc is None:
input_nc = outer_nc
mask_models = []
for i in range(0, branch_num):
equalconv = nn.Conv2d(input_nc, inner_nc, kernel_size=7 + 2 * i,
stride=1, padding=3 + i, bias=use_bias)
# equalconv = nn.Conv2d(input_nc, inner_nc, kernel_size=3 + 2 * i,
# stride=1, padding=1 + i, bias=use_bias)
equalrelu = nn.LeakyReLU(0.2, True)
equalnorm = norm_layer(inner_nc)
mask_model = [equalconv, equalnorm, equalrelu]
mask_models.append(nn.Sequential(*mask_model))
self.mask_models = nn.ModuleList(mask_models)
shrinkconv = nn.Conv2d(inner_nc * branch_num, outer_nc, kernel_size=1,
stride=1, padding=0, bias=use_bias)
shrinkrelu = nn.LeakyReLU(0.2, True)
shrinknorm = norm_layer(outer_nc)
shrinkpart = [shrinkconv, shrinknorm, shrinkrelu]
self.shrinkpart = nn.Sequential(*shrinkpart)
def forward(self, x, mask, random_bg):
mask_y = None
for mask_model in self.mask_models:
mask_y_branch = mask_model(x) * mask
if mask_y is None:
mask_y = mask_y_branch
else:
mask_y = torch.cat([mask_y, mask_y_branch], 1)
mask_y = mask_y + random_bg * (1 - mask)
processed_y = self.shrinkpart(mask_y)
return processed_y
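The line `mask_y + random_bg * (1 - mask)` is a standard mask composite: masked features where `mask == 1`, random background elsewhere. Elementwise, in pure Python (`composite` is our name):

```python
def composite(fg, bg, mask):
    """Blend foreground and background with a 0/1 (or soft) mask."""
    return [f * m + b * (1 - m) for f, b, m in zip(fg, bg, mask)]

print(composite([10, 20, 30], [1, 2, 3], [1, 0, 1]))  # [10, 2, 30]
```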
class DisperseBlock(nn.Module):
"""Defines the Unet submodule with skip connection.
X -------------------identity----------------------
|-- downsampling -- |submodule| -- upsampling --|
"""
def __init__(self, outer_nc, inner_nc, branch_num, input_nc=None,
submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet submodule with skip connections.
Parameters:
outer_nc (int) -- the number of filters in the outer conv layer
inner_nc (int) -- the number of filters in the inner conv layer
input_nc (int) -- the number of channels in input images/features
submodule (UnetSkipConnectionBlock) -- previously defined submodules
outermost (bool) -- if this module is the outermost module
innermost (bool) -- if this module is the innermost module
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers.
"""
super(DisperseBlock, self).__init__()
self.outermost = outermost
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
if input_nc is None:
input_nc = outer_nc
disperseconv = nn.Conv2d(input_nc, branch_num, kernel_size=11,
stride=1, padding=5, bias=use_bias)
disperserelu = nn.LeakyReLU(0.2, True)
dispersenorm = norm_layer(branch_num)
dispersepart = [disperseconv, dispersenorm, disperserelu]
self.dispersepart = nn.Sequential(*dispersepart)
model = [submodule]
self.model = nn.Sequential(*model)
def forward(self, x):
processed_y = self.dispersepart(x)
return self.model(processed_y)
class UnetSkipConnectionInnerRandomBlock(nn.Module):
"""Defines the Unet submodule with skip connection.
X -------------------identity----------------------
|-- downsampling -- |submodule| -- upsampling --|
"""
def __init__(self, outer_nc, inner_nc, inner_ap_nc, input_nc=None,
submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False):
"""Construct a Unet submodule with skip connections.
Parameters:
outer_nc (int) -- the number of filters in the outer conv layer
inner_nc (int) -- the number of filters in the inner conv layer
inner_ap_nc -- the number of channels in inner append vector
input_nc (int) -- the number of channels in input images/features
submodule (UnetSkipConnectionBlock) -- previously defined submodules
outermost (bool) -- if this module is the outermost module
innermost (bool) -- if this module is the innermost module
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers.
"""
super(UnetSkipConnectionInnerRandomBlock, self).__init__()
self.outermost = outermost
self.innermost = innermost
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
if input_nc is None:
input_nc = outer_nc
downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4,
stride=2, padding=1, bias=use_bias)
downrelu = nn.LeakyReLU(0.2, True)
downnorm = norm_layer(inner_nc)
uprelu = nn.ReLU(True)
upnorm = norm_layer(outer_nc)
if outermost:
upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
kernel_size=4, stride=2,
padding=1)
down = [downconv]
up = [uprelu, upconv, nn.Tanh()]
model_down = down
model_sub = submodule
model_up = up
elif innermost:
upconv = nn.ConvTranspose2d(inner_nc + inner_ap_nc, outer_nc,
kernel_size=4, stride=2,
padding=1, bias=use_bias)
down = [downrelu, downconv]
up = [uprelu, upconv, upnorm]
model_down = down
model_sub = None
model_up = up
else:
upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc,
kernel_size=4, stride=2,
padding=1, bias=use_bias)
down = [downrelu, downconv, downnorm]
up = [uprelu, upconv, upnorm]
if use_dropout:
model_down = down
model_sub = submodule
model_up = up + [nn.Dropout(0.5)]
else:
model_down = down
model_sub = submodule
model_up = up
self.model_down = nn.Sequential(*model_down)
if model_sub is not None:
self.model_sub = model_sub
self.model_up = nn.Sequential(*model_up)
def forward(self, x, inner_ap):
if self.outermost:
down_out = self.model_down(x)
sub_out = self.model_sub(down_out, inner_ap)
return self.model_up(sub_out)
elif self.innermost:
down_out = self.model_down(x)
sub_out = torch.cat([down_out, inner_ap], 1)
return torch.cat([x, self.model_up(sub_out)], 1)
else: # add skip connections
down_out = self.model_down(x)
sub_out = self.model_sub(down_out, inner_ap)
return torch.cat([x, self.model_up(sub_out)], 1)
class DownsamplingResnetBranchGenerator(nn.Module):
"""Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style)
"""
def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', n_downsampling=2):
"""Construct a Resnet-based generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers
n_blocks (int) -- the number of ResNet blocks
padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
"""
assert(n_blocks >= 0)
super(DownsamplingResnetBranchGenerator, self).__init__()
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
pre_n_blocks = int(n_blocks * 0.5)
post_n_blocks = n_blocks - pre_n_blocks
comp_model = [nn.ReflectionPad2d(3),
nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
norm_layer(ngf),
nn.ReLU(True)]
for i in range(n_downsampling): # add downsampling layers
mult = 2 ** i
comp_model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
norm_layer(ngf * mult * 2),
nn.ReLU(True)]
mult = 2 ** n_downsampling
for i in range(pre_n_blocks): # add ResNet blocks
comp_model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
self.comp_model = nn.Sequential(*comp_model)
def forward(self, input):
"""Standard forward"""
return self.comp_model(input)
class UpsamplingResnetBranchGenerator(nn.Module):
"""Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style)
"""
def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', n_downsampling=2):
"""Construct a Resnet-based generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers
n_blocks (int) -- the number of ResNet blocks
padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
"""
assert(n_blocks >= 0)
super(UpsamplingResnetBranchGenerator, self).__init__()
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
pre_n_blocks = int(n_blocks * 0.5)
post_n_blocks = n_blocks - pre_n_blocks
upsam_branch_model = []
mult = 2 ** n_downsampling
for i in range(post_n_blocks): # add ResNet blocks
upsam_branch_model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
for i in range(n_downsampling): # add upsampling layers
mult = 2 ** (n_downsampling - i)
upsam_branch_model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
kernel_size=3, stride=2,
padding=1, output_padding=1,
bias=use_bias),
norm_layer(int(ngf * mult / 2)),
nn.ReLU(True)]
upsam_branch_model += [nn.ReflectionPad2d(3)]
upsam_branch_model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
upsam_branch_model += [nn.Tanh()]
self.upsam_branch_model = nn.Sequential(*upsam_branch_model)
def forward(self, input):
"""Standard forward"""
return self.upsam_branch_model(input)
class LabelBranchGenerator(nn.Module):
"""Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style)
"""
def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect', n_downsampling=2):
"""Construct a Resnet-based generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers
n_blocks (int) -- the number of ResNet blocks
padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
"""
assert(n_blocks >= 0)
super(LabelBranchGenerator, self).__init__()
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
mult = 2 ** n_downsampling
n_downsampling_to_one = 7 - n_downsampling
        height_delta_branch_model = []
        input_channel = ngf * mult
        for i in range(n_downsampling_to_one):  # add downsampling layers
            height_delta_branch_model += [nn.Conv2d(input_channel, output_nc, kernel_size=3, stride=2, padding=1, bias=use_bias),
                                          norm_layer(output_nc),
                                          nn.ReLU(True)]
            input_channel = output_nc
        height_delta_branch_model += [nn.Conv2d(input_channel, output_nc, kernel_size=3, stride=2, padding=1, bias=use_bias),
                                      nn.Sigmoid(),
                                      nn.Flatten()]
        self.height_delta_branch_model = nn.Sequential(*height_delta_branch_model)

    def forward(self, input):
        """Standard forward"""
        return self.height_delta_branch_model(input)
class LeakReluResnetGenerator(nn.Module):
"""Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.
We adapt Torch code and idea from Justin Johnson's neural style transfer project(https://github.com/jcjohnson/fast-neural-style)
"""
def __init__(self, input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type='reflect'):
"""Construct a Resnet-based generator
Parameters:
input_nc (int) -- the number of channels in input images
output_nc (int) -- the number of channels in output images
ngf (int) -- the number of filters in the last conv layer
norm_layer -- normalization layer
use_dropout (bool) -- if use dropout layers
n_blocks (int) -- the number of ResNet blocks
padding_type (str) -- the name of padding layer in conv layers: reflect | replicate | zero
"""
assert(n_blocks >= 0)
super(LeakReluResnetGenerator, self).__init__()
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
model = [nn.ReflectionPad2d(3),
nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=use_bias),
norm_layer(ngf),
nn.LeakyReLU(0.2, True)]
n_downsampling = 2
for i in range(n_downsampling): # add downsampling layers
mult = 2 ** i
model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, bias=use_bias),
norm_layer(ngf * mult * 2),
nn.LeakyReLU(0.2, True)]
mult = 2 ** n_downsampling
for i in range(n_blocks): # add ResNet blocks
model += [ResnetBlock(ngf * mult, padding_type=padding_type, norm_layer=norm_layer, use_dropout=use_dropout, use_bias=use_bias)]
for i in range(n_downsampling): # add upsampling layers
mult = 2 ** (n_downsampling - i)
model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),
kernel_size=3, stride=2,
padding=1, output_padding=1,
bias=use_bias),
norm_layer(int(ngf * mult / 2)),
nn.LeakyReLU(0.2, True)]
model += [nn.ReflectionPad2d(3)]
model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
model += [nn.LeakyReLU(0.2, True)]
self.model = nn.Sequential(*model)
def forward(self, input):
"""Standard forward"""
return self.model(input)
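Unlike `UpsamplingResnetBranchGenerator` above, which ends in `nn.Tanh()` and bounds its output to [-1, 1], this generator ends in `nn.LeakyReLU(0.2, True)`, so negative activations are only scaled by 0.2, not squashed. The activation itself is easy to sketch:

```python
def leaky_relu(x, negative_slope=0.2):
    """LeakyReLU as used throughout these generators."""
    return x if x >= 0 else negative_slope * x

print(leaky_relu(3.0), leaky_relu(-3.0))  # 3.0 -0.6
```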
# src/ability_scores.py (hmisee/BoardGames, MIT)
class AbilityScores:
def __init__(self, strength):
self.strength = strength
def getStrengthMod(self):
        return (self.strength - 10) // 2  # floor division: ability modifiers are whole numbers
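`getStrengthMod` computes the standard tabletop ability modifier, floor((score - 10) / 2). A self-contained sketch of the same rule as a free function (our name, mirroring the method):

```python
def ability_modifier(score):
    """Tabletop-style ability modifier: floor((score - 10) / 2)."""
    return (score - 10) // 2

print([ability_modifier(s) for s in (8, 10, 15, 20)])  # [-1, 0, 2, 5]
```

Python's `//` floors toward negative infinity, which is exactly the tabletop convention (a score of 9 gives -1, not 0).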
# test/test3.py (AmirHKiani98/betweenness, MIT)
print(("{:b}".format(3)).count("1"))
| 18.5 | 36 | 0.513514 | 6 | 37 | 3.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.027027 | 37 | 1 | 37 | 37 | 0.472222 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
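The snippet formats 3 in binary (`'11'`) and counts the `'1'` characters — a population count. Wrapped as a reusable function (on Python 3.10+, `int.bit_count()` does the same natively):

```python
def popcount(n: int) -> int:
    # Number of set bits in n's binary representation
    return bin(n).count("1")

print(popcount(3))    # '11' has two set bits -> 2
print(popcount(255))  # '11111111' -> 8
```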
# randomwordz/__init__.py — noyoshi/randomwordz (BSD-3-Clause)
from randomwordz.words import WordGenerator
# xgtest.py — PALYmyGXG/tensoflow_flask_httpx_id48_project (Apache-2.0)
import psutil

# Report system memory usage (%) and per-CPU utilization sampled over 1 second
print(psutil.virtual_memory().percent)
print(psutil.cpu_percent(1, True))
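`psutil` is a third-party dependency; on POSIX systems the standard library exposes a coarser load metric that can serve as a fallback. A sketch, assuming a POSIX host where `os.getloadavg` is available:

```python
import os

# 1-, 5-, and 15-minute system load averages (POSIX only)
one, five, fifteen = os.getloadavg()
print(f"load averages: {one:.2f} {five:.2f} {fifteen:.2f}")

# Normalize by core count for a rough utilization estimate
cores = os.cpu_count() or 1
print(f"approx utilization: {one / cores:.1%}")
```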
# plwordnet/__init__.py — oskar-j/plwordnet-reader (MIT)
from plwordnet.plwordnet import PolishWordnet
from plwordnet.dataset import PolishWordnetDataset
from plwordnet.engine import PolishWordnetEngine
# __init__.py — xegepa/Twitch-Api-Py (MIT)
from TwitchApiPy.TwitchApiPy import TwitchApiPy
# example_project/some_modules/third_modules/a165.py — Yuriy-Leonov/cython_imports_limit_issue (MIT)
class A165:
    pass
# players/__init__.py — madisonmussari/mcts_kds (MIT)
from .human_player import *
from .mcts_player import *
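Wildcard imports like these re-export whatever each submodule chooses to expose; defining `__all__` in the submodule keeps the package namespace deliberate. A small stdlib-only demonstration of that mechanism (module and class names are hypothetical, not from this repository):

```python
import types

# Hypothetical submodule resembling players/human_player.py
src = types.ModuleType("human_player")
exec(
    "__all__ = ['HumanPlayer']\n"
    "class HumanPlayer: pass\n"
    "class _Helper: pass\n",
    src.__dict__,
)

# 'from human_player import *' copies only the names listed in __all__
exported = {name: getattr(src, name) for name in src.__all__}
print(sorted(exported))  # ['HumanPlayer']
```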
# retro_star/model/__init__.py — cthoyt/retro_star (MIT)
from retro_star.model.value_mlp import ValueMLP