hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1cc1456234d268ec5519fd7d40d1486a56173d80 | 23 | py | Python | scholar/__init__.py | toni-heittola/pelican-btex | 788ab934efcba4edf238237ad9fff8f489d685b7 | [
"MIT"
] | 5 | 2016-11-13T10:24:28.000Z | 2019-08-05T05:03:50.000Z | scholar/__init__.py | toni-heittola/pelican-btex | 788ab934efcba4edf238237ad9fff8f489d685b7 | [
"MIT"
] | null | null | null | scholar/__init__.py | toni-heittola/pelican-btex | 788ab934efcba4edf238237ad9fff8f489d685b7 | [
"MIT"
] | null | null | null | from .scholar import *
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ce63a27207a8c7b718dc3352e66c55d28fc7d71 | 5,579 | py | Python | set_up_grasp_models/tests/test_set_up_model.py | martamatos/set_up_grasp_models | 0028f063c41104e3c0404956aa225e76aa6ac983 | [
"MIT"
] | null | null | null | set_up_grasp_models/tests/test_set_up_model.py | martamatos/set_up_grasp_models | 0028f063c41104e3c0404956aa225e76aa6ac983 | [
"MIT"
] | 5 | 2019-05-14T17:05:41.000Z | 2019-05-29T13:17:11.000Z | set_up_grasp_models/tests/test_set_up_model.py | martamatos/set_up_grasp_models | 0028f063c41104e3c0404956aa225e76aa6ac983 | [
"MIT"
] | null | null | null | import os
import unittest
from unittest.mock import patch
import pandas as pd
import pickle
from set_up_grasp_models.set_up_models.set_up_model import set_up_model
class TestSetUpModel(unittest.TestCase):
def setUp(self):
this_dir, this_filename = os.path.split(__file__)
self.test_folder = os.path.join(this_dir, 'test_files', 'test_set_up_models', 'set_up_model')
self.file_in_stoic = os.path.join(self.test_folder, 'model_with_PPP_plaintext.txt')
def test_set_up_model_empty_base(self):
true_res = pd.read_excel(os.path.join(self.test_folder, 'true_res_model_v1.xlsx'), sheet_name=None)
general_file = os.path.join(self.test_folder, '..', '..', '..', '..', '..', 'base_files', 'GRASP_general.xlsx')
model_name = 'model_v1'
file_out = os.path.join(self.test_folder, model_name + '.xlsx')
set_up_model(model_name, self.file_in_stoic, general_file, file_out)
res = pd.read_excel(os.path.join(self.test_folder, model_name + '.xlsx'), sheet_name=None)
self.assertListEqual(list(true_res.keys()), list(res.keys()))
for key in true_res:
print(key)
self.assertTrue(true_res[key].equals(res[key]))
def test_set_up_model_empty_base_error(self):
general_file = os.path.join(self.test_folder, 'GRASP_general_error.xlsx')
model_name = 'model_v1'
file_out = os.path.join(self.test_folder, 'model_v1.xlsx')
with self.assertRaises(KeyError) as context:
set_up_model(model_name, self.file_in_stoic, general_file, file_out)
        self.assertTrue(
            f'The base excel file {general_file} must contain a sheet named \'general\'' in str(context.exception))
def test_set_up_model_not_empty_base(self):
true_res = pd.read_excel(os.path.join(self.test_folder, 'true_res_model_v2.xlsx'), sheet_name=None)
general_file = os.path.join(self.test_folder, 'model_v1_manual2_EX.xlsx')
model_name = 'model_v2'
file_out = os.path.join(self.test_folder, model_name + '.xlsx')
set_up_model(model_name, self.file_in_stoic, general_file, file_out)
res = pd.read_excel(os.path.join(self.test_folder, model_name + '.xlsx'), sheet_name=None)
#with open(os.path.join(self.test_folder, 'true_res_model_v2.pkl'), 'wb') as handle:
# pickle.dump(res, handle)
self.assertListEqual(list(true_res.keys()), list(res.keys()))
for key in true_res:
print(key)
#if key != 'measRates':
self.assertTrue(true_res[key].equals(res[key]))
def test_set_up_model_not_empty_base_equilibrator(self):
with open(os.path.join(self.test_folder, 'true_res_model_v3.pkl'), 'rb') as f_in:
true_res = pickle.load(f_in)
general_file = os.path.join(self.test_folder, 'model_v1_manual2_EX.xlsx')
model_name = 'model_v3'
file_out = os.path.join(self.test_folder, model_name + '.xlsx')
with patch('builtins.input', side_effect=['']):
set_up_model(model_name, self.file_in_stoic, general_file, file_out, use_equilibrator=True)
res = pd.read_excel(os.path.join(self.test_folder, model_name + '.xlsx'), sheet_name=None)
#with open(os.path.join(self.test_folder, 'true_res_model_v3.pkl'), 'wb') as handle:
# pickle.dump(res, handle)
self.assertListEqual(list(true_res.keys()), list(res.keys()))
for key in true_res:
print(key)
self.assertTrue(true_res[key].equals(res[key]))
def test_set_up_model_not_empty_base_mets_file(self):
with open(os.path.join(self.test_folder, 'true_res_model_v4.pkl'), 'rb') as f_in:
true_res = pickle.load(f_in)
general_file = os.path.join(self.test_folder, 'model_v1_manual2_EX.xlsx')
file_in_mets_conc = os.path.join(self.test_folder, 'met_concs.xlsx')
model_name = 'model_v4'
file_out = os.path.join(self.test_folder, model_name + '.xlsx')
with patch('builtins.input', side_effect=['']):
set_up_model(model_name, self.file_in_stoic, general_file, file_out,
file_in_mets_conc=file_in_mets_conc)
res = pd.read_excel(os.path.join(self.test_folder, model_name + '.xlsx'), sheet_name=None)
#with open(os.path.join(self.test_folder, 'true_res_model_v4.pkl'), 'wb') as handle:
# pickle.dump(res, handle)
self.assertListEqual(list(true_res.keys()), list(res.keys()))
for key in true_res:
print(key)
self.assertTrue(true_res[key].equals(res[key]))
def test_set_up_model_not_empty_base_rxns_file(self):
true_res = pd.read_excel(os.path.join(self.test_folder, 'true_res_model_v5.xlsx'), sheet_name=None)
general_file = os.path.join(self.test_folder, 'model_v1_manual2_EX2.xlsx')
file_in_rxn_fluxes = os.path.join(self.test_folder, 'flux_file_rows.xlsx')
model_name = 'model_v5'
file_out = os.path.join(self.test_folder, model_name + '.xlsx')
with patch('builtins.input', side_effect=['']):
set_up_model(model_name, self.file_in_stoic, general_file, file_out,
file_in_meas_fluxes=file_in_rxn_fluxes, fluxes_orient='rows')
res = pd.read_excel(os.path.join(self.test_folder, model_name + '.xlsx'), sheet_name=None)
self.assertListEqual(list(true_res.keys()), list(res.keys()))
for key in true_res:
print(key)
self.assertTrue(true_res[key].equals(res[key]))
| 43.248062 | 120 | 0.669475 | 837 | 5,579 | 4.127838 | 0.123059 | 0.052098 | 0.117511 | 0.113459 | 0.80521 | 0.80521 | 0.779161 | 0.763242 | 0.746744 | 0.746744 | 0 | 0.005392 | 0.202187 | 5,579 | 128 | 121 | 43.585938 | 0.770838 | 0.063273 | 0 | 0.542169 | 0 | 0 | 0.113432 | 0.049243 | 0 | 0 | 0 | 0 | 0.144578 | 1 | 0.084337 | false | 0 | 0.072289 | 0 | 0.168675 | 0.060241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c771aff4232d23b8d12ee0f78dab7ef45dba319 | 20 | py | Python | randomstate/prng/pcg32/__init__.py | bashtage/ng-numpy-randomstate | b397db9cb8688b291fc40071ab043009dfa05a85 | [
"Apache-2.0",
"BSD-3-Clause"
] | 43 | 2016-02-11T03:38:16.000Z | 2022-02-03T10:00:15.000Z | randomstate/prng/pcg32/__init__.py | bashtage/pcg-python | b397db9cb8688b291fc40071ab043009dfa05a85 | [
"Apache-2.0",
"BSD-3-Clause"
] | 31 | 2015-12-26T19:47:36.000Z | 2018-12-10T15:55:46.000Z | randomstate/prng/pcg32/__init__.py | bashtage/ng-numpy-randomstate | b397db9cb8688b291fc40071ab043009dfa05a85 | [
"Apache-2.0",
"BSD-3-Clause"
] | 11 | 2016-04-28T02:00:38.000Z | 2020-08-07T10:33:10.000Z | from .pcg32 import * | 20 | 20 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0.15 | 20 | 1 | 20 | 20 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98c06dadf0d0ae5abbfca513b2b063a57e779ce2 | 620 | py | Python | sdk/python/pulumi_azure/datafactory/__init__.py | kenny-wealth/pulumi-azure | e57e3a81f95bf622e7429c53f0bff93e33372aa1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/datafactory/__init__.py | kenny-wealth/pulumi-azure | e57e3a81f95bf622e7429c53f0bff93e33372aa1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/datafactory/__init__.py | kenny-wealth/pulumi-azure | e57e3a81f95bf622e7429c53f0bff93e33372aa1 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
# Export this package's modules as members:
from .factory import *
from .dataset_mysql import *
from .dataset_postgresql import *
from .dataset_sql_server_table import *
from .integration_runtime_managed import *
from .linked_service_data_lake_storage_gen2 import *
from .linked_service_mysql import *
from .linked_service_postgresql import *
from .linked_service_sql_server import *
from .pipeline import *
from .get_factory import *
| 36.470588 | 87 | 0.783871 | 90 | 620 | 5.177778 | 0.588889 | 0.214592 | 0.137339 | 0.197425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003745 | 0.13871 | 620 | 16 | 88 | 38.75 | 0.868914 | 0.353226 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c727ab7314b6a50cceaee5a0a5dd6fc60055d251 | 135 | py | Python | src/__init__.py | Yang-33/vjudge-atcoder-submitID | 5b87594322a337e6acb25c84470d273427413445 | [
"MIT"
] | null | null | null | src/__init__.py | Yang-33/vjudge-atcoder-submitID | 5b87594322a337e6acb25c84470d273427413445 | [
"MIT"
] | 3 | 2018-02-07T16:35:27.000Z | 2018-02-07T17:47:36.000Z | src/__init__.py | Yang-33/vjudge-atcoder-submitID | 5b87594322a337e6acb25c84470d273427413445 | [
"MIT"
] | null | null | null | from . import GetSubmitID
from . import PrintIDs
from . import GetFromfileURL
from . import InputSTDIN
from . import PrintColor
| 19.285714 | 29 | 0.762963 | 15 | 135 | 6.866667 | 0.466667 | 0.485437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 135 | 6 | 30 | 22.5 | 0.953704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7b5957d881fb700bfd4cba0b8d26b1aa982e4b1 | 35,945 | py | Python | python/test/ml_ops/cconv_python.py | leomariga/Open3D | d197339fcd29ad0803a182ef8953d89e563f94d7 | [
"MIT"
] | 1 | 2021-06-27T22:04:38.000Z | 2021-06-27T22:04:38.000Z | python/test/ml_ops/cconv_python.py | leomariga/Open3D | d197339fcd29ad0803a182ef8953d89e563f94d7 | [
"MIT"
] | null | null | null | python/test/ml_ops/cconv_python.py | leomariga/Open3D | d197339fcd29ad0803a182ef8953d89e563f94d7 | [
"MIT"
] | null | null | null | # ----------------------------------------------------------------------------
# - Open3D: www.open3d.org -
# ----------------------------------------------------------------------------
# The MIT License (MIT)
#
# Copyright (c) 2018-2021 www.open3d.org
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
# ----------------------------------------------------------------------------
"""This is a python implementation for the continuous convolutions meant for
debugging and testing the C code.
"""
import numpy as np
# interpolation
LINEAR = 1
NEAREST_NEIGHBOR = 2
LINEAR_BORDER = 3
# coordinate mapping
IDENTITY = 4
BALL_TO_CUBE_RADIAL = 5
BALL_TO_CUBE_VOLUME_PRESERVING = 6
# windows
RECTANGLE = 7
TRAPEZOID = 8
POLY = 9
_convert_parameter_str_dict = {
'linear': LINEAR,
'linear_border': LINEAR_BORDER,
'nearest_neighbor': NEAREST_NEIGHBOR,
'identity': IDENTITY,
'ball_to_cube_radial': BALL_TO_CUBE_RADIAL,
'ball_to_cube_volume_preserving': BALL_TO_CUBE_VOLUME_PRESERVING,
}
def map_cube_to_cylinder(points, inverse=False):
"""maps a cube to a cylinder and vice versa
The input and output range of the coordinates is [-1,1]. The cylinder axis
is along z.
points: numpy array with shape [n,3]
inverse: If True apply the inverse transform: cylinder -> cube
"""
assert points.ndim == 2
assert points.shape[1] == 3
# yapf: disable
result = np.empty_like(points)
if inverse:
for i, p in enumerate(points):
x, y, z = p
if np.allclose(p[0:2], np.zeros_like(p[0:2])):
result[i] = (0,0,z)
elif np.abs(y) <= x and x > 0:
result[i] = (np.sqrt(x*x+y*y), 4/np.pi *np.sqrt(x*x+y*y)*np.arctan(y/x), z)
elif np.abs(y) <= -x and x < 0:
result[i] = (-np.sqrt(x*x+y*y), -4/np.pi *np.sqrt(x*x+y*y)*np.arctan(y/x), z)
elif np.abs(x) <= y and y > 0:
result[i] = (4/np.pi *np.sqrt(x*x+y*y)*np.arctan(x/y), np.sqrt(x*x+y*y), z)
else: # elif np.abs(x) <= -y and y < 0:
result[i] = (-4/np.pi *np.sqrt(x*x+y*y)*np.arctan(x/y), -np.sqrt(x*x+y*y), z)
else:
for i, p in enumerate(points):
x, y, z = p
if np.count_nonzero(p[0:2]) == 0:
result[i] = (0,0,z)
elif np.abs(y) <= np.abs(x):
result[i] = (x*np.cos(y/x*np.pi/4), x*np.sin(y/x*np.pi/4), z)
else:
result[i] = (y*np.sin(x/y*np.pi/4), y*np.cos(x/y*np.pi/4), z)
return result
# yapf: enable
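def _demo_cube_cylinder_roundtrip():
    """Illustrative sanity check, not part of the original file: mapping
    cube -> cylinder and back with inverse=True should reproduce the input
    up to floating point error, and the cylinder coordinates stay in
    [-1,1]."""
    rng = np.random.RandomState(0)
    cube_points = rng.uniform(-1, 1, size=(100, 3))
    cylinder_points = map_cube_to_cylinder(cube_points)
    restored = map_cube_to_cylinder(cylinder_points, inverse=True)
    assert np.allclose(cube_points, restored, atol=1e-6)
    assert np.all(np.abs(cylinder_points) <= 1 + 1e-6)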
def map_cylinder_to_sphere(points, inverse=False):
"""maps a cylinder to a sphere and vice versa.
The input and output range of the coordinates is [-1,1]. The cylinder axis
is along z.
points: numpy array with shape [n,3]
inverse: If True apply the inverse transform: sphere -> cylinder
"""
assert points.ndim == 2
assert points.shape[1] == 3
# yapf: disable
result = np.empty_like(points)
if inverse:
for i, p in enumerate(points):
x, y, z = p
t = np.linalg.norm(p, ord=2)
if np.allclose(p, np.zeros_like(p)):
result[i] = 0,0,0
elif 5/4*z**2 > (x**2 + y**2):
s, z = np.sqrt(3*t/(t+np.abs(z))), np.sign(z)*t
result[i] = s*x, s*y, z
else: # elif 5/4*z**2 <= (x**2 + y**2):
s, z = t/np.sqrt(x*x+y*y), 3/2*z
result[i] = s*x, s*y, z
else:
for i, p in enumerate(points):
x, y, z = p
if np.allclose(p, np.zeros_like(p)):
result[i] = 0,0,0
elif z*z >= x*x + y*y:
result[i] = ( x*np.sqrt(2/3-(x*x+y*y)/(9*z*z)),
y*np.sqrt(2/3-(x*x+y*y)/(9*z*z)),
z-(x*x+y*y)/(3*z) )
else:
result[i] = ( x*np.sqrt(1-(4*z*z)/(9*(x*x+y*y))),
y*np.sqrt(1-(4*z*z)/(9*(x*x+y*y))),
2*z/3 )
return result
# yapf: enable
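def _demo_ball_to_cube_volume_preserving():
    """Illustrative sketch, not part of the original file: composing the two
    inverse mappings (sphere -> cylinder -> cube) sends points in the unit
    ball into the unit cube, which is exactly how
    BALL_TO_CUBE_VOLUME_PRESERVING is used in compute_filter_coordinates
    below."""
    rng = np.random.RandomState(0)
    points = rng.uniform(-1, 1, size=(200, 3))
    ball_points = points[np.linalg.norm(points, axis=1) <= 1]
    cube_points = map_cube_to_cylinder(map_cylinder_to_sphere(ball_points,
                                                              inverse=True),
                                       inverse=True)
    assert np.all(np.abs(cube_points) <= 1 + 1e-6)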
def compute_filter_coordinates(pos, filter_xyz_size, inv_extents, offset,
align_corners, mapping):
"""Computes the filter coordinates for a single point
The input to this function are coordinates relative to the point where the
convolution is evaluated. Coordinates are usually in the range
[-extent/2,extent/2] with extent as the edge length of the bounding box of
the filter shape. The output is a coordinate within the filter array, i.e.
the range is [0, filter_size.xyz], if the point was inside the filter shape.
The simplest filter shape is a cuboid (mapping=IDENTITY) and the
transformation is simply [-extent/2,extent/2] -> [0, filter_size.xyz].
The other type of shape that is implemented is a sphere with
mapping=BALL_TO_CUBE_RADIAL or mapping=BALL_TO_CUBE_VOLUME_PRESERVING.
pos: A single 3D point. An array of shape [3] with x,y,z coordinates.
filter_xyz_size: An array of shape [3], which defines the size of the filter
array for the spatial dimensions.
inv_extents: An array of shape [3], which defines the spatial extent of the
filter. The values are the reciprocal of the spatial extent
for x,y and z.
offset: An array of shape [3]. An offset for shifting the center. Can be
used to implement discrete filters with even filter size.
align_corners: If True then the voxel centers of the outer voxels
of the filter array are mapped to the boundary of the filter shape.
If false then the boundary of the filter array is mapped to the
boundary of the filter shape.
mapping: The mapping that is applied to the input coordinates.
- BALL_TO_CUBE_RADIAL uses radial stretching to map a sphere to
a cube.
- BALL_TO_CUBE_VOLUME_PRESERVING is using a more expensive volume
preserving mapping to map a sphere to a cube.
- IDENTITY no mapping is applied to the coordinates.
"""
assert pos.ndim == 1
assert pos.shape[0] == 3
assert filter_xyz_size.ndim == 1
assert all(filter_xyz_size.shape)
assert inv_extents.ndim == 1
assert inv_extents.shape[0] == 3
assert offset.ndim == 1
assert offset.shape[0] == 3
p = pos.copy()
if mapping == BALL_TO_CUBE_RADIAL:
p *= 2 * inv_extents # p is now a position in a sphere with radius 1
abs_max = np.max(np.abs(p))
if abs_max < 1e-8:
p = np.zeros_like(p)
else:
# map to the unit cube with edge length 1 and range [-0.5,0.5]
p *= 0.5 * np.sqrt(np.sum(p * p)) / abs_max
elif mapping == BALL_TO_CUBE_VOLUME_PRESERVING:
p *= 2 * inv_extents
p = 0.5 * map_cube_to_cylinder(map_cylinder_to_sphere(p[np.newaxis, :],
inverse=True),
inverse=True)[0]
elif mapping == IDENTITY:
# map to the unit cube with edge length 1 and range [-0.5,0.5]
p *= inv_extents
else:
raise ValueError("Unknown mapping")
if align_corners:
p += 0.5
p *= filter_xyz_size - 1
    else:
        p *= filter_xyz_size
        p += offset
        # integer div
        p += filter_xyz_size // 2
        if filter_xyz_size[0] % 2 == 0:
            p[0] -= 0.5
        if filter_xyz_size[1] % 2 == 0:
            p[1] -= 0.5
        if filter_xyz_size[2] % 2 == 0:
            p[2] -= 0.5
return p
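def _demo_compute_filter_coordinates():
    """Illustrative call, not part of the original file: with the identity
    mapping, align_corners=True, a 3x3x3 filter and an extent of 2 the
    filter shape spans [-1,1] per axis, so the point at the center of the
    shape should land on the central voxel, i.e. close to [1,1,1]."""
    return compute_filter_coordinates(np.zeros(3), np.array([3, 3, 3]),
                                      np.full(3, 0.5), np.zeros(3), True,
                                      IDENTITY)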
def window_function(pos, inv_extents, window, window_params):
"""Implements 3 types of window functions
pos: A single 3D point. An array of shape [3] with x,y,z coordinates.
inv_extents: An array of shape [3], which defines the spatial extent of the
filter. The values are the reciprocal of the spatial extent
for x,y and z.
    window: The window type. Allowed types are
        -RECTANGLE, which simply returns 1 everywhere.
        -TRAPEZOID, a /‾\-shaped window with a plateau of 1 at the center
         that decays linearly to 0 at the borders.
        -POLY, the poly6 window.
    window_params: array with parameters for the windows.
        Only TRAPEZOID uses this to define the normalized distance
        from the center at which the linear decay starts.
"""
assert pos.ndim == 1
assert pos.shape[0] == 3
assert inv_extents.ndim == 1
assert inv_extents.shape[0] == 3
p = pos.copy()
if window == RECTANGLE:
return 1
elif window == TRAPEZOID:
p *= 2 * inv_extents # p is now a position in a sphere with radius 1
d = np.linalg.norm(p, ord=2)
d = np.clip(d, 0, 1)
# the window parameter defines the distance at which the value decreases
# linearly to 0
if d > window_params[0]:
return (1 - d) / (1 - window_params[0])
else:
return 1
elif window == POLY:
p *= 2 * inv_extents # p is now a position in a sphere with radius 1
r_sqr = np.sum(p * p)
return np.clip((1 - r_sqr)**3, 0, 1)
else:
raise ValueError("Unknown window type")
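def _demo_window_function():
    """Illustrative check, not part of the original file: the poly6 window
    is 1 at the filter center and 0 at the boundary of the filter shape
    (extent 2 here, so the boundary is at distance 1)."""
    inv_extents = np.full(3, 0.5)
    assert window_function(np.zeros(3), inv_extents, POLY, None) == 1.0
    assert window_function(np.array([1.0, 0.0, 0.0]), inv_extents, POLY,
                           None) == 0.0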
def interpolate(xyz, xyz_size, interpolation):
""" Computes interpolation weights and indices
xyz: A single 3D point.
xyz_size: An array of shape [3], which defines the size of the filter
array for the spatial dimensions.
interpolation: One of LINEAR, LINEAR_BORDER, NEAREST_NEIGHBOR.
LINEAR is trilinear interpolation with coordinate clamping.
LINEAR_BORDER uses a zero border if outside the range.
        NEAREST_NEIGHBOR uses the nearest neighbor instead of interpolating.
Returns a tuple with the interpolation weights and the indices
"""
# yapf: disable
if interpolation == NEAREST_NEIGHBOR:
pi = np.round(xyz).astype(np.int32)
pi = np.clip(pi, np.zeros_like(pi), xyz_size-1)
idx = pi[2]*xyz_size[0]*xyz_size[1] + pi[1]*xyz_size[0] + pi[0]
return (1,), ((pi[2],pi[1],pi[0]),)
elif interpolation == LINEAR_BORDER:
pi0 = np.floor(xyz).astype(np.int32)
pi1 = pi0+1
a = xyz[0]-pi0[0]
b = xyz[1]-pi0[1]
c = xyz[2]-pi0[2]
w = ((1-a)*(1-b)*(1-c),
(a)*(1-b)*(1-c),
(1-a)*(b)*(1-c),
(a)*(b)*(1-c),
(1-a)*(1-b)*(c),
(a)*(1-b)*(c),
(1-a)*(b)*(c),
(a)*(b)*(c))
idx=((pi0[2], pi0[1], pi0[0]),
(pi0[2], pi0[1], pi1[0]),
(pi0[2], pi1[1], pi0[0]),
(pi0[2], pi1[1], pi1[0]),
(pi1[2], pi0[1], pi0[0]),
(pi1[2], pi0[1], pi1[0]),
(pi1[2], pi1[1], pi0[0]),
(pi1[2], pi1[1], pi1[0]))
w_idx = []
for w_, idx_ in zip(w,idx):
if np.any(np.array(idx_) < 0) or idx_[0] >= xyz_size[2] or idx_[1] >= xyz_size[1] or idx_[2] >= xyz_size[0]:
w_idx.append((0.0, (0,0,0)))
else:
w_idx.append((w_,idx_))
w, idx = zip(*w_idx)
return w, idx
elif interpolation == LINEAR:
pi0 = np.clip(xyz.astype(np.int32), np.zeros_like(xyz, dtype=np.int32), xyz_size-1)
pi1 = np.clip(pi0+1, np.zeros_like(pi0), xyz_size-1)
a = xyz[0]-pi0[0]
b = xyz[1]-pi0[1]
c = xyz[2]-pi0[2]
a = np.clip(a, 0, 1)
b = np.clip(b, 0, 1)
c = np.clip(c, 0, 1)
w = ((1-a)*(1-b)*(1-c),
(a)*(1-b)*(1-c),
(1-a)*(b)*(1-c),
(a)*(b)*(1-c),
(1-a)*(1-b)*(c),
(a)*(1-b)*(c),
(1-a)*(b)*(c),
(a)*(b)*(c))
idx=((pi0[2], pi0[1], pi0[0]),
(pi0[2], pi0[1], pi1[0]),
(pi0[2], pi1[1], pi0[0]),
(pi0[2], pi1[1], pi1[0]),
(pi1[2], pi0[1], pi0[0]),
(pi1[2], pi0[1], pi1[0]),
(pi1[2], pi1[1], pi0[0]),
(pi1[2], pi1[1], pi1[0]))
return w, idx
else:
raise ValueError("Unknown interpolation mode")
# yapf: enable
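def _demo_interpolate():
    """Illustrative check, not part of the original file: LINEAR returns 8
    trilinear weights that sum to 1 together with the (z,y,x) indices of
    the 8 surrounding voxels."""
    w, idx = interpolate(np.array([0.25, 1.5, 0.75]), np.array([3, 3, 3]),
                         LINEAR)
    assert len(w) == 8 and len(idx) == 8
    assert np.isclose(sum(w), 1.0)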
def cconv(filter, out_positions, extent, offset, inp_positions, inp_features,
inp_importance, neighbors_index, neighbors_importance,
neighbors_row_splits, align_corners, coordinate_mapping, normalize,
interpolation, **kwargs):
""" Computes the output features of a continuous convolution.
filter: 5D filter array with shape [depth,height,width,inp_ch, out_ch]
out_positions: The positions of the output points. The shape is
[num_out, 3].
    extent: The spatial extents of the filter in coordinate units.
        This is a 2D array with shape [1,1] or [1,3] or [num_out,1]
        or [num_out,3].
offset: A single 3D vector used in the filter coordinate
computation. The shape is [3].
inp_positions: The positions of the input points. The shape is
[num_inp, 3].
inp_features: The input features with shape [num_inp, in_channels].
inp_importance: Optional importance for each input point with
shape [num_inp]. Set to np.array([]) to disable.
neighbors_index: The array with lists of neighbors for each
output point. The start and end of each sublist is defined by
neighbors_row_splits.
neighbors_importance: Optional importance for each entry in
neighbors_index. Set to np.array([]) to disable.
neighbors_row_splits: The prefix sum which defines the start
and end of the sublists in neighbors_index. The size of the
array is num_out + 1.
align_corners: If true then the voxel centers of the outer voxels
of the filter array are mapped to the boundary of the filter shape.
If false then the boundary of the filter array is mapped to the
boundary of the filter shape.
coordinate_mapping: The coordinate mapping function. One of
IDENTITY, BALL_TO_CUBE_RADIAL, BALL_TO_CUBE_VOLUME_PRESERVING.
normalize: If true then the result is normalized either by the
number of points (neighbors_importance is null) or by the sum of
the respective values in neighbors_importance.
interpolation: The interpolation mode. Either LINEAR or NEAREST_NEIGHBOR.
"""
assert filter.ndim == 5
assert all(filter.shape)
assert filter.shape[3] == inp_features.shape[-1]
assert out_positions.ndim == 2
assert extent.ndim == 2
assert extent.shape[0] == 1 or extent.shape[0] == out_positions.shape[0]
assert extent.shape[1] in (1, 3)
assert offset.ndim == 1 and offset.shape[0] == 3
assert inp_positions.ndim == 2
assert inp_positions.shape[0] == inp_features.shape[0]
assert inp_features.ndim == 2
assert inp_importance.ndim == 1
assert (inp_importance.shape[0] == 0 or
inp_importance.shape[0] == inp_positions.shape[0])
assert neighbors_importance.ndim == 1
assert (neighbors_importance.shape[0] == 0 or
neighbors_importance.shape[0] == neighbors_index.shape[0])
assert neighbors_index.ndim == 1
assert neighbors_row_splits.ndim == 1
assert neighbors_row_splits.shape[0] == out_positions.shape[0] + 1
coordinate_mapping = _convert_parameter_str_dict[
coordinate_mapping] if isinstance(coordinate_mapping,
str) else coordinate_mapping
interpolation = _convert_parameter_str_dict[interpolation] if isinstance(
interpolation, str) else interpolation
dtype = inp_features.dtype
num_out = out_positions.shape[0]
num_inp = inp_positions.shape[0]
in_channels = inp_features.shape[-1]
out_channels = filter.shape[-1]
inv_extent = 1 / np.broadcast_to(extent, out_positions.shape)
if inp_importance.shape[0] == 0:
inp_importance = np.ones([num_inp])
if neighbors_importance.shape[0] == 0:
neighbors_importance = np.ones(neighbors_index.shape)
filter_xyz_size = np.array(list(reversed(filter.shape[0:3])))
out_features = np.zeros((num_out, out_channels))
for out_idx, out_pos in enumerate(out_positions):
neighbors_start = neighbors_row_splits[out_idx]
neighbors_end = neighbors_row_splits[out_idx + 1]
outfeat = out_features[out_idx:out_idx + 1]
n_importance_sum = 0.0
for inp_idx, n_importance in zip(
neighbors_index[neighbors_start:neighbors_end],
neighbors_importance[neighbors_start:neighbors_end]):
inp_pos = inp_positions[inp_idx]
relative_pos = inp_pos - out_pos
coords = compute_filter_coordinates(relative_pos, filter_xyz_size,
inv_extent[out_idx], offset,
align_corners,
coordinate_mapping)
interp_w, interp_idx = interpolate(coords,
filter_xyz_size,
interpolation=interpolation)
n_importance_sum += n_importance
infeat = inp_features[inp_idx:inp_idx +
1] * inp_importance[inp_idx] * n_importance
filter_value = 0.0
for w, idx in zip(interp_w, interp_idx):
filter_value += w * filter[idx]
outfeat += infeat @ filter_value
if normalize:
if n_importance_sum != 0:
outfeat /= n_importance_sum
return out_features
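def _demo_cconv():
    """Minimal usage sketch with made-up values, not part of the original
    tests: one output point that gathers features from two input neighbors
    through a 3x3x3 filter with 2 input and 4 output channels."""
    filt = np.ones([3, 3, 3, 2, 4])
    out_pos = np.zeros([1, 3])
    inp_pos = np.array([[0.2, 0.0, 0.0], [0.0, -0.2, 0.0]])
    inp_feat = np.ones([2, 2])
    out_feat = cconv(filt, out_pos, np.array([[1.0]]), np.zeros(3), inp_pos,
                     inp_feat, np.array([]), np.array([0, 1]), np.array([]),
                     np.array([0, 2]), align_corners=True,
                     coordinate_mapping='ball_to_cube_radial', normalize=True,
                     interpolation='linear')
    assert out_feat.shape == (1, 4)
    return out_feat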
def cconv_backprop_filter(filter, out_positions, extent, offset, inp_positions,
inp_features, inp_importance, neighbors_index,
neighbors_importance, neighbors_row_splits,
out_features_gradient, align_corners,
coordinate_mapping, normalize, interpolation,
**kwargs):
"""This implements the backprop to the filter weights for the cconv.
out_features_gradient: An array with the gradient for the outputs of the
cconv in the forward pass.
See cconv for more info about the parameters.
"""
assert filter.ndim == 5
assert all(filter.shape)
assert filter.shape[3] == inp_features.shape[-1]
assert out_positions.ndim == 2
assert extent.ndim == 2
assert extent.shape[0] == 1 or extent.shape[0] == out_positions.shape[0]
assert extent.shape[1] in (1, 3)
assert offset.ndim == 1 and offset.shape[0] == 3
assert inp_positions.ndim == 2
assert inp_positions.shape[0] == inp_features.shape[0]
assert inp_features.ndim == 2
assert inp_importance.ndim == 1
assert (inp_importance.shape[0] == 0 or
inp_importance.shape[0] == inp_positions.shape[0])
assert neighbors_importance.ndim == 1
assert (neighbors_importance.shape[0] == 0 or
neighbors_importance.shape[0] == neighbors_index.shape[0])
assert neighbors_index.ndim == 1
assert neighbors_row_splits.ndim == 1
assert neighbors_row_splits.shape[0] == out_positions.shape[0] + 1
coordinate_mapping = _convert_parameter_str_dict[
coordinate_mapping] if isinstance(coordinate_mapping,
str) else coordinate_mapping
interpolation = _convert_parameter_str_dict[interpolation] if isinstance(
interpolation, str) else interpolation
dtype = inp_features.dtype
num_out = out_positions.shape[0]
num_inp = inp_positions.shape[0]
in_channels = inp_features.shape[-1]
out_channels = filter.shape[-1]
inv_extent = 1 / np.broadcast_to(extent, out_positions.shape)
if inp_importance.shape[0] == 0:
inp_importance = np.ones([num_inp])
if neighbors_importance.shape[0] == 0:
neighbors_importance = np.ones(neighbors_index.shape)
filter_xyz_size = np.array(list(reversed(filter.shape[0:3])))
filter_backprop = np.zeros_like(filter)
for out_idx, out_pos in enumerate(out_positions):
neighbors_start = neighbors_row_splits[out_idx]
neighbors_end = neighbors_row_splits[out_idx + 1]
n_importance_sum = 1.0
if normalize:
n_importance_sum = 0.0
for inp_idx, n_importance in zip(
neighbors_index[neighbors_start:neighbors_end],
neighbors_importance[neighbors_start:neighbors_end]):
inp_pos = inp_positions[inp_idx]
relative_pos = inp_pos - out_pos
n_importance_sum += n_importance
normalizer = 1 / n_importance_sum if n_importance_sum != 0.0 else 1
outfeat_grad = normalizer * out_features_gradient[out_idx:out_idx + 1]
for inp_idx, n_importance in zip(
neighbors_index[neighbors_start:neighbors_end],
neighbors_importance[neighbors_start:neighbors_end]):
inp_pos = inp_positions[inp_idx]
relative_pos = inp_pos - out_pos
coords = compute_filter_coordinates(relative_pos, filter_xyz_size,
inv_extent[out_idx], offset,
align_corners,
coordinate_mapping)
interp_w, interp_idx = interpolate(coords,
filter_xyz_size,
interpolation=interpolation)
infeat = inp_features[inp_idx:inp_idx +
1] * inp_importance[inp_idx] * n_importance
for w, idx in zip(interp_w, interp_idx):
filter_backprop[idx] += w * (infeat.T @ outfeat_grad)
return filter_backprop
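def _demo_cconv_backprop_filter():
    """Illustrative finite-difference check with made-up inputs, not part of
    the original tests: since cconv is linear in the filter, the analytic
    filter gradient must match a central difference along a random
    direction."""
    rng = np.random.RandomState(1)
    args = dict(out_positions=np.zeros([1, 3]),
                extent=np.array([[1.0]]),
                offset=np.zeros(3),
                inp_positions=np.array([[0.2, 0.0, 0.0], [0.0, -0.2, 0.0]]),
                inp_features=rng.rand(2, 2),
                inp_importance=np.array([]),
                neighbors_index=np.array([0, 1]),
                neighbors_importance=np.array([]),
                neighbors_row_splits=np.array([0, 2]),
                align_corners=True,
                coordinate_mapping='ball_to_cube_radial',
                normalize=True,
                interpolation='linear')
    filt = rng.rand(3, 3, 3, 2, 4)
    out_grad = rng.rand(1, 4)
    grad = cconv_backprop_filter(filter=filt,
                                 out_features_gradient=out_grad,
                                 **args)
    direction = rng.rand(*filt.shape)
    eps = 1e-4
    fd = (np.sum(cconv(filter=filt + eps * direction, **args) * out_grad) -
          np.sum(cconv(filter=filt - eps * direction, **args) * out_grad)
         ) / (2 * eps)
    assert np.isclose(fd, np.sum(grad * direction))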
def cconv_transpose(filter, out_positions, out_importance, extent, offset,
inp_positions, inp_features, inp_neighbors_index,
inp_neighbors_importance, inp_neighbors_row_splits,
neighbors_index, neighbors_importance, neighbors_row_splits,
align_corners, coordinate_mapping, normalize, interpolation,
**kwargs):
"""Computes the output features of a transpose continuous convolution.
This is also used for computing the backprop to the input features for the
normal cconv.
filter: 5D filter array with shape [depth,height,width,inp_ch, out_ch]
out_positions: The positions of the output points. The shape is
[num_out, 3].
    out_importance: Optional importance for each output point with
        shape [num_out]. Set to np.array([]) to disable.
    extent: The spatial extents of the filter in coordinate units.
        This is a 2D array with shape [1,1] or [1,3] or [num_inp,1]
        or [num_inp,3].
offset: A single 3D vector used in the filter coordinate
computation. The shape is [3].
inp_positions: The positions of the input points. The shape is
[num_inp, 3].
inp_features: The input features with shape [num_inp, in_channels].
inp_neighbors_index: The array with lists of neighbors for each
input point. The start and end of each sublist is defined by
inp_neighbors_row_splits.
inp_neighbors_importance: Optional importance for each entry in
inp_neighbors_index. Set to np.array([]) to disable.
inp_neighbors_row_splits: The prefix sum which defines the start
and end of the sublists in inp_neighbors_index. The size of the
array is num_inp + 1.
neighbors_index: The array with lists of neighbors for each
output point. The start and end of each sublist is defined by
neighbors_row_splits.
neighbors_importance: Optional importance for each entry in
neighbors_index. Set to np.array([]) to disable.
neighbors_row_splits: The prefix sum which defines the start
and end of the sublists in neighbors_index. The size of the
array is num_out + 1.
align_corners: If true then the voxel centers of the outer voxels
of the filter array are mapped to the boundary of the filter shape.
If false then the boundary of the filter array is mapped to the
boundary of the filter shape.
coordinate_mapping: The coordinate mapping function. One of
IDENTITY, BALL_TO_CUBE_RADIAL, BALL_TO_CUBE_VOLUME_PRESERVING.
normalize: If true then the result is normalized either by the
number of points (neighbors_importance is null) or by the sum of
the respective values in neighbors_importance.
interpolation: The interpolation mode. Either LINEAR or NEAREST_NEIGHBOR.
"""
assert filter.ndim == 5
assert all(filter.shape)
assert filter.shape[3] == inp_features.shape[-1]
assert out_positions.ndim == 2
assert out_importance.ndim == 1
assert (out_importance.shape[0] == 0 or
out_importance.shape[0] == out_positions.shape[0])
assert extent.ndim == 2
assert extent.shape[0] == 1 or extent.shape[0] == inp_positions.shape[0]
assert extent.shape[1] in (1, 3)
assert offset.ndim == 1 and offset.shape[0] == 3
assert inp_positions.ndim == 2
assert inp_positions.shape[0] == inp_features.shape[0]
assert inp_features.ndim == 2
assert inp_neighbors_index.ndim == 1
assert inp_neighbors_importance.ndim == 1
assert (inp_neighbors_importance.shape[0] == 0 or
inp_neighbors_importance.shape[0] == inp_neighbors_index.shape[0])
assert inp_neighbors_row_splits.ndim == 1
assert inp_neighbors_row_splits.shape[0] == inp_positions.shape[0] + 1
assert neighbors_index.ndim == 1
assert neighbors_importance.ndim == 1
assert (neighbors_importance.shape[0] == 0 or
neighbors_importance.shape[0] == neighbors_index.shape[0])
assert neighbors_row_splits.ndim == 1
assert neighbors_row_splits.shape[0] == out_positions.shape[0] + 1
assert neighbors_index.shape[0] == inp_neighbors_index.shape[0]
coordinate_mapping = _convert_parameter_str_dict[
coordinate_mapping] if isinstance(coordinate_mapping,
str) else coordinate_mapping
interpolation = _convert_parameter_str_dict[interpolation] if isinstance(
interpolation, str) else interpolation
dtype = inp_features.dtype
num_out = out_positions.shape[0]
num_inp = inp_positions.shape[0]
in_channels = inp_features.shape[-1]
out_channels = filter.shape[
-1] # filter shape is [depth,height,width, in_ch, out_ch]
inv_extent = 1 / np.broadcast_to(extent, inp_positions.shape)
if out_importance.shape[0] == 0:
out_importance = np.ones([num_out])
if neighbors_importance.shape[0] == 0:
neighbors_importance = np.ones(neighbors_index.shape)
if inp_neighbors_importance.shape[0] == 0:
inp_neighbors_importance = np.ones(inp_neighbors_index.shape)
if normalize:
inp_n_importance_sums = np.zeros_like(inp_neighbors_row_splits[:-1],
dtype=out_positions.dtype)
for inp_idx, inp_pos in enumerate(inp_positions):
inp_neighbors_start = inp_neighbors_row_splits[inp_idx]
inp_neighbors_end = inp_neighbors_row_splits[inp_idx + 1]
for out_idx, n_importance in zip(
inp_neighbors_index[inp_neighbors_start:inp_neighbors_end],
inp_neighbors_importance[
inp_neighbors_start:inp_neighbors_end]):
inp_n_importance_sums[inp_idx] += n_importance
filter_xyz_size = np.array(list(reversed(filter.shape[0:3])))
out_features = np.zeros((num_out, out_channels))
for out_idx, out_pos in enumerate(out_positions):
neighbors_start = neighbors_row_splits[out_idx]
neighbors_end = neighbors_row_splits[out_idx + 1]
for inp_idx, n_importance in zip(
neighbors_index[neighbors_start:neighbors_end],
neighbors_importance[neighbors_start:neighbors_end]):
inp_pos = inp_positions[inp_idx]
normalizer = 1
if normalize:
n_importance_sum = inp_n_importance_sums[inp_idx]
if n_importance_sum != 0.0:
normalizer = 1 / n_importance_sum
relative_pos = out_pos - inp_pos
coords = compute_filter_coordinates(relative_pos, filter_xyz_size,
inv_extent[inp_idx], offset,
align_corners,
coordinate_mapping)
infeat = normalizer * inp_features[inp_idx:inp_idx +
1] * n_importance
interp_w, interp_idx = interpolate(coords,
filter_xyz_size,
interpolation=interpolation)
filter_value = 0.0
for w, idx in zip(interp_w, interp_idx):
filter_value += w * filter[idx]
out_features[out_idx:out_idx + 1] += infeat @ filter_value
out_features *= out_importance[:, np.newaxis]
return out_features
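def _demo_cconv_transpose():
    """Minimal usage sketch with made-up values, not part of the original
    tests: one input point that scatters its features to two output points.
    Note that the neighbor lists are given twice, once per input point and
    once per output point."""
    filt = np.ones([3, 3, 3, 2, 4])
    out_pos = np.array([[0.2, 0.0, 0.0], [0.0, -0.2, 0.0]])
    inp_pos = np.zeros([1, 3])
    inp_feat = np.ones([1, 2])
    out_feat = cconv_transpose(
        filt, out_pos, np.array([]), np.array([[1.0]]), np.zeros(3), inp_pos,
        inp_feat, np.array([0, 1]), np.array([]), np.array([0, 2]),
        np.array([0, 0]), np.array([]), np.array([0, 1, 2]),
        align_corners=True, coordinate_mapping='ball_to_cube_radial',
        normalize=True, interpolation='linear')
    assert out_feat.shape == (2, 4)
    return out_feat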
def cconv_transpose_backprop_filter(
filter, out_positions, out_importance, extent, offset, inp_positions,
inp_features, inp_neighbors_index, inp_neighbors_importance,
inp_neighbors_row_splits, neighbors_index, neighbors_importance,
neighbors_row_splits, out_features_gradient, align_corners,
coordinate_mapping, normalize, interpolation, **kwargs):
"""This implements the backprop to the filter weights for the transpose
cconv.
out_features_gradient: An array with the gradient for the outputs of the
cconv in the forward pass.
See cconv_transpose for more info about the parameters.
"""
assert filter.ndim == 5
assert all(filter.shape)
assert filter.shape[3] == inp_features.shape[-1]
assert out_positions.ndim == 2
assert extent.ndim == 2
assert extent.shape[0] == 1 or extent.shape[0] == inp_positions.shape[0]
assert extent.shape[1] in (1, 3)
assert offset.ndim == 1 and offset.shape[0] == 3
assert inp_positions.ndim == 2
assert inp_positions.shape[0] == inp_features.shape[0]
assert inp_features.ndim == 2
assert out_importance.ndim == 1
assert (out_importance.shape[0] == 0 or
out_importance.shape[0] == out_positions.shape[0])
assert inp_neighbors_index.ndim == 1
assert inp_neighbors_importance.ndim == 1
assert (inp_neighbors_importance.shape[0] == 0 or
inp_neighbors_importance.shape[0] == inp_neighbors_index.shape[0])
assert inp_neighbors_row_splits.ndim == 1
assert inp_neighbors_row_splits.shape[0] == inp_positions.shape[0] + 1
assert neighbors_index.ndim == 1
assert neighbors_importance.ndim == 1
assert (neighbors_importance.shape[0] == 0 or
neighbors_importance.shape[0] == neighbors_index.shape[0])
assert neighbors_row_splits.ndim == 1
assert neighbors_row_splits.shape[0] == out_positions.shape[0] + 1
assert neighbors_index.shape[0] == inp_neighbors_index.shape[0]
coordinate_mapping = _convert_parameter_str_dict[
coordinate_mapping] if isinstance(coordinate_mapping,
str) else coordinate_mapping
interpolation = _convert_parameter_str_dict[interpolation] if isinstance(
interpolation, str) else interpolation
dtype = inp_features.dtype
num_out = out_positions.shape[0]
num_inp = inp_positions.shape[0]
in_channels = inp_features.shape[-1]
out_channels = filter.shape[-1]
inv_extent = 1 / np.broadcast_to(extent, inp_positions.shape)
if out_importance.shape[0] == 0:
out_importance = np.ones([num_out])
if neighbors_importance.shape[0] == 0:
neighbors_importance = np.ones(neighbors_index.shape)
if inp_neighbors_importance.shape[0] == 0:
inp_neighbors_importance = np.ones(inp_neighbors_index.shape)
if normalize:
inp_n_importance_sums = np.zeros_like(inp_neighbors_row_splits[:-1],
dtype=out_positions.dtype)
for inp_idx, inp_pos in enumerate(inp_positions):
inp_neighbors_start = inp_neighbors_row_splits[inp_idx]
inp_neighbors_end = inp_neighbors_row_splits[inp_idx + 1]
for out_idx, n_importance in zip(
inp_neighbors_index[inp_neighbors_start:inp_neighbors_end],
inp_neighbors_importance[
inp_neighbors_start:inp_neighbors_end]):
inp_n_importance_sums[inp_idx] += n_importance
filter_xyz_size = np.array(list(reversed(filter.shape[0:3])))
filter_backprop = np.zeros_like(filter)
for out_idx, out_pos in enumerate(out_positions):
neighbors_start = neighbors_row_splits[out_idx]
neighbors_end = neighbors_row_splits[out_idx + 1]
outfeat_grad = out_features_gradient[out_idx:out_idx +
1] * out_importance[out_idx]
for inp_idx, n_importance in zip(
neighbors_index[neighbors_start:neighbors_end],
neighbors_importance[neighbors_start:neighbors_end]):
inp_pos = inp_positions[inp_idx]
normalizer = 1
if normalize:
n_importance_sum = inp_n_importance_sums[inp_idx]
if n_importance_sum != 0.0:
normalizer = 1 / n_importance_sum
relative_pos = out_pos - inp_pos
coords = compute_filter_coordinates(relative_pos, filter_xyz_size,
inv_extent[inp_idx], offset,
align_corners,
coordinate_mapping)
interp_w, interp_idx = interpolate(coords,
filter_xyz_size,
interpolation=interpolation)
infeat = normalizer * inp_features[inp_idx:inp_idx +
1] * n_importance
for w, idx in zip(interp_w, interp_idx):
filter_backprop[idx] += w * (outfeat_grad.T @ infeat).T
return filter_backprop
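def _demo_cconv_transpose_backprop_filter():
    """Illustrative finite-difference check with made-up inputs, mirroring
    _demo_cconv_backprop_filter above but for the transpose convolution."""
    rng = np.random.RandomState(2)
    args = dict(out_positions=np.array([[0.2, 0.0, 0.0], [0.0, -0.2, 0.0]]),
                out_importance=np.array([]),
                extent=np.array([[1.0]]),
                offset=np.zeros(3),
                inp_positions=np.zeros([1, 3]),
                inp_features=rng.rand(1, 2),
                inp_neighbors_index=np.array([0, 1]),
                inp_neighbors_importance=np.array([]),
                inp_neighbors_row_splits=np.array([0, 2]),
                neighbors_index=np.array([0, 0]),
                neighbors_importance=np.array([]),
                neighbors_row_splits=np.array([0, 1, 2]),
                align_corners=True,
                coordinate_mapping='ball_to_cube_radial',
                normalize=True,
                interpolation='linear')
    filt = rng.rand(3, 3, 3, 2, 4)
    out_grad = rng.rand(2, 4)
    grad = cconv_transpose_backprop_filter(filter=filt,
                                           out_features_gradient=out_grad,
                                           **args)
    direction = rng.rand(*filt.shape)
    eps = 1e-4
    fd = (np.sum(cconv_transpose(filter=filt + eps * direction, **args) *
                 out_grad) -
          np.sum(cconv_transpose(filter=filt - eps * direction, **args) *
                 out_grad)) / (2 * eps)
    assert np.isclose(fd, np.sum(grad * direction))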
| 40.162011 | 120 | 0.605981 | 4,885 | 35,945 | 4.268987 | 0.079427 | 0.027908 | 0.032799 | 0.016304 | 0.776686 | 0.749832 | 0.736358 | 0.729932 | 0.720677 | 0.715546 | 0 | 0.025597 | 0.296815 | 35,945 | 894 | 121 | 40.206935 | 0.799414 | 0.288051 | 0 | 0.73055 | 0 | 0 | 0.006102 | 0.001204 | 0 | 0 | 0 | 0 | 0.189753 | 1 | 0.017078 | false | 0 | 0.185958 | 0 | 0.229602 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c7ceae327f92874a146e3e651be302b319d658fe | 8,521 | py | Python | map/get_shop.py | 2218084076/hotpoor_autoclick_xhs | a52446ba691ac19e43410a465dc63f940c0e444d | [
"Apache-2.0"
] | 1 | 2021-12-21T10:42:46.000Z | 2021-12-21T10:42:46.000Z | map/get_shop.py | 2218084076/hotpoor_autoclick_xhs | a52446ba691ac19e43410a465dc63f940c0e444d | [
"Apache-2.0"
] | null | null | null | map/get_shop.py | 2218084076/hotpoor_autoclick_xhs | a52446ba691ac19e43410a465dc63f940c0e444d | [
"Apache-2.0"
] | null | null | null | import pyautogui
import time
import pyperclip
# 打开审查元素位置 921.6
# 2022/01/15
news_urls=[]
for i in urls:
news_urls.append(i)
print(len(urls))
print(len(news_urls))
print(len(news_urls))
print(len(news_urls))
def pyautogui_action(action):
    """Executes one GUI action described by a dict with keys `name`, `x`,
    `y` and optionally `content`, `sleep` and `action_name`."""
if action["name"] in ["move_to_click"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
elif action["name"] in ["select_all_and_write"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
time.sleep(1)
pyautogui.hotkey("ctrl", "a")
write_content = action.get("content","")
pyautogui.typewrite(write_content)
pyautogui.press('enter')
elif action["name"] in ["select_all_and_js_latest"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
pyautogui.hotkey("ctrl", "a")
pyautogui.press('backspace')
pyautogui.press('up')
pyautogui.press('enter')
elif action["name"] in ["select_all_and_copy"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
pyautogui.hotkey("ctrl", "a")
pyautogui.hotkey("ctrl", "c")
elif action["name"] in ["select_all_and_paste"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
pyautogui.hotkey("ctrl", "a")
pyautogui.hotkey("ctrl", "v")
elif action["name"] in ["select_item_and_close_tab"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
pyautogui.hotkey("ctrl", "w")
elif action["name"] in ["select_all_and_copy_and_paste"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
write_content = action.get("content","")
pyperclip.copy(write_content)
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
pyautogui.hotkey("ctrl", "v")
pyautogui.press('enter')
elif action["name"] in ["open_console"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
pyautogui.hotkey("f12")
elif action["name"] in ["url_paste"]:
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
write_content = action.get("content","")
pyperclip.copy(write_content)
pyautogui.moveTo(x=action.get("x",None), y=action.get("y",None),duration=0, tween=pyautogui.linear)
pyautogui.click(x=action.get("x",None), y=action.get("y",None),clicks=1, button='left')
pyautogui.hotkey("ctrl", "l")
pyautogui.hotkey("ctrl", "v")
pyautogui.press('enter')
print(action.get("action_name"))
action_sleep = action.get("sleep",0)
time.sleep(action_sleep)
for u in urls:
print(u)
page={
"x":435,
"y":69,
"sleep":8,
"name":"select_all_and_copy_and_paste",
"content":'document.getElementsByClassName("pic")[0].getElementsByTagName("a")[0].click()',
"action_name":"访问链接",
}
pyautogui_action(page)
action_item_click_list = [
{
"x": 1207,
"y": 176,
"sleep": 0.5,
"name": "move_to_click",
"content": "",
"action_name": "清空console",
},
{
"x": 1376,
"y": 997,
"sleep": 0.5,
"name": "select_all_and_copy_and_paste",
"content":
r'''
result=[]
result.push(document.getElementsByClassName("shop-name")[0].innerText)
result.push(document.getElementsByClassName("brief-info")[0].getElementsByTagName("span")[0].getAttribute("class").split("mid-str")[1])
result.push(document.getElementsByClassName("brief-info")[0].getElementsByTagName("span")[1].innerText)
try{
result.push(document.getElementsByClassName("brief-info")[0].getElementsByTagName("span")[2].innerText)
}catch{
result.push("null")
}
try{
result.push(document.getElementsByClassName("tel")[0].innerText)
}catch{
document.getElementsByClassName("phone")[0].getElementsByTagName("a")[0].click()
result.push(document.getElementsByClassName("phone")[0].innerText)
}
result.push(document.getElementsByClassName("address")[0].innerText)
result.push(document.getElementById("map").getElementsByTagName("img")[0].getAttribute("src").split(".png|")[1])
result_info = {
"shop-name":result[0],
"star":result[1]*0.1,
"comment":result[2],
"consume":result[3],
"tel":result[4],
"address":result[5],
"coordinate":result[6],
}
dom=document.createElement("div")
dom.id="wlb_cover"
dom.style.position="fixed"
dom.style.top="0px"
dom.style.right="0px"
dom.style.zIndex=9999999999999999999
dom.innerHTML="<textarea id=\"wlb_cover_textarea\" style=\"height:100px;width:200px;\">"+JSON.stringify(result_info)+"</textarea>"
document.body.append(dom)
''',
"action_name": "get店铺信息",
},
{
"x": 1026,
"y": 149,
"sleep": 0.5,
"name": "select_all_and_copy",
"content": "",
"action_name": "copy"
},
{
"x": 431,
"y": 20,
"sleep": 1,
"name": "move_to_click",
"content": "",
"action_name": "点击选项卡_pages",
},
{
"x": 533,
"y": 209,
"sleep": 1,
"name": "select_all_and_paste",
"content": "",
"action_name": "提交",
},
{
"x": 416,
"y": 283,
"sleep": 1,
"name": "move_to_click",
"content": "",
"action_name": "submit",
},
{
"x": 137,
"y": 24,
"sleep": 1,
"name": "move_to_click",
"content": "",
"action_name": "切换pgy页面",
},
]
for action_item_click in action_item_click_list:
pyautogui_action(action_item_click)
| 38.382883 | 135 | 0.62387 | 1,076 | 8,521 | 4.851301 | 0.144052 | 0.084483 | 0.042146 | 0.04636 | 0.857663 | 0.84272 | 0.84272 | 0.812069 | 0.785249 | 0.753448 | 0 | 0.028747 | 0.179439 | 8,521 | 221 | 136 | 38.556561 | 0.71782 | 0.002934 | 0 | 0.425532 | 0 | 0 | 0.160674 | 0.037172 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007092 | false | 0 | 0.021277 | 0 | 0.028369 | 0.042553 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1bfd2e704fa924fb676400a0a75d85d1015a7fca | 26 | py | Python | bitwallet/__init__.py | lookfwd/bitwallet | b0f5efed7d8d896650c5e0d7318269abc20906b5 | [
"MIT"
] | 1 | 2018-01-25T15:04:34.000Z | 2018-01-25T15:04:34.000Z | bitwallet/__init__.py | lookfwd/bitwallet | b0f5efed7d8d896650c5e0d7318269abc20906b5 | [
"MIT"
] | 2 | 2018-02-19T02:27:13.000Z | 2018-04-27T06:16:18.000Z | bitwallet/__init__.py | lookfwd/bitwallet | b0f5efed7d8d896650c5e0d7318269abc20906b5 | [
"MIT"
] | null | null | null | from wallet import Wallet
| 13 | 25 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4021f2d392e89b8d5032a922f86c0d373508d007 | 81 | py | Python | app/modules/im/__init__.py | Eastwu5788/Heron | 646eeaacea77e293c6eccc6dad82a04ece9294a3 | [
"Apache-2.0"
] | 7 | 2018-01-29T02:46:31.000Z | 2018-03-25T11:15:10.000Z | app/modules/im/__init__.py | Eastwu5788/Heron | 646eeaacea77e293c6eccc6dad82a04ece9294a3 | [
"Apache-2.0"
] | 4 | 2021-06-08T19:38:03.000Z | 2022-03-11T23:18:46.000Z | app/modules/im/__init__.py | Eastwu5788/Heron | 646eeaacea77e293c6eccc6dad82a04ece9294a3 | [
"Apache-2.0"
] | 1 | 2021-06-12T14:14:35.000Z | 2021-06-12T14:14:35.000Z | from flask import Blueprint
im = Blueprint("im", __name__)
from . import immsg
| 13.5 | 30 | 0.740741 | 11 | 81 | 5.090909 | 0.636364 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17284 | 81 | 5 | 31 | 16.2 | 0.835821 | 0 | 0 | 0 | 0 | 0 | 0.024691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
405b4077c24814ac6e86f916af423926e95d3206 | 96 | py | Python | pyball/models/conference/__init__.py | SebastianDang/PyBall | d1965aa01477b5ee0db9c0463ec584a7e3997395 | [
"MIT"
] | 74 | 2018-03-04T22:58:46.000Z | 2021-07-06T12:28:50.000Z | pyball/models/conference/__init__.py | SebastianDang/PyBall | d1965aa01477b5ee0db9c0463ec584a7e3997395 | [
"MIT"
] | 18 | 2018-03-10T19:17:54.000Z | 2020-01-04T15:42:47.000Z | pyball/models/conference/__init__.py | SebastianDang/PyBall | d1965aa01477b5ee0db9c0463ec584a7e3997395 | [
"MIT"
] | 13 | 2018-03-06T02:39:38.000Z | 2020-01-17T04:38:53.000Z | from .conference import Conference
from .conference import League
from .conference import Sport
| 24 | 34 | 0.84375 | 12 | 96 | 6.75 | 0.416667 | 0.518519 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 3 | 35 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
40679a54b6b237ed1a7a0ae1cc73b7b8641c9ef7 | 39 | py | Python | __init__.py | dhruvagarwal/flask_restdemo | 3292fc4cc03cb76297502bb2296784d6c81d1a42 | [
"Apache-2.0"
] | null | null | null | __init__.py | dhruvagarwal/flask_restdemo | 3292fc4cc03cb76297502bb2296784d6c81d1a42 | [
"Apache-2.0"
] | null | null | null | __init__.py | dhruvagarwal/flask_restdemo | 3292fc4cc03cb76297502bb2296784d6c81d1a42 | [
"Apache-2.0"
] | null | null | null | from .flask_restdemo import CustomApi
| 13 | 37 | 0.846154 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 2 | 38 | 19.5 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
406f810443bf86957645845fb4d90fc6489a236f | 3,632 | py | Python | plot_error_correlations.py | danhagen/NonlinearControl | 3bda71a058ec3b1a598df886e9485fd4d08982ba | [
"MIT"
] | null | null | null | plot_error_correlations.py | danhagen/NonlinearControl | 3bda71a058ec3b1a598df886e9485fd4d08982ba | [
"MIT"
] | 5 | 2018-08-01T17:19:38.000Z | 2020-08-18T19:57:46.000Z | plot_error_correlations.py | danhagen/NonlinearControl | 3bda71a058ec3b1a598df886e9485fd4d08982ba | [
"MIT"
] | 1 | 2020-07-22T22:38:20.000Z | 2020-07-22T22:38:20.000Z | import pickle
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from pendulum_eqns.physiology.muscle_params_BIC_TRI import *
### Fixed Initial Tendon Tension
out=pickle.load( open( Path(r"C:\Users\hagen\Documents\Github\NonlinearControl\output_figures\integrator_backstepping_sinusoidal_activations_fixed_tensions\2020_05_23_112705\output.pkl"), "rb" ) )
Error = out["Error"]
States = out["States"]
lm1o = np.array([States[i,4,0] for i in range(States.shape[0])])
lm2o = np.array([States[i,5,0] for i in range(States.shape[0])])
MAE1 = np.array([np.mean(abs(Error[0][i,:])) for i in range(Error[0].shape[0])])
MAE2 = np.array([np.mean(abs(Error[1][i,:])) for i in range(Error[0].shape[0])])
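# Percent mean absolute error below normalizes each MAE by the corresponding
# optimal muscle fascicle length (lo1, lo2 from the starred import above) and
# scales it to a percentage, i.e. 100 * (MAE / lo).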
fig1,(ax1,ax2) = plt.subplots(1,2,figsize=(10,5))
plt.subplots_adjust(bottom=0.2)
ax1.spines["top"].set_visible(False)
ax1.spines['right'].set_visible(False)
ax2.spines["top"].set_visible(False)
ax2.spines['right'].set_visible(False)
ax1.set_title("Muscle 1", fontsize=16)
ax2.set_title("Muscle 2", fontsize=16)
ax1.set_xlabel("Initial Normalized\nMuscle Fascicle Length",fontsize=14)
ax1.set_ylabel("Percent Mean Absolute Error",fontsize=14)
ax1.scatter(lm1o/lo1,100*(MAE1/lo1))
ax1.text(
0.5,0.9, f"PCC = {pearsonr(lm1o,MAE1)[0]:0.3f}",
transform=ax1.transAxes,
horizontalalignment='center',
verticalalignment='center',
color = "k",
fontsize=14,
bbox=dict(
boxstyle='round,pad=0.5',
edgecolor='k',
facecolor='w'
)
)
ax2.scatter(lm2o/lo2,100*(MAE2/lo2))
ax2.text(
0.5,0.9, f"PCC = {pearsonr(lm2o,MAE2)[0]:0.3f}",
transform=ax2.transAxes,
horizontalalignment='center',
verticalalignment='center',
color = "k",
fontsize=14,
bbox=dict(
boxstyle='round,pad=0.5',
edgecolor='k',
facecolor='w'
)
)
ax1.set_ylim([0,2])
ax2.set_ylim([0,2])
### Fixed Initial Muscle Length
out=pickle.load( open( Path(r"C:\Users\hagen\Documents\Github\NonlinearControl\output_figures\integrator_backstepping_sinusoidal_activations_fixed_muscle_lengths\2020_05_23_115050\output.pkl"), "rb" ) )
Error = out["Error"]
States = out["States"]
fT1o = np.array([States[i,2,0] for i in range(States.shape[0])])
fT2o = np.array([States[i,3,0] for i in range(States.shape[0])])
MAE1 = np.array([np.mean(abs(Error[0][i,:])) for i in range(Error[0].shape[0])])
MAE2 = np.array([np.mean(abs(Error[1][i,:])) for i in range(Error[0].shape[0])])
fig2,(ax1,ax2) = plt.subplots(1,2,figsize=(10,5))
plt.subplots_adjust(bottom=0.2)
ax1.spines["top"].set_visible(False)
ax1.spines['right'].set_visible(False)
ax2.spines["top"].set_visible(False)
ax2.spines['right'].set_visible(False)
ax1.set_title("Muscle 1", fontsize=16)
ax2.set_title("Muscle 2", fontsize=16)
ax1.set_xlabel("Initial Normalized\nTendon Force",fontsize=14)
ax1.set_ylabel("Percent Mean Absolute Error",fontsize=14)
ax1.scatter(fT1o/F_MAX1,100*(MAE1/lo1))
ax1.text(
0.5,0.9, f"PCC = {pearsonr(fT1o,MAE1)[0]:0.3f}",
transform=ax1.transAxes,
horizontalalignment='center',
verticalalignment='center',
color = "k",
fontsize=14,
bbox=dict(
boxstyle='round,pad=0.5',
edgecolor='k',
facecolor='w'
)
)
ax2.scatter(fT2o/F_MAX2,100*(MAE2/lo2))
ax2.text(
0.5,0.9, f"PCC = {pearsonr(fT2o,MAE2)[0]:0.3f}",
transform=ax2.transAxes,
horizontalalignment='center',
verticalalignment='center',
color = "k",
fontsize=14,
bbox=dict(
boxstyle='round,pad=0.5',
edgecolor='k',
facecolor='w'
)
)
ax1.set_ylim([0,2])
ax2.set_ylim([0,2])
plt.show()
| 31.042735 | 202 | 0.683921 | 569 | 3,632 | 4.282953 | 0.226714 | 0.022979 | 0.019696 | 0.03611 | 0.826426 | 0.826426 | 0.826426 | 0.826426 | 0.80673 | 0.774723 | 0 | 0.069642 | 0.130231 | 3,632 | 116 | 203 | 31.310345 | 0.701804 | 0.015419 | 0 | 0.673267 | 0 | 0.019802 | 0.219731 | 0.120516 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.059406 | 0 | 0.059406 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
407cd497cd6c9bbc3422be6d9cc1a621c67d597c | 103 | py | Python | mag_report/main.py | Curtin-Open-Knowledge-Initiative/mag_coverage_report | a75dd1273c44895b5c857ebd498407aa95bd45e5 | [
"Apache-2.0"
] | 1 | 2021-09-07T06:42:40.000Z | 2021-09-07T06:42:40.000Z | main.py | bmkramer/what_do_we_lose_mag | c60ea99915caafd5209ef30d21ab456a89ff89a0 | [
"Apache-2.0",
"MIT"
] | 2 | 2021-08-30T11:52:25.000Z | 2021-09-02T12:11:05.000Z | mag_report/main.py | Curtin-Open-Knowledge-Initiative/mag_coverage_report | a75dd1273c44895b5c857ebd498407aa95bd45e5 | [
"Apache-2.0"
] | 3 | 2021-07-04T07:39:01.000Z | 2021-08-24T15:24:29.000Z | import process
from precipy.main import render_file
render_file('config.json', [process], storages=[]) | 25.75 | 50 | 0.786408 | 14 | 103 | 5.642857 | 0.714286 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087379 | 103 | 4 | 50 | 25.75 | 0.840426 | 0 | 0 | 0 | 0 | 0 | 0.105769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40d41ef56a8243b1710293775b2395e29459ae28 | 1,603 | py | Python | tensorflow/contrib/__init__.py | shishaochen/TensorFlow-0.8-Win | 63221dfc4f1a1d064308e632ba12e6a54afe1fd8 | [
"Apache-2.0"
] | 1 | 2017-09-14T23:59:05.000Z | 2017-09-14T23:59:05.000Z | tensorflow/contrib/__init__.py | shishaochen/TensorFlow-0.8-Win | 63221dfc4f1a1d064308e632ba12e6a54afe1fd8 | [
"Apache-2.0"
] | 1 | 2016-10-19T02:43:04.000Z | 2016-10-31T14:53:06.000Z | tensorflow/contrib/__init__.py | shishaochen/TensorFlow-0.8-Win | 63221dfc4f1a1d064308e632ba12e6a54afe1fd8 | [
"Apache-2.0"
] | 8 | 2016-10-23T00:50:02.000Z | 2019-04-21T11:11:57.000Z | # Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""contrib module containing volatile or experimental code."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Add projects here, they will show up under tf.contrib.
#from tensorflow.contrib import ctc
#from tensorflow.contrib import distributions
#from tensorflow.contrib import framework
#from tensorflow.contrib import grid_rnn
#from tensorflow.contrib import layers
#from tensorflow.contrib import learn
#from tensorflow.contrib import linear_optimizer
#from tensorflow.contrib import lookup
#from tensorflow.contrib import losses
#from tensorflow.contrib import metrics
#from tensorflow.contrib import quantization
#from tensorflow.contrib import rnn
#from tensorflow.contrib import skflow
#from tensorflow.contrib import tensor_forest
#from tensorflow.contrib import testing
#from tensorflow.contrib import util
#from tensorflow.contrib import copy_graph
| 41.102564 | 80 | 0.771678 | 213 | 1,603 | 5.723005 | 0.502347 | 0.195242 | 0.292863 | 0.376538 | 0.049221 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005714 | 0.126638 | 1,603 | 38 | 81 | 42.184211 | 0.865 | 0.887087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.333333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
90865e4d830451658c1eb422bb94edee870ce196 | 4,003 | py | Python | python/test/test_bubble_sort.py | michaelreneer/Algorithms | 3752d6ad542d798c54eedf78b2624d27296b00be | [
"MIT"
] | 2 | 2019-01-08T04:35:44.000Z | 2020-11-06T18:57:05.000Z | python/test/test_bubble_sort.py | michaelreneer/algorithms | 3752d6ad542d798c54eedf78b2624d27296b00be | [
"MIT"
] | null | null | null | python/test/test_bubble_sort.py | michaelreneer/algorithms | 3752d6ad542d798c54eedf78b2624d27296b00be | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from __future__ import absolute_import
import unittest
import algorithms
class TestBubbleSort(unittest.TestCase):
def test_bubble_sort_with_one_item(self):
iterable = [1]
algorithms.bubble_sort(iterable)
expected = [1]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_two_items_1(self):
iterable = [1, 2]
algorithms.bubble_sort(iterable)
expected = [1, 2]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_two_items_2(self):
iterable = [2, 1]
algorithms.bubble_sort(iterable)
expected = [1, 2]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_three_items_1(self):
iterable = [1, 2, 3]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_three_items_2(self):
iterable = [1, 3, 2]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_three_items_3(self):
iterable = [2, 1, 3]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_three_items_4(self):
iterable = [2, 3, 1]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_three_items_5(self):
iterable = [3, 1, 2]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_three_items_6(self):
iterable = [3, 2, 1]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_ascending_items(self):
iterable = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_descending_items(self):
iterable = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
algorithms.bubble_sort(iterable)
expected = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_equal_items(self):
iterable = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
algorithms.bubble_sort(iterable)
expected = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
self.assertEqual(iterable, expected)
def test_bubble_sort_with_strings(self):
iterable = ['a', 's', 'd', 'f']
algorithms.bubble_sort(iterable)
expected = ['a', 'd', 'f', 's']
self.assertEqual(iterable, expected)
def test_bubble_sort_with_no_items(self):
iterable = []
algorithms.bubble_sort(iterable)
expected = []
self.assertEqual(iterable, expected)
def test_bubble_sort_with_none_iterable_raises_type_error(self):
iterable = None
with self.assertRaises(TypeError):
algorithms.bubble_sort(iterable)
def test_bubble_sort_is_stable_1(self):
iterable = [[1], [1], [1], [1]]
ids = [id(item) for item in iterable]
algorithms.bubble_sort(iterable)
expected = [id(item) for item in iterable]
self.assertEqual(ids, expected)
def test_bubble_sort_is_stable_2(self):
iterable = [[1], [2], [3], [1]]
ids = [id(item) for item in iterable if item[0] == 1]
algorithms.bubble_sort(iterable)
expected = [id(item) for item in iterable if item[0] == 1]
self.assertEqual(ids, expected)
def test_bubble_sort_is_stable_3(self):
iterable = [[2], [3], [1], [1]]
ids = [id(item) for item in iterable if item[0] == 1]
algorithms.bubble_sort(iterable)
expected = [id(item) for item in iterable if item[0] == 1]
self.assertEqual(ids, expected)
def test_bubble_sort_is_stable_4(self):
iterable = [[3], [2], [3], [1]]
ids = [id(item) for item in iterable if item[0] == 1]
algorithms.bubble_sort(iterable)
expected = [id(item) for item in iterable if item[0] == 1]
self.assertEqual(ids, expected)
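
# For reference, a minimal sketch of the `bubble_sort` these tests exercise.
# The real implementation lives in the external `algorithms` package and is
# not shown in this file; this is an assumed conventional in-place, stable
# variant with an early-exit optimization, matching the behaviors asserted
# above (in-place mutation, stability, TypeError for None).
def _reference_bubble_sort(iterable):
    """Sort a mutable sequence in place; stable, O(n^2) worst case."""
    n = len(iterable)  # raises TypeError for None, as the tests expect
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            # Strict > keeps equal items in their original order (stability).
            if iterable[j] > iterable[j + 1]:
                iterable[j], iterable[j + 1] = iterable[j + 1], iterable[j]
                swapped = True
        if not swapped:
            break
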
if __name__ == '__main__':
unittest.main()
| 30.325758 | 66 | 0.677492 | 580 | 4,003 | 4.439655 | 0.106897 | 0.147573 | 0.095922 | 0.125437 | 0.83068 | 0.787961 | 0.740194 | 0.711845 | 0.711845 | 0.658252 | 0 | 0.046468 | 0.193605 | 4,003 | 131 | 67 | 30.557252 | 0.751239 | 0.004996 | 0 | 0.509615 | 0 | 0 | 0.004018 | 0 | 0 | 0 | 0 | 0 | 0.182692 | 1 | 0.182692 | false | 0 | 0.028846 | 0 | 0.221154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
90dc8744bb7dba3256160f533976958cec988371 | 19 | py | Python | systems/control/backend/__init__.py | stylekilla/syncmrt | 816bb57d80d6595719b8b9d7f027f4f17d0a6c0a | [
"Apache-2.0"
] | null | null | null | systems/control/backend/__init__.py | stylekilla/syncmrt | 816bb57d80d6595719b8b9d7f027f4f17d0a6c0a | [
"Apache-2.0"
] | 25 | 2019-03-05T05:56:35.000Z | 2019-07-24T13:11:57.000Z | systems/control/backend/__init__.py | stylekilla/syncmrt | 816bb57d80d6595719b8b9d7f027f4f17d0a6c0a | [
"Apache-2.0"
] | 1 | 2019-11-27T05:10:47.000Z | 2019-11-27T05:10:47.000Z | from . import epics | 19 | 19 | 0.789474 | 3 | 19 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
90de378c74ebe60d74691b0f17e9511261e0c7ef | 285 | py | Python | Darlington/phase1/python Basic 2/day 28 solution/qtn2.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Darlington/phase1/python Basic 2/day 28 solution/qtn2.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Darlington/phase1/python Basic 2/day 28 solution/qtn2.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | #program to compute the digit distance between two integers.
def digit_distance_nums(n1, n2):
return sum(map(int,str(abs(n1-n2))))
print(digit_distance_nums(123, 256))
print(digit_distance_nums(23, 56))
print(digit_distance_nums(1, 2))
print(digit_distance_nums(24232, 45645)) | 40.714286 | 60 | 0.768421 | 47 | 285 | 4.446809 | 0.595745 | 0.373206 | 0.406699 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101961 | 0.105263 | 285 | 7 | 61 | 40.714286 | 0.717647 | 0.207018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0.166667 | 0.333333 | 0.666667 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
29138bb1a037bad82fc3f688d71d83eb8fa96227 | 14,970 | py | Python | src/model_sac.py | sesem738/Calamari | f3e16682901138c5ee8bb5e94b335bfaee36afad | [
"Unlicense"
] | null | null | null | src/model_sac.py | sesem738/Calamari | f3e16682901138c5ee8bb5e94b335bfaee36afad | [
"Unlicense"
] | null | null | null | src/model_sac.py | sesem738/Calamari | f3e16682901138c5ee8bb5e94b335bfaee36afad | [
"Unlicense"
] | null | null | null | #########################################################################
# Implementation of Soft Actor Critic by Josias Moukpe
# from the https://arxiv.org/abs/1812.05905 paper
# and inspired by https://github.com/pranz24/pytorch-soft-actor-critic
##########################################################################
# Inputs: three 3-channel image streams (camera, lidar, birdeye), 9 channels total
import torch
from torch import nn
from torch._C import device
import torch.nn.functional as F
from torch.distributions import Normal
LOG_SIG_MAX = 2
LOG_SIG_MIN = -20
epsilon = 1e-6
# Initialize Policy weights
def weights_init_(m):
    '''Initialize linear-layer weights with Xavier uniform and biases to zero'''
if isinstance(m, nn.Linear):
torch.nn.init.xavier_uniform_(m.weight, gain=1)
torch.nn.init.constant_(m.bias, 0)
class ValueNetwork(nn.Module):
'''Soft Actor Critic Value Network'''
def __init__(self, alpha, input_dim, output_dim = 1, name='ValueNet', checkpoint='checkpoints/sac'):
super(ValueNetwork, self).__init__()
c, h, w = input_dim
self.name = name
self.checkpoint_file = checkpoint
        if h != 256:
            raise ValueError(f"Expecting input height: 256, got: {h}")
        if w != 256:
            raise ValueError(f"Expecting input width: 256, got: {w}")
        c1 = 3; c2 = 3; c3 = 3  # 3 channels each for the camera, lidar, and birdeye inputs
# Feature extraction
# 1 -> camera
self.conv11 = nn.Conv2d(in_channels=c1, out_channels=64, kernel_size=8, stride=4)
self.conv12 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv13 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
# 2 -> lidar
self.conv21 = nn.Conv2d(in_channels=c2, out_channels=64, kernel_size=8, stride=4)
self.conv22 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv23 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
# 3 -> birdeye
self.conv31 = nn.Conv2d(in_channels=c3, out_channels=64, kernel_size=8, stride=4)
self.conv32 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv33 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
        out_size = 150528  # 3 streams x 64 channels x 28 x 28 feature maps for 256x256 inputs
# Estimation
self.dense1 = nn.Linear(out_size, 512)
self.dense2 = nn.Linear(512,256)
self.dense3 = nn.Linear(256,output_dim) # output_dim is 1 for SAC Value function
self.optimizer = torch.optim.Adam(self.parameters(), lr=alpha)
self.device = torch.device('cuda:0')
self.to(device=self.device)
#Optimal initialization of the weights
self.apply(weights_init_)
def forward(self, input):
'''Forward pass to the Value Network to estimate the value'''
# input is the state
# input_1, _, input_2 = torch.tensor_split(input,(3,3), dim=1)
# input_1 = F.relu(self.conv11(input_1))
# input_1 = F.relu(self.conv12(input_1))
# input_1 = F.relu(self.conv13(input_1))
# input_1 = torch.flatten(input_1,1)
# input_2 = F.relu(self.conv21(input_2))
# input_2 = F.relu(self.conv22(input_2))
# input_2 = F.relu(self.conv23(input_2))
# input_2 = torch.flatten(input_2,1)
# x = torch.cat((input_1, input_2),dim=-1)
input_1 = input['camera']
input_2 = input['lidar']
input_3 = input['birdeye']
input_1 = F.relu(self.conv11(input_1))
input_1 = F.relu(self.conv12(input_1))
input_1 = F.relu(self.conv13(input_1))
input_1 = torch.flatten(input_1,1)
input_2 = F.relu(self.conv21(input_2))
input_2 = F.relu(self.conv22(input_2))
input_2 = F.relu(self.conv23(input_2))
input_2 = torch.flatten(input_2,1)
input_3 = F.relu(self.conv31(input_3))
input_3 = F.relu(self.conv32(input_3))
input_3 = F.relu(self.conv33(input_3))
input_3 = torch.flatten(input_3,1)
x = torch.cat((input_1, input_2, input_3),dim=-1)
x = F.relu(self.dense1(x))
x = F.relu(self.dense2(x))
x = self.dense3(x)
return x
    def freeze_parameters(self):
"""Freeze network parameters"""
for p in self.parameters():
p.requires_grad = False
    # TODO: find out if it's affected by the update to SAC
def save_checkpoint(self, epsilon, num):
print('... saving checkpoint ...')
path = self.checkpoint_file / (self.name+'_'+str(num)+'.chkpt')
torch.save(dict(model=self.state_dict(), epsilon_decay=epsilon), path)
def load_checkpoint(self, checkpoint_file):
if not checkpoint_file.exists():
raise ValueError(f"{checkpoint_file} does not exist")
print('... loading checkpoint ...')
ckp = torch.load(checkpoint_file)
exploration_rate = ckp.get('epsilon_decay')
state_dict = ckp.get('model')
self.load_state_dict(state_dict)
return exploration_rate
class QNetwork(nn.Module):
'''Soft Actor Critic Q Network'''
def __init__(self, alpha, input_dim, action_dim, output_dim =1, name='QNet', checkpoint='checkpoints/sac'):
super(QNetwork, self).__init__()
c, h, w = input_dim
self.name = name
self.checkpoint_file = checkpoint
        if h != 256:
            raise ValueError(f"Expecting input height: 256, got: {h}")
        if w != 256:
            raise ValueError(f"Expecting input width: 256, got: {w}")
        c1 = 3; c2 = 3; c3 = 3  # 3 channels each for the camera, lidar, and birdeye inputs
# Feature extraction
# 1 -> camera
self.conv11 = nn.Conv2d(in_channels=c1, out_channels=64, kernel_size=8, stride=4)
self.conv12 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv13 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
# 2 -> lidar
self.conv21 = nn.Conv2d(in_channels=c2, out_channels=64, kernel_size=8, stride=4)
self.conv22 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv23 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
# 3 -> birdeye
self.conv31 = nn.Conv2d(in_channels=c3, out_channels=64, kernel_size=8, stride=4)
self.conv32 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv33 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
        out_size = 150528  # 3 streams x 64 channels x 28 x 28 feature maps for 256x256 inputs
        # Estimation: twin Q heads, each taking the conv features concatenated with the action
        self.dense1_1 = nn.Linear(out_size + action_dim, 512)
        self.dense2_1 = nn.Linear(512, 256)
        self.dense3_1 = nn.Linear(256, output_dim)
        self.dense1_2 = nn.Linear(out_size + action_dim, 512)
        self.dense2_2 = nn.Linear(512, 256)
        self.dense3_2 = nn.Linear(256, output_dim)  # output_dim = 1 for a scalar Q-value
self.optimizer = torch.optim.Adam(self.parameters(), lr=alpha)
self.device = torch.device('cuda:0')
self.to(device=self.device)
# optimal initialization of weights
self.apply(weights_init_)
def forward(self, input, action):
# input_1, _, input_2 = torch.tensor_split(input,(3,3), dim=1)
# # going through the convolutions
# input_1 = F.relu(self.conv11(input_1))
# input_1 = F.relu(self.conv12(input_1))
# input_1 = F.relu(self.conv13(input_1))
# input_1 = torch.flatten(input_1,1)
# input_2 = F.relu(self.conv21(input_2))
# input_2 = F.relu(self.conv22(input_2))
# input_2 = F.relu(self.conv23(input_2))
# input_2 = torch.flatten(input_2,1)
# # Concatenate conv outputs
# state_input = torch.cat((input_1, input_2),dim=-1)
input_1 = input['camera']
input_2 = input['lidar']
input_3 = input['birdeye']
input_1 = F.relu(self.conv11(input_1))
input_1 = F.relu(self.conv12(input_1))
input_1 = F.relu(self.conv13(input_1))
input_1 = torch.flatten(input_1,1)
input_2 = F.relu(self.conv21(input_2))
input_2 = F.relu(self.conv22(input_2))
input_2 = F.relu(self.conv23(input_2))
input_2 = torch.flatten(input_2,1)
input_3 = F.relu(self.conv31(input_3))
input_3 = F.relu(self.conv32(input_3))
input_3 = F.relu(self.conv33(input_3))
input_3 = torch.flatten(input_3,1)
state_input = torch.cat((input_1, input_2, input_3),dim=-1)
xu = torch.cat([state_input, action], 1)
# Q1 fully connected forward
x1 = F.relu(self.dense1_1(xu))
x1 = F.relu(self.dense2_1(x1))
x1 = self.dense3_1(x1)
# Q2 fully connected forward
x2 = F.relu(self.dense1_2(xu))
x2 = F.relu(self.dense2_2(x2))
x2 = self.dense3_2(x2)
return x1, x2
    def freeze_parameters(self):
"""Freeze network parameters"""
for p in self.parameters():
p.requires_grad = False
#TODO: check if this still applies to the new Q network
def save_checkpoint(self, epsilon, num):
print('... saving checkpoint ...')
path = self.checkpoint_file / (self.name+'_'+str(num)+'.chkpt')
torch.save(dict(model=self.state_dict(), epsilon_decay=epsilon), path)
def load_checkpoint(self, checkpoint_file):
if not checkpoint_file.exists():
raise ValueError(f"{checkpoint_file} does not exist")
print('... loading checkpoint ...')
ckp = torch.load(checkpoint_file)
exploration_rate = ckp.get('epsilon_decay')
state_dict = ckp.get('model')
self.load_state_dict(state_dict)
return exploration_rate
class PolicyNetwork(nn.Module):
'''
Soft Actor Critic Gaussian Policy Network
'''
def __init__(self, alpha, input_dim, action_dim, action_space=None, name='PolicyNet', checkpoint='checkpoints/sac'):
super(PolicyNetwork, self).__init__()
c, h, w = input_dim
self.name = name
self.checkpoint_file = checkpoint
        if h != 256:
            raise ValueError(f"Expecting input height: 256, got: {h}")
        if w != 256:
            raise ValueError(f"Expecting input width: 256, got: {w}")
        c1 = 3; c2 = 3; c3 = 3  # 3 channels each for the camera, lidar, and birdeye inputs
# Feature extraction
# 1 -> camera
self.conv11 = nn.Conv2d(in_channels=c1, out_channels=64, kernel_size=8, stride=4)
self.conv12 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv13 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
# 2 -> lidar
self.conv21 = nn.Conv2d(in_channels=c2, out_channels=64, kernel_size=8, stride=4)
self.conv22 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv23 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
# 3 -> birdeye
self.conv31 = nn.Conv2d(in_channels=c3, out_channels=64, kernel_size=8, stride=4)
self.conv32 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=2)
self.conv33 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
        out_size = 150528  # 3 streams x 64 channels x 28 x 28 feature maps for 256x256 inputs
# Estimation
self.dense1 = nn.Linear(out_size, 512)
self.dense2 = nn.Linear(512,256)
self.mean_dense = nn.Linear(256, action_dim)
self.log_std_dense = nn.Linear(256, action_dim)
# applying initial optimal weights
self.apply(weights_init_)
# action rescaling TODO: verify it applies to this case
if action_space is None:
self.action_scale = torch.tensor(1.)
self.action_bias = torch.tensor(0.)
else:
self.action_scale = torch.FloatTensor(
(action_space.high - action_space.low) / 2.)
self.action_bias = torch.FloatTensor(
(action_space.high + action_space.low) / 2.)
self.optimizer = torch.optim.Adam(self.parameters(), lr=alpha)
self.device = torch.device('cuda:0')
self.to(device=self.device)
def forward(self, input: dict):
'''passing data through policy network'''
# Extracting them features
# input_1, _, input_2 = torch.tensor_split(input,(3,3), dim=1)
input_1 = input['camera']
input_2 = input['lidar']
input_3 = input['birdeye']
input_1 = F.relu(self.conv11(input_1))
input_1 = F.relu(self.conv12(input_1))
input_1 = F.relu(self.conv13(input_1))
input_1 = torch.flatten(input_1,1)
input_2 = F.relu(self.conv21(input_2))
input_2 = F.relu(self.conv22(input_2))
input_2 = F.relu(self.conv23(input_2))
input_2 = torch.flatten(input_2,1)
input_3 = F.relu(self.conv31(input_3))
input_3 = F.relu(self.conv32(input_3))
input_3 = F.relu(self.conv33(input_3))
input_3 = torch.flatten(input_3,1)
state_input = torch.cat((input_1, input_2, input_3),dim=-1)
x = F.relu(self.dense1(state_input))
x = F.relu(self.dense2(x))
mean = self.mean_dense(x)
log_std = self.log_std_dense(x)
log_std = torch.clamp(log_std, min=LOG_SIG_MIN, max=LOG_SIG_MAX)
return mean, log_std
def sample(self, input):
mean, log_std = self.forward(input)
std = log_std.exp()
normal = Normal(mean, std)
x_t = normal.rsample() # for reparameterization trick (mean + std * N(0,1))
y_t = torch.tanh(x_t)
action = y_t * self.action_scale + self.action_bias
log_prob = normal.log_prob(x_t)
# Enforcing Action Bound
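        # Change of variables for the tanh squashing: with a = scale*tanh(u) + bias,
        # log p(a) = log p(u) - log|da/du| = log p(u) - log(scale * (1 - tanh(u)^2));
        # epsilon guards against log(0) when tanh saturates.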
log_prob -= torch.log(self.action_scale * (1 - y_t.pow(2)) + epsilon)
log_prob = log_prob.sum(1, keepdim=True)
mean = torch.tanh(mean) * self.action_scale + self.action_bias
return action, log_prob, mean
def to(self, device):
self.action_scale = self.action_scale.to(device)
self.action_bias = self.action_bias.to(device)
return super(PolicyNetwork, self).to(device)
    def freeze_parameters(self):
"""Freeze network parameters"""
for p in self.parameters():
p.requires_grad = False
# TODO: check if it still applies to this case
def save_checkpoint(self, epsilon, num):
print('... saving checkpoint ...')
path = self.checkpoint_file / (self.name+'_'+str(num)+'.chkpt')
torch.save(dict(model=self.state_dict(), epsilon_decay=epsilon), path)
def load_checkpoint(self, checkpoint_file):
if not checkpoint_file.exists():
raise ValueError(f"{checkpoint_file} does not exist")
print('... loading checkpoint ...')
ckp = torch.load(checkpoint_file)
exploration_rate = ckp.get('epsilon_decay')
state_dict = ckp.get('model')
self.load_state_dict(state_dict)
return exploration_rate
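
# A minimal usage sketch (not part of the original module): build the Gaussian
# policy and sample actions for a batch of dummy observations. The observation
# keys ('camera', 'lidar', 'birdeye') and the 3x256x256 shapes follow the
# assumptions hard-coded above; note the constructors pin the model to 'cuda:0',
# so this requires a CUDA-capable machine.
if __name__ == "__main__":
    policy = PolicyNetwork(alpha=3e-4, input_dim=(3, 256, 256), action_dim=2)
    obs = {
        key: torch.randn(4, 3, 256, 256, device=policy.device)
        for key in ("camera", "lidar", "birdeye")
    }
    action, log_prob, mean = policy.sample(obs)
    print(action.shape, log_prob.shape)  # torch.Size([4, 2]) torch.Size([4, 1])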
| 38.286445 | 120 | 0.615965 | 2,115 | 14,970 | 4.170213 | 0.115366 | 0.034694 | 0.047959 | 0.055102 | 0.787415 | 0.770522 | 0.75102 | 0.74093 | 0.739569 | 0.720068 | 0 | 0.060339 | 0.250501 | 14,970 | 390 | 121 | 38.384615 | 0.725758 | 0.158116 | 0 | 0.705882 | 0 | 0 | 0.055236 | 0 | 0 | 0 | 0 | 0.005128 | 0 | 1 | 0.07563 | false | 0 | 0.021008 | 0 | 0.142857 | 0.02521 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2933dd0a1327b51c1ed1223ead1915492611c715 | 216 | py | Python | course/admin.py | LuoLuo0101/ChoiceCourse | 93eba91cd39a524455edab52ad29dfd09ac000ba | [
"Apache-2.0"
] | null | null | null | course/admin.py | LuoLuo0101/ChoiceCourse | 93eba91cd39a524455edab52ad29dfd09ac000ba | [
"Apache-2.0"
] | null | null | null | course/admin.py | LuoLuo0101/ChoiceCourse | 93eba91cd39a524455edab52ad29dfd09ac000ba | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from course.models import Student, Teacher, Course, Enrollment
admin.site.register(Student)
admin.site.register(Teacher)
admin.site.register(Course)
admin.site.register(Enrollment)
| 24 | 62 | 0.824074 | 29 | 216 | 6.137931 | 0.413793 | 0.202247 | 0.382022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078704 | 216 | 8 | 63 | 27 | 0.894472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
293d69d7e1b5bebc5ff967901b2ddcaed49fa0db | 42 | py | Python | botovod/dbdrivers/__init__.py | OlegYurchik/botovod | 20d7c97a4758ce280fcdc601e395b34c4f942a0f | [
"MIT"
] | 7 | 2018-09-03T11:03:55.000Z | 2020-07-11T16:13:56.000Z | botovod/dbdrivers/__init__.py | OlegYurchik/botovod | 20d7c97a4758ce280fcdc601e395b34c4f942a0f | [
"MIT"
] | null | null | null | botovod/dbdrivers/__init__.py | OlegYurchik/botovod | 20d7c97a4758ce280fcdc601e395b34c4f942a0f | [
"MIT"
] | 2 | 2019-09-03T12:09:40.000Z | 2020-06-05T18:09:52.000Z | from .dbdrivers import DBDriver, Follower
| 21 | 41 | 0.833333 | 5 | 42 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 1 | 42 | 42 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
297a4e21ddb5eac9bf1e386239c9b292ebd19320 | 16,416 | py | Python | pegasus/tools/clustering.py | slowkow/pegasus | 9a840b4a485ad93e703b2087e21179b0329e0c41 | [
"BSD-3-Clause"
] | null | null | null | pegasus/tools/clustering.py | slowkow/pegasus | 9a840b4a485ad93e703b2087e21179b0329e0c41 | [
"BSD-3-Clause"
] | null | null | null | pegasus/tools/clustering.py | slowkow/pegasus | 9a840b4a485ad93e703b2087e21179b0329e0c41 | [
"BSD-3-Clause"
] | null | null | null | import time
import numpy as np
import pandas as pd
from pegasusio import MultimodalData
from natsort import natsorted
from sklearn.cluster import KMeans
from typing import List, Optional
from pegasus.tools import construct_graph
import logging
logger = logging.getLogger(__name__)
from pegasusio import timer
@timer(logger=logger)
def louvain(
data: MultimodalData,
rep: str = "pca",
resolution: int = 1.3,
random_state: int = 0,
class_label: str = "louvain_labels",
) -> None:
"""Cluster the cells using Louvain algorithm. [Blondel08]_
Parameters
----------
data: ``pegasusio.MultimodalData``
Annotated data matrix with rows for cells and columns for genes.
rep: ``str``, optional, default: ``"pca"``
The embedding representation used for clustering. Keyword ``'X_' + rep`` must exist in ``data.obsm``. By default, use PCA coordinates.
resolution: ``int``, optional, default: ``1.3``
Resolution factor. Higher resolution tends to find more clusters with smaller sizes.
random_state: ``int``, optional, default: ``0``
Random seed for reproducing results.
class_label: ``str``, optional, default: ``"louvain_labels"``
Key name for storing cluster labels in ``data.obs``.
Returns
-------
``None``
Update ``data.obs``:
* ``data.obs[class_label]``: Cluster labels of cells as categorical data.
Examples
--------
>>> pg.louvain(data)
"""
try:
import louvain as louvain_module
except ImportError:
print("Need louvain! Try 'pip install louvain-github'.")
rep_key = "W_" + rep
if rep_key not in data.uns:
raise ValueError("Cannot find affinity matrix. Please run neighbors first!")
W = data.uns[rep_key]
G = construct_graph(W)
partition_type = louvain_module.RBConfigurationVertexPartition
partition = partition_type(G, resolution_parameter=resolution, weights="weight")
optimiser = louvain_module.Optimiser()
optimiser.set_rng_seed(random_state)
diff = optimiser.optimise_partition(partition)
labels = np.array([str(x + 1) for x in partition.membership])
categories = natsorted(np.unique(labels))
data.obs[class_label] = pd.Categorical(values=labels, categories=categories)
n_clusters = data.obs[class_label].cat.categories.size
logger.info(f"Louvain clustering is done. Get {n_clusters} clusters.")
@timer(logger=logger)
def leiden(
data: MultimodalData,
rep: str = "pca",
resolution: int = 1.3,
n_iter: int = -1,
random_state: int = 0,
class_label: str = "leiden_labels",
) -> None:
"""Cluster the data using Leiden algorithm. [Traag19]_
Parameters
----------
data: ``pegasusio.MultimodalData``
Annotated data matrix with rows for cells and columns for genes.
rep: ``str``, optional, default: ``"pca"``
The embedding representation used for clustering. Keyword ``'X_' + rep`` must exist in ``data.obsm``. By default, use PCA coordinates.
resolution: ``int``, optional, default: ``1.3``
Resolution factor. Higher resolution tends to find more clusters.
n_iter: ``int``, optional, default: ``-1``
Number of iterations that Leiden algorithm runs. If ``-1``, run the algorithm until reaching its optimal clustering.
random_state: ``int``, optional, default: ``0``
Random seed for reproducing results.
class_label: ``str``, optional, default: ``"leiden_labels"``
Key name for storing cluster labels in ``data.obs``.
Returns
-------
``None``
Update ``data.obs``:
* ``data.obs[class_label]``: Cluster labels of cells as categorical data.
Examples
--------
>>> pg.leiden(data)
"""
try:
import leidenalg
except ImportError:
print("Need leidenalg! Try 'pip install leidenalg'.")
rep_key = "W_" + rep
if rep_key not in data.uns:
raise ValueError("Cannot find affinity matrix. Please run neighbors first!")
W = data.uns[rep_key]
G = construct_graph(W)
partition_type = leidenalg.RBConfigurationVertexPartition
partition = leidenalg.find_partition(
G,
partition_type,
seed=random_state,
weights="weight",
resolution_parameter=resolution,
n_iterations=n_iter,
)
labels = np.array([str(x + 1) for x in partition.membership])
categories = natsorted(np.unique(labels))
data.obs[class_label] = pd.Categorical(values=labels, categories=categories)
n_clusters = data.obs[class_label].cat.categories.size
logger.info(f"Leiden clustering is done. Get {n_clusters} clusters.")
def partition_cells_by_kmeans(
X: np.ndarray,
n_clusters: int,
n_clusters2: int,
n_init: int,
random_state: int,
min_avg_cells_per_final_cluster: Optional[int] = 10,
) -> List[int]:
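    """Two-level KMeans partitioning: split X into at most n_clusters coarse
    clusters, then subdivide each coarse cluster into at most n_clusters2 fine
    clusters, capping cluster counts so that final clusters keep at least
    min_avg_cells_per_final_cluster cells on average. Returns one integer
    label per row of X.
    """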
n_clusters = min(n_clusters, max(X.shape[0] // min_avg_cells_per_final_cluster, 1))
if n_clusters == 1:
return np.zeros(X.shape[0], dtype = np.int32)
kmeans_params = {
'n_clusters': n_clusters,
'n_init': n_init,
'random_state': random_state,
}
km = KMeans(**kmeans_params)
km.fit(X)
coarse = km.labels_.copy()
km.set_params(n_init=1)
labels = coarse.copy()
base_sum = 0
for i in range(n_clusters):
idx = coarse == i
nc = min(n_clusters2, max(idx.sum() // min_avg_cells_per_final_cluster, 1))
if nc == 1:
labels[idx] = base_sum
else:
km.set_params(n_clusters=nc)
km.fit(X[idx, :])
labels[idx] = base_sum + km.labels_
base_sum += nc
return labels
@timer(logger=logger)
def spectral_louvain(
data: MultimodalData,
rep: str = "pca",
resolution: float = 1.3,
rep_kmeans: str = "diffmap",
n_clusters: int = 30,
n_clusters2: int = 50,
n_init: int = 10,
random_state: int = 0,
class_label: str = "spectral_louvain_labels",
) -> None:
""" Cluster the data using Spectral Louvain algorithm. [Li20]_
Parameters
----------
data: ``pegasusio.MultimodalData``
Annotated data matrix with rows for cells and columns for genes.
rep: ``str``, optional, default: ``"pca"``
The embedding representation used for clustering. Keyword ``'X_' + rep`` must exist in ``data.obsm``. By default, use PCA coordinates.
resolution: ``int``, optional, default: ``1.3``
Resolution factor. Higher resolution tends to find more clusters with smaller sizes.
rep_kmeans: ``str``, optional, default: ``"diffmap"``
The embedding representation on which the KMeans runs. Keyword must exist in ``data.obsm``. By default, use Diffusion Map coordinates. If diffmap is not calculated, use PCA coordinates instead.
n_clusters: ``int``, optional, default: ``30``
The number of first level clusters.
n_clusters2: ``int``, optional, default: ``50``
The number of second level clusters.
n_init: ``int``, optional, default: ``10``
Number of kmeans tries for the first level clustering. Default is set to be the same as scikit-learn Kmeans function.
random_state: ``int``, optional, default: ``0``
Random seed for reproducing results.
class_label: ``str``, optional, default: ``"spectral_louvain_labels"``
Key name for storing cluster labels in ``data.obs``.
Returns
-------
``None``
Update ``data.obs``:
* ``data.obs[class_label]``: Cluster labels for cells as categorical data.
Examples
--------
>>> pg.spectral_louvain(data)
"""
try:
import louvain as louvain_module
except ImportError:
print("Need louvain! Try 'pip install louvain-github'.")
if "X_" + rep_kmeans not in data.obsm.keys():
logger.warning(
"{} is not calculated, switch to pca instead.".format(rep_kmeans)
)
rep_kmeans = "pca"
if "X_" + rep_kmeans not in data.obsm.keys():
raise ValueError("Please run {} first!".format(rep_kmeans))
if "W_" + rep not in data.uns:
raise ValueError("Cannot find affinity matrix. Please run neighbors first!")
labels = partition_cells_by_kmeans(
data.obsm[rep_kmeans], n_clusters, n_clusters2, n_init, random_state,
)
W = data.uns["W_" + rep]
G = construct_graph(W)
partition_type = louvain_module.RBConfigurationVertexPartition
partition = partition_type(
G, resolution_parameter=resolution, weights="weight", initial_membership=labels
)
partition_agg = partition.aggregate_partition()
optimiser = louvain_module.Optimiser()
optimiser.set_rng_seed(random_state)
diff = optimiser.optimise_partition(partition_agg)
partition.from_coarse_partition(partition_agg)
labels = np.array([str(x + 1) for x in partition.membership])
categories = natsorted(np.unique(labels))
data.obs[class_label] = pd.Categorical(values=labels, categories=categories)
n_clusters = data.obs[class_label].cat.categories.size
logger.info(f"Spectral Louvain clustering is done. Get {n_clusters} clusters.")
@timer(logger=logger)
def spectral_leiden(
data: MultimodalData,
rep: str = "pca",
resolution: float = 1.3,
rep_kmeans: str = "diffmap",
n_clusters: int = 30,
n_clusters2: int = 50,
n_init: int = 10,
random_state: int = 0,
class_label: str = "spectral_leiden_labels",
) -> None:
"""Cluster the data using Spectral Leiden algorithm. [Li20]_
Parameters
----------
data: ``pegasusio.MultimodalData``
Annotated data matrix with rows for cells and columns for genes.
rep: ``str``, optional, default: ``"pca"``
The embedding representation used for clustering. Keyword ``'X_' + rep`` must exist in ``data.obsm``. By default, use PCA coordinates.
resolution: ``int``, optional, default: ``1.3``
Resolution factor. Higher resolution tends to find more clusters.
rep_kmeans: ``str``, optional, default: ``"diffmap"``
The embedding representation on which the KMeans runs. Keyword must exist in ``data.obsm``. By default, use Diffusion Map coordinates. If diffmap is not calculated, use PCA coordinates instead.
n_clusters: ``int``, optional, default: ``30``
The number of first level clusters.
n_clusters2: ``int``, optional, default: ``50``
The number of second level clusters.
n_init: ``int``, optional, default: ``10``
Number of kmeans tries for the first level clustering. Default is set to be the same as scikit-learn Kmeans function.
random_state: ``int``, optional, default: ``0``
Random seed for reproducing results.
class_label: ``str``, optional, default: ``"spectral_leiden_labels"``
Key name for storing cluster labels in ``data.obs``.
Returns
-------
``None``
Update ``data.obs``:
* ``data.obs[class_label]``: Cluster labels for cells as categorical data.
Examples
--------
>>> pg.spectral_leiden(data)
"""
try:
import leidenalg
except ImportError:
print("Need leidenalg! Try 'pip install leidenalg'.")
if "X_" + rep_kmeans not in data.obsm.keys():
logger.warning(
"{} is not calculated, switch to pca instead.".format(rep_kmeans)
)
rep_kmeans = "pca"
if "X_" + rep_kmeans not in data.obsm.keys():
raise ValueError("Please run {} first!".format(rep_kmeans))
if "W_" + rep not in data.uns:
raise ValueError("Cannot find affinity matrix. Please run neighbors first!")
labels = partition_cells_by_kmeans(
data.obsm[rep_kmeans], n_clusters, n_clusters2, n_init, random_state,
)
W = data.uns["W_" + rep]
G = construct_graph(W)
partition_type = leidenalg.RBConfigurationVertexPartition
partition = partition_type(
G, resolution_parameter=resolution, weights="weight", initial_membership=labels
)
partition_agg = partition.aggregate_partition()
optimiser = leidenalg.Optimiser()
optimiser.set_rng_seed(random_state)
diff = optimiser.optimise_partition(partition_agg, -1)
partition.from_coarse_partition(partition_agg)
labels = np.array([str(x + 1) for x in partition.membership])
categories = natsorted(np.unique(labels))
data.obs[class_label] = pd.Categorical(values=labels, categories=categories)
n_clusters = data.obs[class_label].cat.categories.size
logger.info(f"Spectral Leiden clustering is done. Get {n_clusters} clusters.")
def cluster(
data: MultimodalData,
algo: str = "louvain",
rep: str = "pca",
resolution: int = 1.3,
random_state: int = 0,
class_label: str = None,
n_iter: int = -1,
rep_kmeans: str = "diffmap",
n_clusters: int = 30,
n_clusters2: int = 50,
n_init: int = 10,
) -> None:
"""Cluster the data using the chosen algorithm.
Candidates are *louvain*, *leiden*, *spectral_louvain* and *spectral_leiden*.
    If data have fewer than 100,000 cells and there are clusters with sizes of 1, resolution is automatically reduced until no cluster of size 1 appears.
Parameters
----------
data: ``pegasusio.MultimodalData``
Annotated data matrix with rows for cells and columns for genes.
algo: ``str``, optional, default: ``"louvain"``
Which clustering algorithm to use. Choices are louvain, leiden, spectral_louvain, spectral_leiden
rep: ``str``, optional, default: ``"pca"``
The embedding representation used for clustering. Keyword ``'X_' + rep`` must exist in ``data.obsm``. By default, use PCA coordinates.
resolution: ``int``, optional, default: ``1.3``
Resolution factor. Higher resolution tends to find more clusters.
random_state: ``int``, optional, default: ``0``
Random seed for reproducing results.
class_label: ``str``, optional, default: None
Key name for storing cluster labels in ``data.obs``. If None, use 'algo_labels'.
n_iter: ``int``, optional, default: ``-1``
Number of iterations that Leiden algorithm runs. If ``-1``, run the algorithm until reaching its optimal clustering.
rep_kmeans: ``str``, optional, default: ``"diffmap"``
The embedding representation on which the KMeans runs. Keyword must exist in ``data.obsm``. By default, use Diffusion Map coordinates. If diffmap is not calculated, use PCA coordinates instead.
n_clusters: ``int``, optional, default: ``30``
The number of first level clusters.
n_clusters2: ``int``, optional, default: ``50``
The number of second level clusters.
n_init: ``int``, optional, default: ``10``
Number of kmeans tries for the first level clustering. Default is set to be the same as scikit-learn Kmeans function.
Returns
-------
``None``
Update ``data.obs``:
* ``data.obs[class_label]``: Cluster labels of cells as categorical data.
Examples
--------
>>> pg.cluster(data, algo = 'leiden')
"""
if algo not in {"louvain", "leiden", "spectral_louvain", "spectral_leiden"}:
raise ValueError("Unknown clustering algorithm {}.".format(algo))
if class_label is None:
class_label = algo + "_labels"
kwargs = {
"data": data,
"rep": rep,
"resolution": resolution,
"random_state": random_state,
"class_label": class_label,
}
if algo == "leiden":
kwargs["n_iter"] = n_iter
if algo in ["spectral_louvain", "spectral_leiden"]:
kwargs.update(
{
"rep_kmeans": rep_kmeans,
"n_clusters": n_clusters,
"n_clusters2": n_clusters2,
"n_init": n_init,
}
)
cluster_func = globals()[algo]
cluster_func(**kwargs) # clustering
if data.shape[0] < 100000 and data.obs[class_label].value_counts().min() == 1:
new_resol = resolution
while new_resol > 0.0:
new_resol -= 0.1
kwargs["resolution"] = new_resol
cluster_func(**kwargs)
if data.obs[class_label].value_counts().min() > 1:
break
logger.warning(
"Reduced resolution from {:.2f} to {:.2f} to avoid clusters of size 1.".format(
resolution, new_resol
)
)
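
# A minimal sketch (not part of the original module) illustrating the two-level
# k-means partitioning used above to seed the spectral clustering methods; the
# synthetic data and cluster counts are arbitrary.
if __name__ == "__main__":
    X_demo = np.random.RandomState(0).rand(1000, 50)
    demo_labels = partition_cells_by_kmeans(
        X_demo, n_clusters=5, n_clusters2=3, n_init=10, random_state=0
    )
    # Up to 5 coarse clusters, each split into up to 3 fine clusters, so the
    # labels fall in [0, 15); the 10-cell average minimum is easily satisfied.
    print(np.unique(demo_labels).size, "fine clusters over", X_demo.shape[0], "cells")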
| 33.365854 | 201 | 0.646991 | 2,037 | 16,416 | 5.078547 | 0.11242 | 0.050749 | 0.036539 | 0.02465 | 0.82813 | 0.810343 | 0.79971 | 0.783374 | 0.77158 | 0.747801 | 0 | 0.010396 | 0.232395 | 16,416 | 491 | 202 | 33.433809 | 0.810571 | 0.420078 | 0 | 0.53527 | 0 | 0 | 0.138814 | 0.005034 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024896 | false | 0 | 0.074689 | 0 | 0.107884 | 0.016598 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
29831e1e80772ba97b088169817051bcefe4c0cf | 157 | py | Python | lib/exception.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 14 | 2015-02-06T02:47:57.000Z | 2020-03-14T15:06:05.000Z | lib/exception.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 3 | 2019-02-27T19:29:11.000Z | 2021-06-02T02:14:27.000Z | lib/exception.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 108 | 2015-03-26T08:58:49.000Z | 2022-03-21T05:21:39.000Z | class TimeoutException(Exception):
def __init__(self, value):
self.parameter = value
def __str__(self):
return repr(self.parameter)
| 22.428571 | 35 | 0.66879 | 17 | 157 | 5.705882 | 0.647059 | 0.268041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235669 | 157 | 6 | 36 | 26.166667 | 0.808333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4667d36ae3605c96abde4f55e8992a5919769195 | 160 | py | Python | IOT/RaspberryPi/hellog/rasp_server/admin.py | syureu/Hellog2 | f61524ffe6f2a3836d13085e9e29e2015bba9f87 | [
"Apache-2.0"
] | null | null | null | IOT/RaspberryPi/hellog/rasp_server/admin.py | syureu/Hellog2 | f61524ffe6f2a3836d13085e9e29e2015bba9f87 | [
"Apache-2.0"
] | null | null | null | IOT/RaspberryPi/hellog/rasp_server/admin.py | syureu/Hellog2 | f61524ffe6f2a3836d13085e9e29e2015bba9f87 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from .models import User, Record, Machine
admin.site.register(User)
admin.site.register(Record)
admin.site.register(Machine)
| 20 | 41 | 0.80625 | 23 | 160 | 5.608696 | 0.478261 | 0.209302 | 0.395349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 160 | 7 | 42 | 22.857143 | 0.889655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
46aa43dcf86f46a8a8a1bd56df42cdd54c4f6229 | 174 | py | Python | magi/wrappers/__init__.py | akbir/magi | cff26ddb87165bb6e19796dc77521e3191afcffc | [
"Apache-2.0"
] | 86 | 2021-11-24T21:53:29.000Z | 2022-03-27T13:35:45.000Z | magi/wrappers/__init__.py | akbir/magi | cff26ddb87165bb6e19796dc77521e3191afcffc | [
"Apache-2.0"
] | 7 | 2021-11-26T17:23:29.000Z | 2022-03-07T21:49:44.000Z | magi/wrappers/__init__.py | akbir/magi | cff26ddb87165bb6e19796dc77521e3191afcffc | [
"Apache-2.0"
] | 3 | 2021-11-27T11:13:18.000Z | 2022-01-24T14:38:53.000Z | """Environment wrappers for dm_env.Environment."""
from magi.wrappers.filter import TakeKeyWrapper # noqa
from magi.wrappers.frame_stack import FrameStackingWrapper # noqa
| 43.5 | 66 | 0.816092 | 21 | 174 | 6.666667 | 0.666667 | 0.114286 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 174 | 3 | 67 | 58 | 0.897436 | 0.316092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
46ab12c0ebb4553773651a58c9cd99300bc5d783 | 6,653 | py | Python | Pytorch/denseForecastNet.py | khsibr/forecastNet | f3a3d8a7a675dfdd37365e9945c1d02548465c61 | [
"MIT"
] | 81 | 2020-02-18T19:07:28.000Z | 2022-03-22T23:08:09.000Z | Pytorch/denseForecastNet.py | khsibr/forecastNet | f3a3d8a7a675dfdd37365e9945c1d02548465c61 | [
"MIT"
] | 12 | 2020-05-02T14:48:10.000Z | 2021-08-16T02:51:21.000Z | Pytorch/denseForecastNet.py | khsibr/forecastNet | f3a3d8a7a675dfdd37365e9945c1d02548465c61 | [
"MIT"
] | 23 | 2020-02-20T11:22:21.000Z | 2022-03-26T07:46:58.000Z | """
ForecastNet with cells comprising densely connected layers.
ForecastNetDenseModel provides the mixture density network outputs.
ForecastNetDenseModel2 provides the linear outputs.
Paper:
"ForecastNet: A Time-Variant Deep Feed-Forward Neural Network Architecture for Multi-Step-Ahead Time-Series Forecasting"
by Joel Janek Dabrowski, YiFan Zhang, and Ashfaqur Rahman
Link to the paper: https://arxiv.org/abs/2002.04155
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
class ForecastNetDenseModel(nn.Module):
"""
Class for the densely connected hidden cells version of the model
"""
def __init__(self, input_dim, hidden_dim, output_dim, in_seq_length, out_seq_length, device):
"""
Constructor
:param input_dim: Dimension of the inputs
:param hidden_dim: Number of hidden units
:param output_dim: Dimension of the outputs
:param in_seq_length: Length of the input sequence
:param out_seq_length: Length of the output sequence
"""
super(ForecastNetDenseModel, self).__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.output_dim = output_dim
self.in_seq_length = in_seq_length
self.out_seq_length = out_seq_length
self.device = device
# Input dimension of componed inputs and sequences
input_dim_comb = input_dim * in_seq_length
# Initialise layers
hidden_layer1 = [nn.Linear(input_dim_comb, hidden_dim)]
for i in range(out_seq_length - 1):
hidden_layer1.append(nn.Linear(input_dim_comb + hidden_dim + output_dim, hidden_dim))
self.hidden_layer1 = nn.ModuleList(hidden_layer1)
self.hidden_layer2 = nn.ModuleList([nn.Linear(hidden_dim, hidden_dim) for i in range(out_seq_length)])
self.mu_layer = nn.ModuleList([nn.Linear(hidden_dim, output_dim) for i in range(out_seq_length)])
self.sigma_layer = nn.ModuleList([nn.Linear(hidden_dim, output_dim) for i in range(out_seq_length)])
def forward(self, input, target, is_training=False):
"""
Forward propagation of the dense ForecastNet model
:param input: Input data in the form [input_seq_length, batch_size, input_dim]
:param target: Target data in the form [output_seq_length, batch_size, output_dim]
:param is_training: If true, use target data for training, else use the previous output.
:return: outputs: Sampled forecast outputs in the form [decoder_seq_length, batch_size, input_dim]
:return: mu: Outputs of the mean layer [decoder_seq_length, batch_size, input_dim]
:return: sigma: Outputs of the standard deviation layer [decoder_seq_length, batch_size, input_dim]
"""
# Initialise outputs
outputs = torch.zeros((self.out_seq_length, input.shape[0], self.output_dim)).to(self.device)
mu = torch.zeros((self.out_seq_length, input.shape[0], self.output_dim)).to(self.device)
sigma = torch.zeros((self.out_seq_length, input.shape[0], self.output_dim)).to(self.device)
# First input
next_cell_input = input
for i in range(self.out_seq_length):
# Propagate through cell
out = F.relu(self.hidden_layer1[i](next_cell_input))
out = F.relu(self.hidden_layer2[i](out))
# Calculate the output
mu_ = self.mu_layer[i](out)
sigma_ = F.softplus(self.sigma_layer[i](out))
mu[i,:,:] = mu_
sigma[i,:,:] = sigma_
outputs[i,:,:] = torch.normal(mu_, sigma_).to(self.device)
# Prepare the next input
if is_training:
next_cell_input = torch.cat((input, out, target[i, :, :]), dim=1)
else:
next_cell_input = torch.cat((input, out, outputs[i, :, :]), dim=1)
return outputs, mu, sigma
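
# A minimal sketch (not part of the original module): a forward pass of the
# mixture-density model above on dummy tensors. The batch size, dimensions,
# and CPU device are arbitrary assumptions for illustration.
def _demo_dense_forecastnet():
    device = torch.device("cpu")
    model = ForecastNetDenseModel(
        input_dim=1, hidden_dim=24, output_dim=1,
        in_seq_length=10, out_seq_length=5, device=device,
    )
    x = torch.randn(8, 10)    # [batch_size, in_seq_length * input_dim]
    y = torch.randn(5, 8, 1)  # [out_seq_length, batch_size, output_dim]
    outputs, mu, sigma = model(x, y, is_training=True)
    return outputs.shape      # torch.Size([5, 8, 1])
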
class ForecastNetDenseModel2(nn.Module):
"""
Class for the densely connected hidden cells version of the model
"""
def __init__(self, input_dim, hidden_dim, output_dim, in_seq_length, out_seq_length, device):
"""
Constructor
:param input_dim: Dimension of the inputs
:param hidden_dim: Number of hidden units
:param output_dim: Dimension of the outputs
:param in_seq_length: Length of the input sequence
:param out_seq_length: Length of the output sequence
"""
super(ForecastNetDenseModel2, self).__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.output_dim = output_dim
self.in_seq_length = in_seq_length
self.out_seq_length = out_seq_length
self.device = device
# Input dimension of componed inputs and sequences
input_dim_comb = input_dim * in_seq_length
# Initialise layers
hidden_layer1 = [nn.Linear(input_dim_comb, hidden_dim)]
for i in range(out_seq_length - 1):
hidden_layer1.append(nn.Linear(input_dim_comb + hidden_dim + output_dim, hidden_dim))
self.hidden_layer1 = nn.ModuleList(hidden_layer1)
self.hidden_layer2 = nn.ModuleList([nn.Linear(hidden_dim, hidden_dim) for i in range(out_seq_length)])
self.output_layer = nn.ModuleList([nn.Linear(hidden_dim, output_dim) for i in range(out_seq_length)])
def forward(self, input, target, is_training=False):
"""
Forward propagation of the dense ForecastNet model
:param input: Input data in the form [input_seq_length, batch_size, input_dim]
:param target: Target data in the form [output_seq_length, batch_size, output_dim]
:param is_training: If true, use target data for training, else use the previous output.
:return: outputs: Forecast outputs in the form [decoder_seq_length, batch_size, input_dim]
"""
# Initialise outputs
outputs = torch.zeros((self.out_seq_length, input.shape[0], self.output_dim)).to(self.device)
# First input
next_cell_input = input
for i in range(self.out_seq_length):
# Propagate through cell
hidden = F.relu(self.hidden_layer1[i](next_cell_input))
hidden = F.relu(self.hidden_layer2[i](hidden))
# Calculate the output
output = self.output_layer[i](hidden)
outputs[i,:,:] = output
# Prepare the next input
if is_training:
next_cell_input = torch.cat((input, hidden, target[i, :, :]), dim=1)
else:
next_cell_input = torch.cat((input, hidden, outputs[i, :, :]), dim=1)
return outputs | 47.863309 | 120 | 0.666767 | 905 | 6,653 | 4.660773 | 0.142541 | 0.083215 | 0.059744 | 0.023471 | 0.816027 | 0.811759 | 0.789474 | 0.785206 | 0.772167 | 0.754623 | 0 | 0.00715 | 0.243199 | 6,653 | 139 | 121 | 47.863309 | 0.830586 | 0.352623 | 0 | 0.567164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059701 | false | 0 | 0.044776 | 0 | 0.164179 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d3c47d0dd1bb5afb5fbbc8a8970bd181f216950a | 7,773 | py | Python | src/saltext/vmware/modules/ntp.py | jain-prerna/salt-ext-modules-vmware-old | 89ea6dd77c6d5a35dc55c23adbdc361949a63057 | [
"Apache-2.0"
] | null | null | null | src/saltext/vmware/modules/ntp.py | jain-prerna/salt-ext-modules-vmware-old | 89ea6dd77c6d5a35dc55c23adbdc361949a63057 | [
"Apache-2.0"
] | null | null | null | src/saltext/vmware/modules/ntp.py | jain-prerna/salt-ext-modules-vmware-old | 89ea6dd77c6d5a35dc55c23adbdc361949a63057 | [
"Apache-2.0"
] | null | null | null | # SPDX-License-Identifier: Apache-2.0
# NOTE: this extract omitted the module's imports and private helpers
# (_check_hosts, _get_host_ref, _get_date_time_mgr); the imports below are
# assumed from the function bodies.
import datetime
import logging
import salt.utils.vmware
from pyVmomi import vim
from salt.exceptions import CommandExecutionError
log = logging.getLogger(__name__)
def set_ntp_config(
host,
username,
password,
ntp_servers,
protocol=None,
port=None,
host_names=None,
verify_ssl=True,
):
"""
Set NTP configuration for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
ntp_servers
A list of servers that should be added to and configured for the specified
host's NTP configuration.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter which hosts to configure NTP servers on.
If host_names is not provided, the NTP servers will be configured for the
``host`` location instead. This is useful for when service instance connection
information is used for a single ESXi host.
verify_ssl
Verify the SSL certificate. Default: True
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.ntp_configure my.esxi.host root bad-password '[192.174.1.100, 192.174.1.200]'
# Used for connecting to a vCenter Server
salt '*' vsphere.ntp_configure my.vcenter.location root bad-password '[192.174.1.100, 192.174.1.200]' \
host_names='[esxi-1.host.com, esxi-2.host.com]'
"""
service_instance = salt.utils.vmware.get_service_instance(
host=host,
username=username,
password=password,
protocol=protocol,
port=port,
verify_ssl=verify_ssl,
)
if not isinstance(ntp_servers, list):
raise CommandExecutionError("'ntp_servers' must be a list.")
# Get NTP Config Object from ntp_servers
ntp_config = vim.HostNtpConfig(server=ntp_servers)
# Get DateTimeConfig object from ntp_config
date_config = vim.HostDateTimeConfig(ntpConfig=ntp_config)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
date_time_manager = _get_date_time_mgr(host_ref)
log.debug("Configuring NTP Servers '{}' for host '{}'.".format(ntp_servers, host_name))
try:
date_time_manager.UpdateDateTimeConfig(config=date_config)
except vim.fault.HostConfigFault as err:
msg = "vsphere.ntp_configure_servers failed: {}".format(err)
log.debug(msg)
ret.update({host_name: {"Error": msg}})
continue
ret.update({host_name: {"NTP Servers": ntp_config}})
return ret
def update_host_datetime(
host, username, password, protocol=None, port=None, host_names=None, verify_ssl=True
):
"""
Update the date/time on the given host or list of host_names. This function should be
used with caution since network delays and execution delays can result in time skews.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to
tell vCenter which hosts should update their date/time.
If host_names is not provided, the date/time will be updated for the ``host``
location instead. This is useful for when service instance connection
information is used for a single ESXi host.
verify_ssl
Verify the SSL certificate. Default: True
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.update_date_time my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.update_date_time my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
"""
service_instance = salt.utils.vmware.get_service_instance(
host=host,
username=username,
password=password,
protocol=protocol,
port=port,
verify_ssl=verify_ssl,
)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
date_time_manager = _get_date_time_mgr(host_ref)
try:
date_time_manager.UpdateDateTime(datetime.datetime.utcnow())
except vim.fault.HostConfigFault as err:
msg = "'vsphere.update_date_time' failed for host {}: {}".format(host_name, err)
log.debug(msg)
ret.update({host_name: {"Error": msg}})
continue
ret.update({host_name: {"Datetime Updated": True}})
return ret
def get_ntp_config(
host, username, password, protocol=None, port=None, host_names=None, verify_ssl=True
):
"""
Get the NTP configuration information for a given host or list of host_names.
host
The location of the host.
username
The username used to login to the host, such as ``root``.
password
The password used to login to the host.
protocol
Optionally set to alternate protocol if the host is not using the default
protocol. Default protocol is ``https``.
port
Optionally set to alternate port if the host is not using the default
port. Default port is ``443``.
host_names
List of ESXi host names. When the host, username, and password credentials
are provided for a vCenter Server, the host_names argument is required to tell
vCenter the hosts for which to get NTP configuration information.
If host_names is not provided, the NTP configuration will be retrieved for the
``host`` location instead. This is useful for when service instance connection
information is used for a single ESXi host.
verify_ssl
Verify the SSL certificate. Default: True
CLI Example:
.. code-block:: bash
# Used for single ESXi host connection information
salt '*' vsphere.get_ntp_config my.esxi.host root bad-password
# Used for connecting to a vCenter Server
salt '*' vsphere.get_ntp_config my.vcenter.location root bad-password \
host_names='[esxi-1.host.com, esxi-2.host.com]'
"""
service_instance = salt.utils.vmware.get_service_instance(
host=host,
username=username,
password=password,
protocol=protocol,
port=port,
verify_ssl=verify_ssl,
)
host_names = _check_hosts(service_instance, host, host_names)
ret = {}
for host_name in host_names:
host_ref = _get_host_ref(service_instance, host, host_name=host_name)
ntp_config = host_ref.configManager.dateTimeSystem.dateTimeInfo.ntpConfig.server
ret.update({host_name: ntp_config})
return ret
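# Illustrative direct call (a sketch: the salt loader normally exposes these
# as vsphere.* execution-module functions, and the host/credentials below are
# placeholders):
if __name__ == "__main__":
    print(get_ntp_config("my.esxi.host", "root", "bad-password",
                         verify_ssl=False))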
| 33.795652 | 111 | 0.670783 | 1,055 | 7,773 | 4.807583 | 0.143128 | 0.053233 | 0.033715 | 0.040812 | 0.777997 | 0.761238 | 0.745662 | 0.733044 | 0.703864 | 0.695189 | 0 | 0.00988 | 0.257816 | 7,773 | 229 | 112 | 33.943231 | 0.869301 | 0.560659 | 0 | 0.6625 | 0 | 0 | 0.065541 | 0.018206 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0375 | false | 0.075 | 0 | 0 | 0.075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d3f02b57b6a0fe2696df5aeff800a681cd29c4df | 2,962 | py | Python | terrascript/databricks/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/databricks/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/databricks/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/databricks/r.py
# Automatically generated by tools/makecode.py ()
import warnings
warnings.warn(
"using the 'legacy layout' is deprecated", DeprecationWarning, stacklevel=2
)
import terrascript
class databricks_aws_s3_mount(terrascript.Resource):
pass
class databricks_azure_adls_gen1_mount(terrascript.Resource):
pass
class databricks_azure_adls_gen2_mount(terrascript.Resource):
pass
class databricks_azure_blob_mount(terrascript.Resource):
pass
class databricks_cluster(terrascript.Resource):
pass
class databricks_cluster_policy(terrascript.Resource):
pass
class databricks_dbfs_file(terrascript.Resource):
pass
class databricks_directory(terrascript.Resource):
pass
class databricks_global_init_script(terrascript.Resource):
pass
class databricks_group(terrascript.Resource):
pass
class databricks_group_instance_profile(terrascript.Resource):
pass
class databricks_group_member(terrascript.Resource):
pass
class databricks_instance_pool(terrascript.Resource):
pass
class databricks_instance_profile(terrascript.Resource):
pass
class databricks_ip_access_list(terrascript.Resource):
pass
class databricks_job(terrascript.Resource):
pass
class databricks_mws_credentials(terrascript.Resource):
pass
class databricks_mws_customer_managed_keys(terrascript.Resource):
pass
class databricks_mws_log_delivery(terrascript.Resource):
pass
class databricks_mws_networks(terrascript.Resource):
pass
class databricks_mws_private_access_settings(terrascript.Resource):
pass
class databricks_mws_storage_configurations(terrascript.Resource):
pass
class databricks_mws_vpc_endpoint(terrascript.Resource):
pass
class databricks_mws_workspaces(terrascript.Resource):
pass
class databricks_notebook(terrascript.Resource):
pass
class databricks_obo_token(terrascript.Resource):
pass
class databricks_permissions(terrascript.Resource):
pass
class databricks_pipeline(terrascript.Resource):
pass
class databricks_secret(terrascript.Resource):
pass
class databricks_secret_acl(terrascript.Resource):
pass
class databricks_secret_scope(terrascript.Resource):
pass
class databricks_service_principal(terrascript.Resource):
pass
class databricks_sql_dashboard(terrascript.Resource):
pass
class databricks_sql_endpoint(terrascript.Resource):
pass
class databricks_sql_permissions(terrascript.Resource):
pass
class databricks_sql_query(terrascript.Resource):
pass
class databricks_sql_visualization(terrascript.Resource):
pass
class databricks_sql_widget(terrascript.Resource):
pass
class databricks_token(terrascript.Resource):
pass
class databricks_user(terrascript.Resource):
pass
class databricks_user_instance_profile(terrascript.Resource):
pass
class databricks_workspace_conf(terrascript.Resource):
pass
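# Usage sketch (assumptions: the legacy python-terrascript API that this file's
# deprecation warning refers to, plus Databricks provider argument names):
if __name__ == "__main__":
    ts = terrascript.Terrascript()
    ts += databricks_cluster(
        "example",
        cluster_name="demo",
        spark_version="9.1.x-scala2.12",
        node_type_id="i3.xlarge",
        num_workers=1,
    )
    print(ts)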
| 16.640449 | 79 | 0.799797 | 326 | 2,962 | 6.98773 | 0.251534 | 0.276558 | 0.424056 | 0.503951 | 0.789728 | 0.656277 | 0.136523 | 0.045654 | 0 | 0 | 0 | 0.001565 | 0.13707 | 2,962 | 177 | 80 | 16.734463 | 0.889671 | 0.025321 | 0 | 0.47191 | 1 | 0 | 0.013523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.47191 | 0.022472 | 0 | 0.494382 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
3130608ee2b25f9f003e000c9bf2da1b03c611c6 | 68 | py | Python | wand/apps/relations/__init__.py | phvalguima/kafka_charm_lib | 0062a277c12f5650f2a18b17eae8529500fafafe | [
"Apache-2.0"
] | null | null | null | wand/apps/relations/__init__.py | phvalguima/kafka_charm_lib | 0062a277c12f5650f2a18b17eae8529500fafafe | [
"Apache-2.0"
] | null | null | null | wand/apps/relations/__init__.py | phvalguima/kafka_charm_lib | 0062a277c12f5650f2a18b17eae8529500fafafe | [
"Apache-2.0"
] | null | null | null | from .zookeeper import * # noqa
from .kafka_connect import * # noqa
| 22.666667 | 35 | 0.735294 | 9 | 68 | 5.444444 | 0.666667 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 68 | 2 | 36 | 34 | 0.875 | 0.132353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31388a55f4ca3cec5c3cbec0c8964b824d486b7c | 2,984 | py | Python | src/visualization/climate/amip.py | jejjohnson/2019_rbig_rs | 00df5c623d55895e0b43a4130bb6c601fae84890 | [
"MIT"
] | 2 | 2020-05-15T17:31:39.000Z | 2021-03-16T08:49:33.000Z | src/visualization/climate/amip.py | jejjohnson/rbig_eo | 00df5c623d55895e0b43a4130bb6c601fae84890 | [
"MIT"
] | null | null | null | src/visualization/climate/amip.py | jejjohnson/rbig_eo | 00df5c623d55895e0b43a4130bb6c601fae84890 | [
"MIT"
] | null | null | null | import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
plt.style.use(["seaborn-talk", "ggplot"])
def plot_individual(
df: pd.DataFrame, cmip_model: str, spatial_res: int, info: str = "h"
) -> None:
# subset data
df = df[df["cmip"] == cmip_model]
df = df[df["spatial"] == spatial_res]
fig, ax = plt.subplots(figsize=(10, 7))
if info == "h":
pts1 = sns.lineplot(x="base_time", y="h_base", data=df, linewidth=5)
pts2 = sns.lineplot(x="base_time", y="h_cmip", data=df, linewidth=5)
ax.set_xticklabels(np.arange(1980, 2009, 1), fontsize=20)
ax.set_ylabel("Entropy, H")
elif info == "mi":
pts1 = sns.lineplot(x="cmip_time", y="mi", data=df, linewidth=5)
ax.set_xticklabels(np.arange(1980, 2009, 1), fontsize=20)
ax.set_ylabel("Mutual Information, MI")
else:
raise ValueError("Unrecognized info measure:", info)
plt.xticks(rotation="vertical")
ax.set_xlabel("")
plt.show()
def plot_individual_all(df: pd.DataFrame, spatial_res: int, info: str = "h") -> None:
# subset data
df = df[df["spatial"] == spatial_res]
fig, ax = plt.subplots(figsize=(10, 7))
if info == "h":
pts1 = sns.lineplot(
x="base_time",
y="h_base",
data=df,
linestyle="--",
color="black",
linewidth=6,
)
pts2 = sns.lineplot(x="base_time", y="h_cmip", data=df, hue="cmip", linewidth=5)
ax.set_ylabel("Entropy, H")
ax.set_xticklabels(np.arange(1980, 2009, 1), fontsize=20)
elif info == "mi":
pts2 = sns.lineplot(x="cmip_time", y="mi", data=df, hue="cmip", linewidth=5)
ax.set_xticklabels(np.arange(1980, 2009, 1), fontsize=20)
ax.set_ylabel("Mutual Information, MI")
else:
raise ValueError("Unrecognized info measure:", info)
plt.xticks(rotation="vertical")
ax.set_xlabel("")
plt.show()
def plot_diff(df: pd.DataFrame, spatial_res: int) -> None:
# subset data
df = df[df["spatial"] == spatial_res]
fig, ax = plt.subplots(figsize=(10, 7))
df["h_abs_diff"] = abs(df["h_base"] - df["h_cmip"])
pts2 = sns.lineplot(x="base_time", y="h_abs_diff", hue="cmip", data=df, linewidth=5)
ax.set_xlabel("Time", fontsize=20)
ax.set_xticklabels(np.arange(1980, 2009, 1), fontsize=20)
ax.set_ylabel("Absolute Difference, H", fontsize=20)
plt.xticks(rotation="vertical")
plt.show()
def plot_individual_diff(df: pd.DataFrame, cmip_model: str, spatial_res: int) -> None:
# subset data
df = df[df["cmip"] == cmip_model]
df = df[df["spatial"] == spatial_res]
fig, ax = plt.subplots(figsize=(10, 7))
df["h_abs_diff"] = abs(df["h_base"] - df["h_cmip"])
pts2 = sns.lineplot(x="base_time", y="h_abs_diff", data=df, linewidth=5)
plt.xticks(rotation="vertical")
ax.set_xticklabels(np.arange(1980, 2009, 1), fontsize=20)
plt.show()
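# Smoke-test sketch (the column names and values are assumptions matching the
# accessors used above):
if __name__ == "__main__":
    _df = pd.DataFrame({
        "cmip": ["modelA"] * 3,
        "spatial": [1] * 3,
        "base_time": [1980, 1981, 1982],
        "cmip_time": [1980, 1981, 1982],
        "h_base": [1.0, 1.1, 1.2],
        "h_cmip": [0.9, 1.0, 1.3],
        "mi": [0.5, 0.6, 0.4],
    })
    plot_individual(_df, cmip_model="modelA", spatial_res=1, info="h")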
| 31.410526 | 88 | 0.61059 | 436 | 2,984 | 4.050459 | 0.18578 | 0.039638 | 0.05436 | 0.05436 | 0.830691 | 0.798981 | 0.770102 | 0.755946 | 0.739524 | 0.667044 | 0 | 0.042006 | 0.218164 | 2,984 | 94 | 89 | 31.744681 | 0.714959 | 0.015751 | 0 | 0.588235 | 0 | 0 | 0.142906 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
314469c504e36285d4078f67f1447e280e0119cf | 114 | py | Python | app.py | FacundoLepere/test_heroku_cv2_python_3.9 | 5ca3ff8c5d4b661c3b55a92be2cbc770203a3c99 | [
"MIT"
] | null | null | null | app.py | FacundoLepere/test_heroku_cv2_python_3.9 | 5ca3ff8c5d4b661c3b55a92be2cbc770203a3c99 | [
"MIT"
] | null | null | null | app.py | FacundoLepere/test_heroku_cv2_python_3.9 | 5ca3ff8c5d4b661c3b55a92be2cbc770203a3c99 | [
"MIT"
] | null | null | null | from flask import Flask
import cv2
app = Flask(__name__)
@app.route("/")
def index():
return "Hello World!"
| 12.666667 | 25 | 0.675439 | 16 | 114 | 4.5625 | 0.75 | 0.30137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010753 | 0.184211 | 114 | 8 | 26 | 14.25 | 0.774194 | 0 | 0 | 0 | 0 | 0 | 0.114035 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
316b053358082f4364ac5e355e4d5ff9687b3c18 | 22 | py | Python | wamv_morse/src/wamv_sim/builder/sensors/__init__.py | mission-systems-pty-ltd/WamV-Morse-Sim | 80007b4a02fa10f69d167526eb0add636305555f | [
"BSD-2-Clause"
] | 2 | 2019-09-04T00:58:20.000Z | 2020-08-17T05:20:16.000Z | wamv_morse/src/wamv_sim/builder/sensors/__init__.py | mission-systems-pty-ltd/WamV-Morse-Sim | 80007b4a02fa10f69d167526eb0add636305555f | [
"BSD-2-Clause"
] | null | null | null | wamv_morse/src/wamv_sim/builder/sensors/__init__.py | mission-systems-pty-ltd/WamV-Morse-Sim | 80007b4a02fa10f69d167526eb0add636305555f | [
"BSD-2-Clause"
] | 1 | 2021-12-15T09:30:08.000Z | 2021-12-15T09:30:08.000Z | from .DVL import Dvl
| 7.333333 | 20 | 0.727273 | 4 | 22 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 2 | 21 | 11 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
317780c9aafe38ee41801a34e89682053d094689 | 40 | py | Python | cluster_tools/meshes/__init__.py | constantinpape/cluster_tools | a7e88545b58f8315723bc47583916e1900a7892d | [
"MIT"
] | 28 | 2018-12-09T22:11:52.000Z | 2022-02-01T16:48:23.000Z | cluster_tools/meshes/__init__.py | constantinpape/cluster_tools | a7e88545b58f8315723bc47583916e1900a7892d | [
"MIT"
] | 16 | 2019-01-27T10:59:33.000Z | 2022-01-11T09:09:24.000Z | cluster_tools/meshes/__init__.py | constantinpape/cluster_tools | a7e88545b58f8315723bc47583916e1900a7892d | [
"MIT"
] | 11 | 2018-12-09T22:11:56.000Z | 2021-08-08T20:10:13.000Z | from .mesh_workflow import MeshWorkflow
| 20 | 39 | 0.875 | 5 | 40 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31845682767bcf2d8db4400e73c1f1c0990fa9ee | 27 | py | Python | axon_conabio/losses/__init__.py | mbsantiago/axon-conabio | 0348edda1a49855912057b15b9516562a999ccee | [
"MIT"
] | null | null | null | axon_conabio/losses/__init__.py | mbsantiago/axon-conabio | 0348edda1a49855912057b15b9516562a999ccee | [
"MIT"
] | null | null | null | axon_conabio/losses/__init__.py | mbsantiago/axon-conabio | 0348edda1a49855912057b15b9516562a999ccee | [
"MIT"
] | null | null | null | from .baseloss import Loss
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
31b3c6533e83926f1092e1f7937a46beb7abfa2e | 125 | py | Python | backend_rest/sales/admin.py | ezrankayamba/twiga_expodocs | 39303f137f3761e7024e1e0e1a6449f4187e30e9 | [
"MIT"
] | null | null | null | backend_rest/sales/admin.py | ezrankayamba/twiga_expodocs | 39303f137f3761e7024e1e0e1a6449f4187e30e9 | [
"MIT"
] | 13 | 2020-02-21T13:58:18.000Z | 2022-03-12T00:16:26.000Z | backend_rest/sales/admin.py | ezrankayamba/twiga_expodocs | 39303f137f3761e7024e1e0e1a6449f4187e30e9 | [
"MIT"
] | null | null | null | from django.contrib import admin
from . import models
admin.site.register(models.Sale)
admin.site.register(models.SaleDoc)
| 17.857143 | 35 | 0.808 | 18 | 125 | 5.611111 | 0.555556 | 0.178218 | 0.336634 | 0.455446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096 | 125 | 6 | 36 | 20.833333 | 0.893805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
31c476dce6ecc6fc8eadb87d44d54635f2c42fae | 5,987 | py | Python | networks/resnet50.py | Times125/break_captcha | 9ab16d1d7737630786039d07caec308bdb8a26d9 | [
"MIT"
] | 13 | 2019-08-14T09:36:03.000Z | 2021-04-05T06:35:05.000Z | networks/resnet50.py | Times125/break_captcha | 9ab16d1d7737630786039d07caec308bdb8a26d9 | [
"MIT"
] | 6 | 2019-09-26T09:45:12.000Z | 2022-03-11T23:56:08.000Z | networks/resnet50.py | Times125/break_captcha | 9ab16d1d7737630786039d07caec308bdb8a26d9 | [
"MIT"
] | 6 | 2019-08-16T14:46:04.000Z | 2021-06-05T01:53:20.000Z | #! /usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Author: _defined
@Time: 2019/8/21 14:20
@Description:
This script defines some ResNet50 structures
"""
from tensorflow.python.keras import (backend, layers)
def _identity_block(input_tensor, kernel_size, filters, stage, block):
"""The identity block is the block that has no conv layer at shortcut.
# Arguments
input_tensor: input tensor
kernel_size: default 3, the kernel size of
middle conv layer of the main path
filters: list of integers, the filters of the 3 conv layers in the main path
stage: integer, current stage label, used for generating layer names
block: 'a','b'..., current block label, used for generating layer names
# Returns
Output tensor for the block.
"""
filters1, filters2, filters3 = filters
if backend.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
x = layers.Conv2D(filters1, (1, 1),
kernel_initializer='he_normal',
name=conv_name_base + '2a')(input_tensor)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters2, kernel_size,
padding='same',
kernel_initializer='he_normal',
name=conv_name_base + '2b')(x)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters3, (1, 1),
kernel_initializer='he_normal',
name=conv_name_base + '2c')(x)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)
x = layers.add([x, input_tensor])
x = layers.Activation('relu')(x)
return x
def _conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
"""A block that has a conv layer at shortcut.
# Arguments
input_tensor: input tensor
kernel_size: default 3, the kernel size of
middle conv layer of the main path
filters: list of integers, the filters of the 3 conv layers in the main path
stage: integer, current stage label, used for generating layer names
block: 'a','b'..., current block label, used for generating layer names
strides: Strides for the first conv layer in the block.
# Returns
Output tensor for the block.
Note that from stage 3,
the first conv layer of the main path has strides=(2, 2)
And the shortcut should have strides=(2, 2) as well
"""
filters1, filters2, filters3 = filters
if backend.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
x = layers.Conv2D(filters1, (1, 1), strides=strides,
kernel_initializer='he_normal',
name=conv_name_base + '2a')(input_tensor)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters2, kernel_size, padding='same',
kernel_initializer='he_normal',
name=conv_name_base + '2b')(x)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters3, (1, 1),
kernel_initializer='he_normal',
name=conv_name_base + '2c')(x)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)
shortcut = layers.Conv2D(filters3, (1, 1), strides=strides,
kernel_initializer='he_normal',
name=conv_name_base + '1')(input_tensor)
shortcut = layers.BatchNormalization(axis=bn_axis,
name=bn_name_base + '1')(shortcut)
x = layers.add([x, shortcut])
x = layers.Activation('relu')(x)
return x
def ResNet50(input_tensor):
"""
ResNet50
:param input_tensor:
:return:
"""
img_input = input_tensor
if backend.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
x = layers.ZeroPadding2D(padding=(3, 3), name='conv1_pad')(img_input)
x = layers.Conv2D(64, (7, 7), strides=(2, 2),
kernel_initializer='he_normal', name='conv1')(x)
x = layers.BatchNormalization(axis=bn_axis, name='bn_conv1')(x)
x = layers.Activation('relu')(x)
x = layers.ZeroPadding2D(padding=(1, 1), name='pool1_pad')(x)
x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
x = _conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
x = _identity_block(x, 3, [64, 64, 256], stage=2, block='b')
x = _identity_block(x, 3, [64, 64, 256], stage=2, block='c')
x = _conv_block(x, 3, [128, 128, 512], stage=3, block='a')
x = _identity_block(x, 3, [128, 128, 512], stage=3, block='b')
x = _identity_block(x, 3, [128, 128, 512], stage=3, block='c')
x = _identity_block(x, 3, [128, 128, 512], stage=3, block='d')
x = _conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
x = _identity_block(x, 3, [256, 256, 1024], stage=4, block='b')
x = _identity_block(x, 3, [256, 256, 1024], stage=4, block='c')
x = _identity_block(x, 3, [256, 256, 1024], stage=4, block='d')
x = _identity_block(x, 3, [256, 256, 1024], stage=4, block='e')
x = _identity_block(x, 3, [256, 256, 1024], stage=4, block='f')
x = _conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
x = _identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
x = _identity_block(x, 3, [512, 512, 2048], stage=5, block='c')
return x
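# Usage sketch (assumption: 224x224 RGB inputs; not part of the original
# file). The backbone returns a (None, 7, 7, 2048) feature map:
if __name__ == "__main__":
    inputs = layers.Input(shape=(224, 224, 3))
    features = ResNet50(inputs)
    print(features.shape)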
| 39.388158 | 82 | 0.604309 | 839 | 5,987 | 4.141836 | 0.153754 | 0.052374 | 0.039137 | 0.051799 | 0.810935 | 0.791079 | 0.76518 | 0.76259 | 0.710791 | 0.653813 | 0 | 0.063414 | 0.257224 | 5,987 | 151 | 83 | 39.649007 | 0.718012 | 0.218306 | 0 | 0.563218 | 0 | 0 | 0.056828 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.011494 | 0 | 0.08046 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31c7f0fc1dff349aaa8fbf1d99639943f1e769b6 | 35 | py | Python | sketch/sketch/__init__.py | mmathys/bagua | e17978690452318b65b317b283259f09c24d59bb | [
"MIT"
] | null | null | null | sketch/sketch/__init__.py | mmathys/bagua | e17978690452318b65b317b283259f09c24d59bb | [
"MIT"
] | null | null | null | sketch/sketch/__init__.py | mmathys/bagua | e17978690452318b65b317b283259f09c24d59bb | [
"MIT"
] | null | null | null | from .sketch import SketchAlgorithm | 35 | 35 | 0.885714 | 4 | 35 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9ec4fd463c55391a02817e7ec7d743c6653ed8e0 | 118 | py | Python | dyntrack/plot/__init__.py | LouisFaure/dyntrack | 1af9d9a1851900a8ae62f54e44b20e04df618f29 | [
"BSD-3-Clause"
] | null | null | null | dyntrack/plot/__init__.py | LouisFaure/dyntrack | 1af9d9a1851900a8ae62f54e44b20e04df618f29 | [
"BSD-3-Clause"
] | null | null | null | dyntrack/plot/__init__.py | LouisFaure/dyntrack | 1af9d9a1851900a8ae62f54e44b20e04df618f29 | [
"BSD-3-Clause"
] | null | null | null | from .tracks import tracks
from .vector_field import vector_field
from .ftle import FTLE
from .fit_ppt import fit_ppt
| 23.6 | 38 | 0.830508 | 20 | 118 | 4.7 | 0.4 | 0.234043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135593 | 118 | 4 | 39 | 29.5 | 0.921569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9edae35377e2fa4c5edd6f49e00d8791402b29df | 1,881 | py | Python | ckanext/canada/tests/test_ati.py | HoussamBedja/ckanext-canada | 9099223beb088c65262cab403be10774e29e06b8 | [
"MIT"
] | 31 | 2015-04-19T16:14:55.000Z | 2021-08-20T13:18:44.000Z | ckanext/canada/tests/test_ati.py | HoussamBedja/ckanext-canada | 9099223beb088c65262cab403be10774e29e06b8 | [
"MIT"
] | 214 | 2015-01-20T20:43:26.000Z | 2022-03-29T20:36:01.000Z | ckanext/canada/tests/test_ati.py | HoussamBedja/ckanext-canada | 9099223beb088c65262cab403be10774e29e06b8 | [
"MIT"
] | 46 | 2015-02-18T17:11:06.000Z | 2022-01-17T17:05:09.000Z | # -*- coding: UTF-8 -*-
from nose.tools import assert_equal, assert_raises
from ckanapi import LocalCKAN, ValidationError
from ckan.tests.helpers import FunctionalTestBase
from ckan.tests.factories import Organization
from ckanext.recombinant.tables import get_chromo
class TestAti(FunctionalTestBase):
def setup(self):
super(TestAti, self).setup()
org = Organization()
lc = LocalCKAN()
lc.action.recombinant_create(dataset_type='ati', owner_org=org['name'])
rval = lc.action.recombinant_show(dataset_type='ati', owner_org=org['name'])
self.resource_id = rval['resources'][0]['id']
def test_example(self):
lc = LocalCKAN()
record = get_chromo('ati')['examples']['record']
lc.action.datastore_upsert(
resource_id=self.resource_id,
records=[record])
def test_blank(self):
lc = LocalCKAN()
assert_raises(ValidationError,
lc.action.datastore_upsert,
resource_id=self.resource_id,
records=[{}])
class TestAtiNil(FunctionalTestBase):
def setup(self):
super(TestAtiNil, self).setup()
org = Organization()
lc = LocalCKAN()
lc.action.recombinant_create(dataset_type='ati', owner_org=org['name'])
rval = lc.action.recombinant_show(dataset_type='ati', owner_org=org['name'])
self.resource_id = rval['resources'][1]['id']
def test_example(self):
lc = LocalCKAN()
record = get_chromo('ati-nil')['examples']['record']
lc.action.datastore_upsert(
resource_id=self.resource_id,
records=[record])
def test_blank(self):
lc = LocalCKAN()
assert_raises(ValidationError,
lc.action.datastore_upsert,
resource_id=self.resource_id,
records=[{}])
| 33.589286 | 84 | 0.628389 | 208 | 1,881 | 5.504808 | 0.269231 | 0.087336 | 0.073362 | 0.066376 | 0.765066 | 0.70393 | 0.70393 | 0.70393 | 0.70393 | 0.70393 | 0 | 0.002123 | 0.248804 | 1,881 | 55 | 85 | 34.2 | 0.808209 | 0.011164 | 0 | 0.711111 | 0 | 0 | 0.047363 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.133333 | false | 0 | 0.111111 | 0 | 0.288889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9eeb47506cef88ee1225580c675e375cc7ce6674 | 2,039 | py | Python | contacts/swagger_params.py | sauravpanda/Django-CRM | c6b8cde02c9cf3d3f30f4e05b825f77d00734e87 | [
"MIT"
] | null | null | null | contacts/swagger_params.py | sauravpanda/Django-CRM | c6b8cde02c9cf3d3f30f4e05b825f77d00734e87 | [
"MIT"
] | null | null | null | contacts/swagger_params.py | sauravpanda/Django-CRM | c6b8cde02c9cf3d3f30f4e05b825f77d00734e87 | [
"MIT"
] | null | null | null | from drf_yasg import openapi
company_params_in_header = openapi.Parameter(
"company", openapi.IN_HEADER, required=True, type=openapi.TYPE_STRING
)
contact_list_get_params = [
company_params_in_header,
openapi.Parameter(
"name", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"city", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"assigned_to", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
]
contact_detail_get_params = [company_params_in_header]
contact_delete_get_params = [company_params_in_header]
contact_create_post_params = [
company_params_in_header,
openapi.Parameter(
"first_name", openapi.IN_QUERY, required=True, type=openapi.TYPE_STRING
),
openapi.Parameter(
"last_name", openapi.IN_QUERY, required=True, type=openapi.TYPE_STRING
),
openapi.Parameter(
"phone", openapi.IN_QUERY, required=True, type=openapi.TYPE_STRING
),
openapi.Parameter(
"email", openapi.IN_QUERY, required=True, type=openapi.TYPE_STRING
),
openapi.Parameter(
"teams", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"assigned_to", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"address_line", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"street", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"city", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"state", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"postcode", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"country", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"description", openapi.IN_QUERY, type=openapi.TYPE_STRING
),
openapi.Parameter(
"contact_attachment", openapi.IN_QUERY, type=openapi.TYPE_FILE
),
]
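# Usage sketch (assumption: these parameter lists are meant for drf_yasg's
# swagger_auto_schema decorator on DRF view methods; the stub below is purely
# illustrative):
from drf_yasg.utils import swagger_auto_schema

@swagger_auto_schema(manual_parameters=contact_list_get_params)
def _contact_list_doc_example(request):
    """Illustrative stub showing how the parameter lists are attached."""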
| 29.128571 | 79 | 0.686121 | 237 | 2,039 | 5.607595 | 0.168776 | 0.216704 | 0.20316 | 0.268623 | 0.872837 | 0.862302 | 0.785553 | 0.665162 | 0.665162 | 0.461249 | 0 | 0 | 0.205002 | 2,039 | 69 | 80 | 29.550725 | 0.819864 | 0 | 0 | 0.634921 | 0 | 0 | 0.069642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.015873 | 0 | 0.015873 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7318ce959245f1bda9db8492c73d803c32479af5 | 32 | py | Python | nwb_conversion_tools/behavior/__init__.py | wuffi/nwb-conversion-tools | 39cfb95b714155b26a17fdda9ed7d801eefd14ea | [
"BSD-3-Clause"
] | null | null | null | nwb_conversion_tools/behavior/__init__.py | wuffi/nwb-conversion-tools | 39cfb95b714155b26a17fdda9ed7d801eefd14ea | [
"BSD-3-Clause"
] | null | null | null | nwb_conversion_tools/behavior/__init__.py | wuffi/nwb-conversion-tools | 39cfb95b714155b26a17fdda9ed7d801eefd14ea | [
"BSD-3-Clause"
] | null | null | null | from .bpod.bpod import Bpod2NWB
| 16 | 31 | 0.8125 | 5 | 32 | 5.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.125 | 32 | 1 | 32 | 32 | 0.892857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
731952eb4c9e4b7b003bb7dbe20a19de2236b5a8 | 10,515 | py | Python | huaweicloud-sdk-swr/huaweicloudsdkswr/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 64 | 2020-06-12T07:05:07.000Z | 2022-03-30T03:32:50.000Z | huaweicloud-sdk-swr/huaweicloudsdkswr/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 11 | 2020-07-06T07:56:54.000Z | 2022-01-11T11:14:40.000Z | huaweicloud-sdk-swr/huaweicloudsdkswr/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 24 | 2020-06-08T11:42:13.000Z | 2022-03-04T06:44:08.000Z | # coding: utf-8
from __future__ import absolute_import
# import SwrClient
from huaweicloudsdkswr.v2.swr_client import SwrClient
from huaweicloudsdkswr.v2.swr_async_client import SwrAsyncClient
# import models into sdk package
from huaweicloudsdkswr.v2.model.auth_info import AuthInfo
from huaweicloudsdkswr.v2.model.create_image_sync_repo_request import CreateImageSyncRepoRequest
from huaweicloudsdkswr.v2.model.create_image_sync_repo_request_body import CreateImageSyncRepoRequestBody
from huaweicloudsdkswr.v2.model.create_image_sync_repo_response import CreateImageSyncRepoResponse
from huaweicloudsdkswr.v2.model.create_manual_image_sync_repo_request import CreateManualImageSyncRepoRequest
from huaweicloudsdkswr.v2.model.create_manual_image_sync_repo_request_body import CreateManualImageSyncRepoRequestBody
from huaweicloudsdkswr.v2.model.create_manual_image_sync_repo_response import CreateManualImageSyncRepoResponse
from huaweicloudsdkswr.v2.model.create_namespace_auth_request import CreateNamespaceAuthRequest
from huaweicloudsdkswr.v2.model.create_namespace_auth_response import CreateNamespaceAuthResponse
from huaweicloudsdkswr.v2.model.create_namespace_request import CreateNamespaceRequest
from huaweicloudsdkswr.v2.model.create_namespace_request_body import CreateNamespaceRequestBody
from huaweicloudsdkswr.v2.model.create_namespace_response import CreateNamespaceResponse
from huaweicloudsdkswr.v2.model.create_repo_domains_request import CreateRepoDomainsRequest
from huaweicloudsdkswr.v2.model.create_repo_domains_request_body import CreateRepoDomainsRequestBody
from huaweicloudsdkswr.v2.model.create_repo_domains_response import CreateRepoDomainsResponse
from huaweicloudsdkswr.v2.model.create_repo_request import CreateRepoRequest
from huaweicloudsdkswr.v2.model.create_repo_request_body import CreateRepoRequestBody
from huaweicloudsdkswr.v2.model.create_repo_response import CreateRepoResponse
from huaweicloudsdkswr.v2.model.create_retention_request import CreateRetentionRequest
from huaweicloudsdkswr.v2.model.create_retention_request_body import CreateRetentionRequestBody
from huaweicloudsdkswr.v2.model.create_retention_response import CreateRetentionResponse
from huaweicloudsdkswr.v2.model.create_secret_request import CreateSecretRequest
from huaweicloudsdkswr.v2.model.create_secret_response import CreateSecretResponse
from huaweicloudsdkswr.v2.model.create_trigger_request import CreateTriggerRequest
from huaweicloudsdkswr.v2.model.create_trigger_request_body import CreateTriggerRequestBody
from huaweicloudsdkswr.v2.model.create_trigger_response import CreateTriggerResponse
from huaweicloudsdkswr.v2.model.create_user_repository_auth_request import CreateUserRepositoryAuthRequest
from huaweicloudsdkswr.v2.model.create_user_repository_auth_response import CreateUserRepositoryAuthResponse
from huaweicloudsdkswr.v2.model.delete_image_sync_repo_request import DeleteImageSyncRepoRequest
from huaweicloudsdkswr.v2.model.delete_image_sync_repo_request_body import DeleteImageSyncRepoRequestBody
from huaweicloudsdkswr.v2.model.delete_image_sync_repo_response import DeleteImageSyncRepoResponse
from huaweicloudsdkswr.v2.model.delete_namespace_auth_request import DeleteNamespaceAuthRequest
from huaweicloudsdkswr.v2.model.delete_namespace_auth_response import DeleteNamespaceAuthResponse
from huaweicloudsdkswr.v2.model.delete_namespaces_request import DeleteNamespacesRequest
from huaweicloudsdkswr.v2.model.delete_namespaces_response import DeleteNamespacesResponse
from huaweicloudsdkswr.v2.model.delete_repo_domains_request import DeleteRepoDomainsRequest
from huaweicloudsdkswr.v2.model.delete_repo_domains_response import DeleteRepoDomainsResponse
from huaweicloudsdkswr.v2.model.delete_repo_request import DeleteRepoRequest
from huaweicloudsdkswr.v2.model.delete_repo_response import DeleteRepoResponse
from huaweicloudsdkswr.v2.model.delete_repo_tag_request import DeleteRepoTagRequest
from huaweicloudsdkswr.v2.model.delete_repo_tag_response import DeleteRepoTagResponse
from huaweicloudsdkswr.v2.model.delete_retention_request import DeleteRetentionRequest
from huaweicloudsdkswr.v2.model.delete_retention_response import DeleteRetentionResponse
from huaweicloudsdkswr.v2.model.delete_trigger_request import DeleteTriggerRequest
from huaweicloudsdkswr.v2.model.delete_trigger_response import DeleteTriggerResponse
from huaweicloudsdkswr.v2.model.delete_user_repository_auth_request import DeleteUserRepositoryAuthRequest
from huaweicloudsdkswr.v2.model.delete_user_repository_auth_response import DeleteUserRepositoryAuthResponse
from huaweicloudsdkswr.v2.model.link import Link
from huaweicloudsdkswr.v2.model.list_api_versions_request import ListApiVersionsRequest
from huaweicloudsdkswr.v2.model.list_api_versions_response import ListApiVersionsResponse
from huaweicloudsdkswr.v2.model.list_image_auto_sync_repos_details_request import ListImageAutoSyncReposDetailsRequest
from huaweicloudsdkswr.v2.model.list_image_auto_sync_repos_details_response import ListImageAutoSyncReposDetailsResponse
from huaweicloudsdkswr.v2.model.list_namespaces_request import ListNamespacesRequest
from huaweicloudsdkswr.v2.model.list_namespaces_response import ListNamespacesResponse
from huaweicloudsdkswr.v2.model.list_repo_domains_request import ListRepoDomainsRequest
from huaweicloudsdkswr.v2.model.list_repo_domains_response import ListRepoDomainsResponse
from huaweicloudsdkswr.v2.model.list_repos_details_request import ListReposDetailsRequest
from huaweicloudsdkswr.v2.model.list_repos_details_response import ListReposDetailsResponse
from huaweicloudsdkswr.v2.model.list_repository_tags_request import ListRepositoryTagsRequest
from huaweicloudsdkswr.v2.model.list_repository_tags_response import ListRepositoryTagsResponse
from huaweicloudsdkswr.v2.model.list_retention_histories_request import ListRetentionHistoriesRequest
from huaweicloudsdkswr.v2.model.list_retention_histories_response import ListRetentionHistoriesResponse
from huaweicloudsdkswr.v2.model.list_retentions_request import ListRetentionsRequest
from huaweicloudsdkswr.v2.model.list_retentions_response import ListRetentionsResponse
from huaweicloudsdkswr.v2.model.list_shared_repos_details_request import ListSharedReposDetailsRequest
from huaweicloudsdkswr.v2.model.list_shared_repos_details_response import ListSharedReposDetailsResponse
from huaweicloudsdkswr.v2.model.list_triggers_details_request import ListTriggersDetailsRequest
from huaweicloudsdkswr.v2.model.list_triggers_details_response import ListTriggersDetailsResponse
from huaweicloudsdkswr.v2.model.retention import Retention
from huaweicloudsdkswr.v2.model.retention_log import RetentionLog
from huaweicloudsdkswr.v2.model.rule import Rule
from huaweicloudsdkswr.v2.model.show_access_domain_request import ShowAccessDomainRequest
from huaweicloudsdkswr.v2.model.show_access_domain_response import ShowAccessDomainResponse
from huaweicloudsdkswr.v2.model.show_api_version_request import ShowApiVersionRequest
from huaweicloudsdkswr.v2.model.show_api_version_response import ShowApiVersionResponse
from huaweicloudsdkswr.v2.model.show_namespace import ShowNamespace
from huaweicloudsdkswr.v2.model.show_namespace_auth_request import ShowNamespaceAuthRequest
from huaweicloudsdkswr.v2.model.show_namespace_auth_response import ShowNamespaceAuthResponse
from huaweicloudsdkswr.v2.model.show_namespace_request import ShowNamespaceRequest
from huaweicloudsdkswr.v2.model.show_namespace_response import ShowNamespaceResponse
from huaweicloudsdkswr.v2.model.show_repo_domains_response import ShowRepoDomainsResponse
from huaweicloudsdkswr.v2.model.show_repos_resp import ShowReposResp
from huaweicloudsdkswr.v2.model.show_repos_tag_resp import ShowReposTagResp
from huaweicloudsdkswr.v2.model.show_repository_request import ShowRepositoryRequest
from huaweicloudsdkswr.v2.model.show_repository_response import ShowRepositoryResponse
from huaweicloudsdkswr.v2.model.show_retention_request import ShowRetentionRequest
from huaweicloudsdkswr.v2.model.show_retention_response import ShowRetentionResponse
from huaweicloudsdkswr.v2.model.show_sync_job_request import ShowSyncJobRequest
from huaweicloudsdkswr.v2.model.show_sync_job_response import ShowSyncJobResponse
from huaweicloudsdkswr.v2.model.show_trigger_request import ShowTriggerRequest
from huaweicloudsdkswr.v2.model.show_trigger_response import ShowTriggerResponse
from huaweicloudsdkswr.v2.model.show_user_repository_auth_request import ShowUserRepositoryAuthRequest
from huaweicloudsdkswr.v2.model.show_user_repository_auth_response import ShowUserRepositoryAuthResponse
from huaweicloudsdkswr.v2.model.sync_job import SyncJob
from huaweicloudsdkswr.v2.model.sync_repo import SyncRepo
from huaweicloudsdkswr.v2.model.tag_selector import TagSelector
from huaweicloudsdkswr.v2.model.trigger import Trigger
from huaweicloudsdkswr.v2.model.trigger_histories import TriggerHistories
from huaweicloudsdkswr.v2.model.update_namespace_auth_request import UpdateNamespaceAuthRequest
from huaweicloudsdkswr.v2.model.update_namespace_auth_response import UpdateNamespaceAuthResponse
from huaweicloudsdkswr.v2.model.update_repo_domains_request import UpdateRepoDomainsRequest
from huaweicloudsdkswr.v2.model.update_repo_domains_request_body import UpdateRepoDomainsRequestBody
from huaweicloudsdkswr.v2.model.update_repo_domains_response import UpdateRepoDomainsResponse
from huaweicloudsdkswr.v2.model.update_repo_request import UpdateRepoRequest
from huaweicloudsdkswr.v2.model.update_repo_request_body import UpdateRepoRequestBody
from huaweicloudsdkswr.v2.model.update_repo_response import UpdateRepoResponse
from huaweicloudsdkswr.v2.model.update_retention_request import UpdateRetentionRequest
from huaweicloudsdkswr.v2.model.update_retention_request_body import UpdateRetentionRequestBody
from huaweicloudsdkswr.v2.model.update_retention_response import UpdateRetentionResponse
from huaweicloudsdkswr.v2.model.update_trigger_request import UpdateTriggerRequest
from huaweicloudsdkswr.v2.model.update_trigger_request_body import UpdateTriggerRequestBody
from huaweicloudsdkswr.v2.model.update_trigger_response import UpdateTriggerResponse
from huaweicloudsdkswr.v2.model.update_user_repository_auth_request import UpdateUserRepositoryAuthRequest
from huaweicloudsdkswr.v2.model.update_user_repository_auth_response import UpdateUserRepositoryAuthResponse
from huaweicloudsdkswr.v2.model.user_auth import UserAuth
from huaweicloudsdkswr.v2.model.version_detail import VersionDetail
| 83.452381 | 120 | 0.919353 | 1,156 | 10,515 | 8.08391 | 0.150519 | 0.265169 | 0.290423 | 0.347566 | 0.563617 | 0.517817 | 0.365864 | 0.140931 | 0.053826 | 0.02504 | 0 | 0.011872 | 0.046695 | 10,515 | 125 | 121 | 84.12 | 0.920391 | 0.005801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73245a03b60bd8e4cd7d07da6f291701e6b06381 | 286 | py | Python | roomexplorer/__init__.py | alaflaquiere/room-explorer | 0fb1e4f9a65f18a4f7a20d3c22bf5f584ab7ed90 | [
"MIT"
] | null | null | null | roomexplorer/__init__.py | alaflaquiere/room-explorer | 0fb1e4f9a65f18a4f7a20d3c22bf5f584ab7ed90 | [
"MIT"
] | null | null | null | roomexplorer/__init__.py | alaflaquiere/room-explorer | 0fb1e4f9a65f18a4f7a20d3c22bf5f584ab7ed90 | [
"MIT"
] | null | null | null | from roomexplorer.renderer.bullet import bullet_tools
from roomexplorer.renderer.bullet.camera import Camera, CameraResolution
from roomexplorer.Agent import MobileArm
from roomexplorer.RoomEnvironment import create_environment
from roomexplorer.dataset_builder import generate_dataset
| 47.666667 | 72 | 0.895105 | 33 | 286 | 7.636364 | 0.484848 | 0.31746 | 0.190476 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073427 | 286 | 5 | 73 | 57.2 | 0.950943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7326f746ad5cab74373a3c7f78d1aea02408c8e9 | 2,189 | py | Python | MNIST-master/tf_test_congress.py | FisherEat/python_mashine_projects | 9cf2e644e7f0814fe048488278c994e5fd40d629 | [
"MIT"
] | null | null | null | MNIST-master/tf_test_congress.py | FisherEat/python_mashine_projects | 9cf2e644e7f0814fe048488278c994e5fd40d629 | [
"MIT"
] | null | null | null | MNIST-master/tf_test_congress.py | FisherEat/python_mashine_projects | 9cf2e644e7f0814fe048488278c994e5fd40d629 | [
"MIT"
] | 1 | 2019-09-18T02:30:11.000Z | 2019-09-18T02:30:11.000Z |
'''
This example builds a TensorFlow regression model and fits the regression
line with TensorFlow's built-in gradient-descent optimizer.
'''
# import tensorflow as tf
# import numpy as np
# import matplotlib.pyplot as plt
#
# # Prepare train data
# train_X = np.linspace(-1, 1, 100)
# train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10
#
# # Define the model
# X = tf.placeholder("float")
# Y = tf.placeholder("float")
# w = tf.Variable(0.0, name="weight")
# b = tf.Variable(0.0, name="bias")
# loss = tf.square(Y - X*w - b)
# train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
#
# # Create session to run
# with tf.Session() as sess:
# sess.run(tf.initialize_all_variables())
# epoch = 1
# for i in range(10):
# for(x, y) in zip(train_X, train_Y):
# _, w_value, b_value = sess.run([train_op, w, b], feed_dict={X: x, Y: y})
# print("Epoch: {}, w: {}, b: {}".format(epoch, w_value, b_value))
# epoch += 1
#
#
# # draw
# plt.plot(train_X, train_Y, "+")
# plt.plot(train_X, train_Y.dot(w_value) + b_value)
# plt.show()
# Active version of the example (the commented block above is an earlier draft):
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# Prepare train data
train_X = np.linspace(-1, 1, 100)
# np.array([0.23, 0.34, 0.67, 0.89, 0.90, 0.97])
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10
# np.array([0.25, 0.33, 0.65, 0.80, 0.89, 0.99])
# Define the model
X = tf.placeholder("float")
Y = tf.placeholder("float")
w = tf.Variable(0.0, name="weight")
b = tf.Variable(0.0, name="bias")
loss = tf.square(Y - X*w - b)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
# Create session to run
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
epoch = 1
for i in range(100):
for (x, y) in zip(train_X, train_Y):
_, w_value, b_value = sess.run([train_op, w, b], feed_dict={X: x, Y: y})
print("Epoch: op: {}, {}, w: {}, b: {}".format(epoch, _, w_value, b_value))
epoch += 1
# draw
plt.plot(train_X, train_Y, "+")
plt.plot(train_X, train_X.dot(w_value)+b_value)
plt.show() | 27.3625 | 86 | 0.620831 | 366 | 2,189 | 3.584699 | 0.215847 | 0.068598 | 0.050305 | 0.054878 | 0.955793 | 0.955793 | 0.95503 | 0.921494 | 0.921494 | 0.921494 | 0 | 0.052273 | 0.19598 | 2,189 | 80 | 87 | 27.3625 | 0.693182 | 0.540429 | 0 | 0 | 0 | 0 | 0.057906 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.136364 | 0 | 0.136364 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
733ba9569b401c728fac0d23c1f928d491d9e98d | 34 | py | Python | hello_world.py | eddycalgary/profiles-rest-api2 | d7416a36a274853550aea3a950893d86e3020781 | [
"MIT"
] | null | null | null | hello_world.py | eddycalgary/profiles-rest-api2 | d7416a36a274853550aea3a950893d86e3020781 | [
"MIT"
] | 7 | 2019-12-05T00:38:30.000Z | 2022-02-10T10:33:27.000Z | hello_world.py | eddycalgary/profiles-rest-api2 | d7416a36a274853550aea3a950893d86e3020781 | [
"MIT"
] | null | null | null | print("Hello Edgar how are you??") | 34 | 34 | 0.705882 | 6 | 34 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
736168ff4711b8b7b91358bc866093d5d0d8069b | 79 | py | Python | modules/2.79/bpy/types/LocRotScale.py | cmbasnett/fake-bpy-module | acb8b0f102751a9563e5b5e5c7cd69a4e8aa2a55 | [
"MIT"
] | null | null | null | modules/2.79/bpy/types/LocRotScale.py | cmbasnett/fake-bpy-module | acb8b0f102751a9563e5b5e5c7cd69a4e8aa2a55 | [
"MIT"
] | null | null | null | modules/2.79/bpy/types/LocRotScale.py | cmbasnett/fake-bpy-module | acb8b0f102751a9563e5b5e5c7cd69a4e8aa2a55 | [
"MIT"
] | null | null | null | class LocRotScale:
def generate(self, context, ks, data):
pass
| 9.875 | 42 | 0.607595 | 9 | 79 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.303797 | 79 | 7 | 43 | 11.285714 | 0.872727 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
7363b1bbf50890b8c03707f0764f65035ae45dfe | 16,755 | py | Python | src/python/turicreate/visualization/show.py | crouchingtigre/turicreate | 38c18e019be5b0c202790bedd6113d13d5d59796 | [
"BSD-3-Clause"
] | 1 | 2019-06-21T21:40:10.000Z | 2019-06-21T21:40:10.000Z | src/python/turicreate/visualization/show.py | crouchingtigre/turicreate | 38c18e019be5b0c202790bedd6113d13d5d59796 | [
"BSD-3-Clause"
] | null | null | null | src/python/turicreate/visualization/show.py | crouchingtigre/turicreate | 38c18e019be5b0c202790bedd6113d13d5d59796 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright © 2017 Apple Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-3-clause license that can
# be found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause
from __future__ import print_function as _
from __future__ import division as _
from __future__ import absolute_import as _
from ._plot import Plot, LABEL_DEFAULT
import turicreate as tc
def _get_title(title):
if title == "":
title = " "
if title is None:
title = ""
return title
def plot(x, y, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
Plots the data in `x` on the X axis and the data in `y` on the Y axis
in a 2d visualization, and shows the resulting visualization.
Uses the following heuristic to choose the visualization:
* If `x` and `y` are both numeric (SArray of int or float), and they contain
fewer than or equal to 5,000 values, show a scatter plot.
* If `x` and `y` are both numeric (SArray of int or float), and they contain
more than 5,000 values, show a heat map.
* If `x` is numeric and `y` is an SArray of string, show a box and whisker
plot for the distribution of numeric values for each categorical (string)
value.
* If `x` and `y` are both SArrays of string, show a categorical heat map.
This show method supports SArrays of dtypes: int, float, str.
Notes
-----
- The plot will be returned as a Plot object, which can then be shown,
saved, etc. and will display automatically in a Jupyter Notebook.
Parameters
----------
x : SArray
The data to plot on the X axis of a 2d visualization.
y : SArray
The data to plot on the Y axis of a 2d visualization. Must be the same
length as `x`.
xlabel : str (optional)
The text label for the X axis. Defaults to "X".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Y".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Examples
--------
Show a categorical heat map of pets and their feelings.
>>> x = turicreate.SArray(['dog', 'cat', 'dog', 'dog', 'cat'])
>>> y = turicreate.SArray(['happy', 'grumpy', 'grumpy', 'happy', 'grumpy'])
>>> turicreate.show(x, y)
Show a scatter plot of the function y = 2x, for x from 0 through 9, labeling
the axes and plot title with custom strings.
>>> x = turicreate.SArray(range(10))
>>> y = x * 2
>>> turicreate.show(x, y,
... xlabel="Custom X label",
... ylabel="Custom Y label",
... title="Custom title")
"""
title = _get_title(title)
plt_ref = tc.extensions.plot(x, y, xlabel, ylabel, title)
return Plot(plt_ref)
def show(x, y, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
Plots the data in `x` on the X axis and the data in `y` on the Y axis
in a 2d visualization, and shows the resulting visualization.
Uses the following heuristic to choose the visualization:
* If `x` and `y` are both numeric (SArray of int or float), and they contain
fewer than or equal to 5,000 values, show a scatter plot.
* If `x` and `y` are both numeric (SArray of int or float), and they contain
more than 5,000 values, show a heat map.
* If `x` is numeric and `y` is an SArray of string, show a box and whisker
plot for the distribution of numeric values for each categorical (string)
value.
* If `x` and `y` are both SArrays of string, show a categorical heat map.
This show method supports SArrays of dtypes: int, float, str.
Notes
-----
- The plot will render either inline in a Jupyter Notebook, or in a
native GUI window, depending on the value provided in
`turicreate.visualization.set_target` (defaults to 'auto').
Parameters
----------
x : SArray
The data to plot on the X axis of a 2d visualization.
y : SArray
The data to plot on the Y axis of a 2d visualization. Must be the same
length as `x`.
xlabel : str (optional)
The text label for the X axis. Defaults to "X".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Y".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Examples
--------
Show a categorical heat map of pets and their feelings.
>>> x = turicreate.SArray(['dog', 'cat', 'dog', 'dog', 'cat'])
>>> y = turicreate.SArray(['happy', 'grumpy', 'grumpy', 'happy', 'grumpy'])
>>> turicreate.show(x, y)
Show a scatter plot of the function y = 2x, for x from 0 through 9, labeling
the axes and plot title with custom strings.
>>> x = turicreate.SArray(range(10))
>>> y = x * 2
>>> turicreate.show(x, y,
... xlabel="Custom X label",
... ylabel="Custom Y label",
... title="Custom title")
"""
plot(x, y, xlabel, ylabel, title).show()
def scatter(x, y, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
Plots the data in `x` on the X axis and the data in `y` on the Y axis
in a 2d scatter plot, and returns the resulting Plot object.
The function supports SArrays of dtypes: int, float.
Parameters
----------
x : SArray
The data to plot on the X axis of the scatter plot.
Must be numeric (int/float).
y : SArray
The data to plot on the Y axis of the scatter plot. Must be the same
length as `x`.
Must be numeric (int/float).
xlabel : str (optional)
The text label for the X axis. Defaults to "X".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Y".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Returns
-------
out : Plot
        A :class:`Plot` object that is the scatter plot.
Examples
--------
Make a scatter plot.
>>> x = turicreate.SArray([1,2,3,4,5])
>>> y = x * 2
>>> scplt = turicreate.visualization.scatter(x, y)
"""
if (not isinstance(x, tc.data_structures.sarray.SArray) or
not isinstance(y, tc.data_structures.sarray.SArray) or
x.dtype not in [int, float] or y.dtype not in [int, float]):
raise ValueError("turicreate.visualization.scatter supports " +
"SArrays of dtypes: int, float")
    # Input is valid; construct the plot.
    title = _get_title(title)
    plt_ref = tc.extensions.plot_scatter(x, y,
        xlabel, ylabel, title)
return Plot(plt_ref)
def categorical_heatmap(x, y, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
Plots the data in `x` on the X axis and the data in `y` on the Y axis
in a 2d categorical heatmap, and returns the resulting Plot object.
    The function supports SArrays of dtype str.
    Parameters
    ----------
    x : SArray
        The data to plot on the X axis of the categorical heatmap.
        Must be a string SArray.
    y : SArray
        The data to plot on the Y axis of the categorical heatmap.
        Must be a string SArray and must be the same length as `x`.
xlabel : str (optional)
The text label for the X axis. Defaults to "X".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Y".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Returns
-------
out : Plot
        A :class:`Plot` object that is the categorical heatmap.
Examples
--------
Make a categorical heatmap.
>>> x = turicreate.SArray(['1','2','3','4','5'])
>>> y = turicreate.SArray(['a','b','c','d','e'])
>>> catheat = turicreate.visualization.categorical_heatmap(x, y)
"""
if (not isinstance(x, tc.data_structures.sarray.SArray) or
not isinstance(y, tc.data_structures.sarray.SArray) or
x.dtype != str or y.dtype != str):
raise ValueError("turicreate.visualization.categorical_heatmap supports " +
"SArrays of dtype: str")
    # Input is valid; construct the plot.
title = _get_title(title)
plt_ref = tc.extensions.plot_categorical_heatmap(x, y,
xlabel, ylabel, title)
return Plot(plt_ref)
def heatmap(x, y, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
Plots the data in `x` on the X axis and the data in `y` on the Y axis
in a 2d heatmap, and returns the resulting Plot object.
The function supports SArrays of dtypes int, float.
Parameters
----------
x : SArray
The data to plot on the X axis of the heatmap.
Must be numeric (int/float).
y : SArray
The data to plot on the Y axis of the heatmap.
Must be numeric (int/float) and must be the same length as `x`.
xlabel : str (optional)
The text label for the X axis. Defaults to "X".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Y".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Returns
-------
out : Plot
        A :class:`Plot` object that is the heatmap.
Examples
--------
Make a heatmap.
>>> x = turicreate.SArray([1,2,3,4,5])
>>> y = x * 2
>>> heat = turicreate.visualization.heatmap(x, y)
"""
if (not isinstance(x, tc.data_structures.sarray.SArray) or
not isinstance(y, tc.data_structures.sarray.SArray) or
x.dtype not in [int, float] or y.dtype not in [int, float]):
raise ValueError("turicreate.visualization.heatmap supports " +
"SArrays of dtype: int, float")
title = _get_title(title)
plt_ref = tc.extensions.plot_heatmap(x, y,
xlabel, ylabel, title)
return Plot(plt_ref)
def box_plot(x, y, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
Plots the data in `x` on the X axis and the data in `y` on the Y axis
in a 2d box and whiskers plot, and returns the resulting Plot object.
    The function supports x as an SArray of dtype str and y as an SArray of
    dtype: int, float.
Parameters
----------
x : SArray
The data to plot on the X axis of the box and whiskers plot.
Must be an SArray with dtype string.
y : SArray
The data to plot on the Y axis of the box and whiskers plot.
Must be numeric (int/float) and must be the same length as `x`.
xlabel : str (optional)
The text label for the X axis. Defaults to "X".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Y".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Returns
-------
out : Plot
        A :class:`Plot` object that is the box and whiskers plot.
Examples
--------
Make a box and whiskers plot.
>>> bp = turicreate.visualization.box_plot(tc.SArray(['a','b','c','a','a']),tc.SArray([4.0,3.25,2.1,2.0,1.0]))
"""
if (not isinstance(x, tc.data_structures.sarray.SArray) or
not isinstance(y, tc.data_structures.sarray.SArray) or
x.dtype != str or y.dtype not in [int, float]):
raise ValueError("turicreate.visualization.box_plot supports " +
"x as SArray of dtype str and y as SArray of dtype: int, float." +
"\nExample: turicreate.visualization.box_plot(tc.SArray(['a','b','c','a','a']),tc.SArray([4.0,3.25,2.1,2.0,1.0]))")
title = _get_title(title)
plt_ref = tc.extensions.plot_boxes_and_whiskers(x, y,
xlabel, ylabel, title)
return Plot(plt_ref)
def columnwise_summary(sf):
"""
    Plots a columnwise summary of the SFrame provided as input,
and returns the resulting Plot object.
The function supports SFrames.
Parameters
----------
sf : SFrame
The data to get a columnwise summary for.
Returns
-------
out : Plot
        A :class:`Plot` object that is the columnwise summary plot.
Examples
--------
Make a columnwise summary of an SFrame.
>>> x = turicreate.SArray([1,2,3,4,5])
>>> s = turicreate.SArray(['a','b','c','a','a'])
>>> sf_test = turicreate.SFrame([x,x,x,x,s,s,s,x,s,x,s,s,s,x,x])
>>> colsum = turicreate.visualization.columnwise_summary(sf_test)
"""
if not isinstance(sf, tc.data_structures.sframe.SFrame):
raise ValueError("turicreate.visualization.columnwise_summary " +
"supports SFrame")
plt_ref = tc.extensions.plot_columnwise_summary(sf)
return Plot(plt_ref)
def histogram(sa, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
    Plots a histogram of the SArray provided as input, and returns the
resulting Plot object.
The function supports numeric SArrays with dtypes int or float.
Parameters
----------
sa : SArray
The data to get a histogram for. Must be numeric (int/float).
xlabel : str (optional)
The text label for the X axis. Defaults to "Values".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Count".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Returns
-------
out : Plot
        A :class:`Plot` object that is the histogram.
Examples
--------
Make a histogram of an SArray.
>>> x = turicreate.SArray([1,2,3,4,5,1,1,1,1,2,2,3,2,3,1,1,1,4])
>>> hist = turicreate.visualization.histogram(x)
"""
if (not isinstance(sa, tc.data_structures.sarray.SArray) or
sa.dtype not in [int, float]):
raise ValueError("turicreate.visualization.histogram supports " +
"SArrays of dtypes: int, float")
title = _get_title(title)
plt_ref = tc.extensions.plot_histogram(sa,
xlabel, ylabel, title)
return Plot(plt_ref)
def item_frequency(sa, xlabel=LABEL_DEFAULT, ylabel=LABEL_DEFAULT, title=LABEL_DEFAULT):
"""
    Plots an item frequency of the SArray provided as input, and returns the
resulting Plot object.
The function supports SArrays with dtype str.
Parameters
----------
sa : SArray
        The data to get an item frequency for. Must have dtype str.
xlabel : str (optional)
The text label for the X axis. Defaults to "Values".
ylabel : str (optional)
The text label for the Y axis. Defaults to "Count".
title : str (optional)
The title of the plot. Defaults to LABEL_DEFAULT. If the value is
LABEL_DEFAULT, the title will be "<xlabel> vs. <ylabel>". If the value
is None, the title will be omitted. Otherwise, the string passed in as the
title will be used as the plot title.
Returns
-------
out : Plot
        A :class:`Plot` object that is the item frequency plot.
Examples
--------
Make an item frequency of an SArray.
>>> x = turicreate.SArray(['a','ab','acd','ab','a','a','a','ab','cd'])
>>> ifplt = turicreate.visualization.item_frequency(x)
"""
if (not isinstance(sa, tc.data_structures.sarray.SArray) or
sa.dtype != str):
raise ValueError("turicreate.visualization.item_frequency supports " +
"SArrays of dtype str")
title = _get_title(title)
plt_ref = tc.extensions.plot_item_frequency(sa,
xlabel, ylabel, title)
return Plot(plt_ref)
| 36.905286 | 127 | 0.63963 | 2,532 | 16,755 | 4.181675 | 0.083728 | 0.046468 | 0.031734 | 0.031734 | 0.838591 | 0.817813 | 0.798262 | 0.78844 | 0.77739 | 0.744333 | 0 | 0.008572 | 0.254969 | 16,755 | 453 | 128 | 36.986755 | 0.839542 | 0.68111 | 0 | 0.45122 | 0 | 0.012195 | 0.154313 | 0.086999 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121951 | false | 0 | 0.060976 | 0 | 0.292683 | 0.012195 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b430bf6d438f8c8014af2a7b088121b378fb51f4 | 49 | py | Python | tests/integrations/sqlalchemy/__init__.py | annu-ps31/sentry-python | 3966b4a9744bfcb8c53dcca1b615bbadf4935aec | [
"BSD-2-Clause"
] | 1,213 | 2018-06-19T00:51:01.000Z | 2022-03-31T06:37:16.000Z | tests/integrations/sqlalchemy/__init__.py | annu-ps31/sentry-python | 3966b4a9744bfcb8c53dcca1b615bbadf4935aec | [
"BSD-2-Clause"
] | 1,020 | 2018-07-16T12:50:36.000Z | 2022-03-31T20:42:49.000Z | tests/integrations/sqlalchemy/__init__.py | annu-ps31/sentry-python | 3966b4a9744bfcb8c53dcca1b615bbadf4935aec | [
"BSD-2-Clause"
] | 340 | 2018-07-16T12:47:27.000Z | 2022-03-22T10:13:21.000Z | import pytest
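# Skip this whole test package when SQLAlchemy is not installed.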
pytest.importorskip("sqlalchemy")
| 12.25 | 33 | 0.816327 | 5 | 49 | 8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081633 | 49 | 3 | 34 | 16.333333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0.204082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b44479ce1001de386b8385a16d1ca3e56873e17c | 59 | py | Python | tests/test_resource_manager.py | nghazali/KubeCapWatch | 5b99dedcee45e6e53c2361891e399b9247acfcf1 | [
"Apache-2.0"
] | null | null | null | tests/test_resource_manager.py | nghazali/KubeCapWatch | 5b99dedcee45e6e53c2361891e399b9247acfcf1 | [
"Apache-2.0"
] | 1 | 2021-08-17T23:13:37.000Z | 2021-08-18T06:39:04.000Z | tests/test_resource_manager.py | nghazali/KubeCapWatch | 5b99dedcee45e6e53c2361891e399b9247acfcf1 | [
"Apache-2.0"
] | null | null | null | import pytest
def test_default():
    # Placeholder test; always passes.
    assert True
b47686d40380ece6f55f5dcbd7535bd518d49a22 | 123 | py | Python | particle/__init__.py | abelsonlive/particle | 0058474e97f76c80deca7bb9ddb2a1248297eb3a | [
"MIT"
] | 3 | 2015-01-05T23:13:18.000Z | 2015-08-20T16:23:06.000Z | particle/__init__.py | abelsonlive/particle | 0058474e97f76c80deca7bb9ddb2a1248297eb3a | [
"MIT"
] | null | null | null | particle/__init__.py | abelsonlive/particle | 0058474e97f76c80deca7bb9ddb2a1248297eb3a | [
"MIT"
] | null | null | null | from particle.app import Particle
from particle.common import db
from particle.web import api
from particle.cli import cli
| 24.6 | 33 | 0.837398 | 20 | 123 | 5.15 | 0.45 | 0.466019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130081 | 123 | 4 | 34 | 30.75 | 0.962617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c31aaf13651883cdc63c7c19b73882ea4498f39c | 226 | py | Python | EDX_refit_under_dev.py | tkcroat/EDX | f447dfbc0b7cde248b70b43772eb177185232960 | [
"MIT"
] | null | null | null | EDX_refit_under_dev.py | tkcroat/EDX | f447dfbc0b7cde248b70b43772eb177185232960 | [
"MIT"
] | null | null | null | EDX_refit_under_dev.py | tkcroat/EDX | f447dfbc0b7cde248b70b43772eb177185232960 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Nov 2 13:55:04 2017
@author: tkc
"""
import matplotlib.pyplot as plt
from matplotlib.widgets import LassoSelector
from matplotlib.widgets import Lasso
from matplotlib import path
| 17.384615 | 44 | 0.743363 | 33 | 226 | 5.090909 | 0.727273 | 0.25 | 0.25 | 0.321429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063158 | 0.159292 | 226 | 12 | 45 | 18.833333 | 0.821053 | 0.318584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c324b1c9272dbfdae955749ec05b6ff671031bf1 | 34,502 | py | Python | model/subsystems.py | JBHilton/covid-19-in-households-public | 42fb31a3c581a3d5b6b6959078a5f6bf4f25212e | [
"Apache-2.0"
] | 4 | 2020-04-17T13:19:43.000Z | 2021-12-02T19:56:27.000Z | model/subsystems.py | JBHilton/covid-19-in-households-public | 42fb31a3c581a3d5b6b6959078a5f6bf4f25212e | [
"Apache-2.0"
] | 6 | 2020-06-16T17:06:52.000Z | 2021-02-08T18:32:39.000Z | model/subsystems.py | JBHilton/covid-19-in-households-public | 42fb31a3c581a3d5b6b6959078a5f6bf4f25212e | [
"Apache-2.0"
] | 3 | 2020-05-12T12:09:48.000Z | 2021-06-07T09:16:09.000Z | '''This script defines all the subsystems for specific compartmental structures.
First, some functions for doing standard transition events are defined, then
these are used to create some functions for implementing common compartmental
structures. These are then stored in a dictionary, which always needs to go at
the end of this script.'''
from copy import copy, deepcopy
import pdb  # used by the debugging check in _sepirq_subsystem
from numpy import (
arange, around, array, atleast_2d, concatenate, cumprod, diag, ix_,
ones, prod, sum, where, zeros)
from numpy import int64 as my_int
from scipy.sparse import csc_matrix as sparse
def state_recursor(
states,
no_compartments,
age_class,
b_size,
n_blocks,
con_reps,
c,
x,
depth,
k):
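    # Recursively enumerate every way of distributing the c members of one
    # age class across the no_compartments compartments; the last
    # compartment always receives the remainder, and each arrangement is
    # written into the rows of `states` dictated by the block layout.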
if depth < no_compartments-1:
for x_i in arange(c + 1 - x.sum()):
x[0, depth] = x_i
x[0, depth+1:] = zeros(
(1, no_compartments-depth-1),
dtype=my_int)
states, k = state_recursor(
states,
no_compartments,
age_class,
b_size,
n_blocks,
con_reps,
c,
x,
depth+1,
k)
else:
x[0, -1] = c - sum(x[0, :depth])
for block in arange(n_blocks):
repeat_range = arange(
block * b_size
+ k * con_reps,
block * b_size +
(k + 1) * con_reps)
states[repeat_range,
no_compartments*age_class:no_compartments*(age_class+1)] = \
ones(
(con_reps, 1),
dtype=my_int) \
* array(
x,
ndmin=2, dtype=my_int)
k += 1
return states, k
return states, k
def build_states_recursively(
total_size,
no_compartments,
classes_present,
block_size,
num_blocks,
consecutive_repeats,
composition):
states = zeros(
(total_size, no_compartments*len(classes_present)),
dtype=my_int)
for age_class in range(len(classes_present)):
k = 0
states, k = state_recursor(
states,
no_compartments,
age_class,
block_size[age_class],
num_blocks[age_class],
consecutive_repeats[age_class],
composition[classes_present[age_class]],
zeros([1, no_compartments], dtype=my_int),
0,
k)
return states, k
def build_state_matrix(household_spec):
# Number of times you repeat states for each configuration
consecutive_repeats = concatenate((
ones(1, dtype=my_int), cumprod(household_spec.system_sizes[:-1])))
block_size = consecutive_repeats * household_spec.system_sizes
num_blocks = household_spec.total_size // block_size
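    # Worked example (hypothetical sizes): system_sizes = [3, 4, 2] gives
    # consecutive_repeats = [1, 3, 12], block_size = [3, 12, 24] and
    # num_blocks = [8, 2, 1] for a total_size of 24 states.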
states, k = build_states_recursively(
household_spec.total_size,
household_spec.no_compartments,
household_spec.class_indexes,
block_size,
num_blocks,
consecutive_repeats,
household_spec.composition)
# Now construct a sparse vector which tells you which row a state appears
# from in the state array
    # The following computes how many values each column of the state array
    # can take
state_sizes = concatenate([
(household_spec.composition[i] + 1)
* ones(household_spec.no_compartments, dtype=my_int)
for i in household_spec.class_indexes]).ravel()
# This vector stores the number of combinations you can get of all
# subsequent elements in the state array, i.e. reverse_prod(i) tells you
# how many arrangements you can get in states(:,i+1:end)
reverse_prod = array([0, *cumprod(state_sizes[:0:-1])])[::-1]
# We can then define index_vector look up the location of a state by
# weighting its elements using reverse_prod - this gives a unique mapping
# from the set of states to the integers. Because lots of combinations
# don't actually appear in the states array, we use a sparse array which
# will be much bigger than we actually require
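    # Worked example (hypothetical sizes): if state_sizes = [3, 2, 2] then
    # reverse_prod = [4, 2, 0], so a state [s0, s1, s2] maps to the key
    # 4*s0 + 2*s1 + s2 (the trailing element is re-added below with weight
    # one), e.g. [1, 0, 1] -> 5.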
rows = [
states[k, :].dot(reverse_prod) + states[k, -1]
for k in range(household_spec.total_size)]
if min(rows) < 0:
print(
            'Negative row indices found, proportion',
sum(array(rows) < 0),
'/',
len(rows),
'=',
sum(array(rows) < 0) / len(rows))
index_vector = sparse((
arange(household_spec.total_size),
(rows, [0]*household_spec.total_size)))
return states, reverse_prod, index_vector, rows
def inf_events(from_compartment,
to_compartment,
inf_compartment_list,
inf_scales,
r_home,
density_expo,
no_compartments,
composition,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int,
inf_event_row,
inf_event_col,
inf_event_class):
# This function adds infection events to a within-household transition
# matrix, allowing for multiple infectious classes
# Total number of compartments contributing to within-household infection:
no_inf_compartments = len(inf_compartment_list)
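    # For a class-i susceptible, the within-household infection rate is
    #   S_i * sum_j r_home[i, j] * sum_c inf_scales[c] * I_{j, c} / n_j ** density_expo,
    # where n_j = composition[j] and c runs over the infectious compartments.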
for i in range(len(class_idx)):
from_present = where(states[:, no_compartments*i+from_compartment] > 0)[0]
inf_to = zeros(len(from_present), dtype=my_int)
inf_rate = zeros(len(from_present))
for k in range(len(from_present)):
old_state = copy(states[from_present[k], :])
old_infs = zeros(len(class_idx))
for ic in range(no_inf_compartments):
old_infs += \
(old_state[inf_compartment_list[ic] +
no_compartments * arange(len(class_idx))] /
(composition[class_idx]**density_expo)) * inf_scales[ic]
            inf_rate[k] = old_state[no_compartments*i + from_compartment] * (
                r_home[i, :].dot(old_infs))
new_state = old_state.copy()
new_state[no_compartments*i + from_compartment] -= 1
new_state[no_compartments*i + to_compartment] += 1
inf_to[k] = index_vector[
new_state.dot(reverse_prod) + new_state[-1], 0]
Q_int += sparse(
(inf_rate, (from_present, inf_to)),
shape=matrix_shape,)
inf_event_row = concatenate((inf_event_row, from_present))
inf_event_col = concatenate((inf_event_col, inf_to))
inf_event_class = concatenate(
(inf_event_class, class_idx[i]*ones((len(from_present)))))
return Q_int, inf_event_row, inf_event_col, inf_event_class
''' The next function is used to set up the infection events in the SEPIRQ
model, where some members may be temporarily absent, changing the current
size of the household '''
def size_adj_inf_events(from_compartment,
to_compartment,
inf_compartment_list,
inf_scales,
r_home,
iso_adjusted_comp,
density_expo,
no_compartments,
composition,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int,
inf_event_row,
inf_event_col,
inf_event_class):
# Total number of compartments contributing to within-household infection
no_inf_compartments = len(inf_compartment_list)
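    # This mirrors inf_events, except that the density denominator uses
    # iso_adjusted_comp (household size net of isolating members) rather
    # than the raw composition.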
for i in range(len(class_idx)):
from_present = \
where(states[:, no_compartments*i+from_compartment] > 0)[0]
inf_to = zeros(len(from_present), dtype=my_int)
inf_rate = zeros(len(from_present))
for k in range(len(from_present)):
old_state = copy(states[from_present[k], :])
old_infs = zeros(len(class_idx))
for ic in range(no_inf_compartments):
old_infs += (old_state[inf_compartment_list[ic] + \
no_compartments * arange(len(class_idx))] /
(iso_adjusted_comp[k]**density_expo)) * inf_scales[ic]
            inf_rate[k] = old_state[no_compartments*i + from_compartment] * (
                r_home[i, :].dot(old_infs))
new_state = old_state.copy()
new_state[no_compartments*i + from_compartment] -= 1
new_state[no_compartments*i + to_compartment] += 1
inf_to[k] = index_vector[
new_state.dot(reverse_prod) + new_state[-1], 0]
Q_int += sparse(
(inf_rate, (from_present, inf_to)),
shape=matrix_shape,)
inf_event_row = concatenate((inf_event_row, from_present))
inf_event_col = concatenate((inf_event_col, inf_to))
inf_event_class = concatenate(
(inf_event_class, class_idx[i]*ones((len(from_present)))))
return Q_int, inf_event_row, inf_event_col, inf_event_class
def progression_events(from_compartment,
to_compartment,
pc_rate,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int):
# This function adds a single set of progression events to a
# within-household transition matrix
for i in range(len(class_idx)):
from_present = \
where(states[:, no_compartments*i+from_compartment] > 0)[0]
prog_to = zeros(len(from_present), dtype=my_int)
prog_rate = zeros(len(from_present))
for k in range(len(from_present)):
old_state = copy(states[from_present[k], :])
prog_rate[k] = \
pc_rate * old_state[no_compartments*i+from_compartment]
new_state = copy(old_state)
new_state[no_compartments*i+from_compartment] -= 1
new_state[no_compartments*i+to_compartment] += 1
prog_to[k] = index_vector[
new_state.dot(reverse_prod) + new_state[-1], 0]
Q_int += sparse(
(prog_rate, (from_present, prog_to)),
shape=matrix_shape,)
return Q_int
def stratified_progression_events(from_compartment,
to_compartment,
pc_rate_by_class,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int):
''' This function adds a single set of progression events to a
within-household transition matrix, with progression rates stratified by
class.'''
for i in range(len(class_idx)):
from_present = \
where(states[:, no_compartments*i+from_compartment] > 0)[0]
prog_to = zeros(len(from_present), dtype=my_int)
prog_rate = zeros(len(from_present))
for k in range(len(from_present)):
old_state = copy(states[from_present[k], :])
prog_rate[k] = pc_rate_by_class[i] * \
old_state[no_compartments*i+from_compartment]
new_state = copy(old_state)
new_state[no_compartments*i+from_compartment] -= 1
new_state[no_compartments*i+to_compartment] += 1
prog_to[k] = index_vector[
new_state.dot(reverse_prod) + new_state[-1], 0]
Q_int += sparse(
(prog_rate, (from_present, prog_to)),
shape=matrix_shape)
return Q_int
def isolation_events(from_compartment,
to_compartment,
iso_rate_by_class,
class_is_isolating,
iso_method,
adult_bd,
no_adults,
children_present,
adults_isolating,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int):
''' This function adds a single set of isolation events to a
within-household transition matrix, with isolation rates stratified by
class.'''
for i in range(len(class_idx)):
# The following if statement checks whether class i is meant to isolate
# and whether any of the vulnerable classes are present
if (class_is_isolating[class_idx[i],class_idx]).any():
# If isolating internally, i is a child class, or there are no
# children around, anyone can isolate
if iso_method=='int' or (i<adult_bd) or not children_present:
iso_permitted = \
where((states[:,no_compartments*i+from_compartment] > 0) * \
(states[:, no_compartments*i+to_compartment] == 0))[0]
# If children are present adults_isolating must stay below
# no_adults-1 so the children still have a guardian
else:
iso_permitted = \
where(
(states[:, no_compartments*i+from_compartment] > 0) * \
(adults_isolating<no_adults-1))[0]
iso_present = \
where(states[:, no_compartments*i+to_compartment] > 0)[0]
iso_to = zeros(len(iso_permitted), dtype=my_int)
iso_rate = zeros(len(iso_permitted))
for k in range(len(iso_permitted)):
old_state = copy(states[iso_permitted[k], :])
iso_rate[k] = iso_rate_by_class[i] * \
old_state[no_compartments*i+from_compartment]
new_state = copy(old_state)
new_state[no_compartments*i+from_compartment] -= 1
new_state[no_compartments*i+to_compartment] += 1
iso_to[k] = index_vector[
new_state.dot(reverse_prod) + new_state[-1], 0]
Q_int += sparse(
(iso_rate, (iso_permitted, iso_to)),
shape=matrix_shape)
return Q_int
def _sir_subsystem(self, household_spec):
    '''This function processes a composition to create subsystems, i.e.
    matrices and vectors describing all possible epidemiological states
    for a given household composition.
    Assumes frequency-dependent homogeneous within-household mixing;
    composition[i] is the number of individuals in age-class i inside the
    household.'''
no_compartments = household_spec.no_compartments
s_comp, i_comp, r_comp = range(no_compartments)
composition = household_spec.composition
matrix_shape = household_spec.matrix_shape
sus = self.model_input.sus
K_home = self.model_input.k_home
gamma = self.model_input.gamma
density_expo = self.model_input.density_expo
# Set of individuals actually present here
class_idx = household_spec.class_indexes
K_home = K_home[ix_(class_idx, class_idx)]
sus = sus[class_idx]
r_home = atleast_2d(diag(sus).dot(K_home))
states, \
reverse_prod, \
index_vector, \
rows = build_state_matrix(household_spec)
Q_int = sparse(household_spec.matrix_shape,)
inf_event_row = array([], dtype=my_int)
inf_event_col = array([], dtype=my_int)
inf_event_class = array([], dtype=my_int)
Q_int, inf_event_row, inf_event_col, inf_event_class = inf_events(s_comp,
i_comp,
[i_comp],
[1],
r_home,
density_expo,
no_compartments,
composition,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int,
inf_event_row,
inf_event_col,
inf_event_class)
Q_int = progression_events(i_comp,
r_comp,
gamma,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
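    # Fill the diagonal with minus the row sums so that Q_int is a proper
    # Markov generator matrix (every row sums to zero); the same pattern is
    # repeated in the other subsystem builders below.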
S = Q_int.sum(axis=1).getA().squeeze()
Q_int += sparse((
-S, (
arange(household_spec.total_size),
arange(household_spec.total_size)
)))
return tuple((
Q_int,
states,
array(inf_event_row, dtype=my_int, ndmin=1),
array(inf_event_col, dtype=my_int, ndmin=1),
array(inf_event_class, dtype=my_int, ndmin=1),
reverse_prod,
index_vector))
def _seir_subsystem(self, household_spec):
    '''This function processes a composition to create subsystems, i.e.
    matrices and vectors describing all possible epidemiological states
    for a given household composition.
    Assumes frequency-dependent homogeneous within-household mixing;
    composition[i] is the number of individuals in age-class i inside the
    household.'''
no_compartments = household_spec.no_compartments
s_comp, e_comp, i_comp, r_comp = range(no_compartments)
composition = household_spec.composition
matrix_shape = household_spec.matrix_shape
sus = self.model_input.sus
K_home = self.model_input.k_home
alpha = self.model_input.alpha
gamma = self.model_input.gamma
density_expo = self.model_input.density_expo
# Set of individuals actually present here
class_idx = household_spec.class_indexes
K_home = K_home[ix_(class_idx, class_idx)]
sus = sus[class_idx]
r_home = atleast_2d(diag(sus).dot(K_home))
states, \
reverse_prod, \
index_vector, \
rows = build_state_matrix(household_spec)
Q_int = sparse(household_spec.matrix_shape,)
inf_event_row = array([], dtype=my_int)
inf_event_col = array([], dtype=my_int)
inf_event_class = array([], dtype=my_int)
Q_int, inf_event_row, inf_event_col, inf_event_class = inf_events(s_comp,
i_comp,
[i_comp],
[1],
r_home,
density_expo,
no_compartments,
composition,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int,
inf_event_row,
inf_event_col,
inf_event_class)
    # E -> I progression occurs at the latent-period rate alpha.
    Q_int = progression_events(e_comp,
                               i_comp,
                               alpha,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(i_comp,
r_comp,
gamma,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
S = Q_int.sum(axis=1).getA().squeeze()
Q_int += sparse((
-S, (
arange(household_spec.total_size),
arange(household_spec.total_size)
)))
return tuple((
Q_int,
states,
array(inf_event_row, dtype=my_int, ndmin=1),
array(inf_event_col, dtype=my_int, ndmin=1),
array(inf_event_class, dtype=my_int, ndmin=1),
reverse_prod,
index_vector))
def _sepir_subsystem(self, household_spec):
    '''This function processes a composition to create subsystems, i.e.
    matrices and vectors describing all possible epidemiological states
    for a given household composition.
    Assumes frequency-dependent homogeneous within-household mixing;
    composition[i] is the number of individuals in age-class i inside the
    household.'''
no_compartments = household_spec.no_compartments
s_comp, e_comp, p_comp, i_comp, r_comp = range(no_compartments)
composition = household_spec.composition
matrix_shape = household_spec.matrix_shape
sus = self.model_input.sus
K_home = self.model_input.k_home
inf_scales = copy(self.model_input.inf_scales)
alpha_1 = self.model_input.alpha_1
alpha_2 = self.model_input.alpha_2
gamma = self.model_input.gamma
density_expo = self.model_input.density_expo
# Set of individuals actually present here
class_idx = household_spec.class_indexes
K_home = K_home[ix_(class_idx, class_idx)]
sus = sus[class_idx]
r_home = atleast_2d(diag(sus).dot(K_home))
for i in range(len(inf_scales)):
inf_scales[i] = inf_scales[i][class_idx]
states, \
reverse_prod, \
index_vector, \
rows = build_state_matrix(household_spec)
Q_int = sparse(household_spec.matrix_shape,)
inf_event_row = array([], dtype=my_int)
inf_event_col = array([], dtype=my_int)
inf_event_class = array([], dtype=my_int)
Q_int, inf_event_row, inf_event_col, inf_event_class = inf_events(s_comp,
e_comp,
[p_comp, i_comp],
inf_scales,
r_home,
density_expo,
no_compartments,
composition,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int,
inf_event_row,
inf_event_col,
inf_event_class)
Q_int = progression_events(e_comp,
p_comp,
alpha_1,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(p_comp,
i_comp,
alpha_2,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(i_comp,
r_comp,
gamma,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
S = Q_int.sum(axis=1).getA().squeeze()
Q_int += sparse((
-S, (
arange(household_spec.total_size),
arange(household_spec.total_size)
)))
return tuple((
Q_int,
states,
array(inf_event_row, dtype=my_int, ndmin=1),
array(inf_event_col, dtype=my_int, ndmin=1),
array(inf_event_class, dtype=my_int, ndmin=1),
reverse_prod,
index_vector))
def _sepirq_subsystem(self, household_spec):
    '''This function processes a composition to create subsystems, i.e.
    matrices and vectors describing all possible epidemiological states
    for a given household composition.
    Assumes frequency-dependent homogeneous within-household mixing;
    composition[i] is the number of individuals in age-class i inside the
    household.'''
no_compartments = household_spec.no_compartments
s_comp, e_comp, p_comp, i_comp, r_comp, q_comp = range(no_compartments)
composition = household_spec.composition
matrix_shape = household_spec.matrix_shape
sus = self.model_input.sus
K_home = self.model_input.k_home
inf_scales = copy(self.model_input.inf_scales)
alpha_1 = self.model_input.alpha_1
alpha_2 = self.model_input.alpha_2
gamma = self.model_input.gamma
iso_rates = deepcopy(self.model_input.iso_rates)
discharge_rate = self.model_input.discharge_rate
density_expo = self.model_input.density_expo
class_is_isolating = self.model_input.class_is_isolating
iso_method = self.model_input.iso_method
adult_bd = self.model_input.adult_bd
no_adults = sum(composition[adult_bd:])
children_present = sum(composition[:adult_bd])>0
# Set of individuals actually present here
class_idx = household_spec.class_indexes
K_home = K_home[ix_(class_idx, class_idx)]
sus = sus[class_idx]
r_home = atleast_2d(diag(sus).dot(K_home))
for i in range(len(inf_scales)):
inf_scales[i] = inf_scales[i][class_idx]
states, \
reverse_prod, \
index_vector, \
rows = build_state_matrix(household_spec)
iso_pos = q_comp + no_compartments * arange(len(class_idx))
if iso_method == "ext":
# This is number of people of each age class present in the household
# given some may isolate
iso_adjusted_comp = composition[class_idx] - states[:,iso_pos]
# Replace zeros with ones - we only ever use this as a denominator
# whose numerator will be zero anyway if it should be zero
iso_adjusted_comp[iso_adjusted_comp==0] = 1
if (iso_adjusted_comp<1).any():
pdb.set_trace()
else:
iso_adjusted_comp = \
composition[class_idx] - zeros(states[:,iso_pos].shape)
# Number of adults isolating by state
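    # The slice below starts at the Q compartment of the first adult class
    # and strides by no_compartments, picking out the Q count of each adult
    # class in turn.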
adults_isolating = \
states[:,no_compartments*adult_bd+q_comp::no_compartments].sum(axis=1)
Q_int = sparse(household_spec.matrix_shape,)
inf_event_row = array([], dtype=my_int)
inf_event_col = array([], dtype=my_int)
inf_event_class = array([], dtype=my_int)
if iso_method == 'int':
inf_comps = [p_comp, i_comp, q_comp]
else:
inf_comps = [p_comp, i_comp]
if iso_method == 'ext':
for cmp in range(len(iso_rates)):
iso_rates[cmp] = \
self.model_input.ad_prob * iso_rates[cmp][class_idx]
else:
for cmp in range(len(iso_rates)):
iso_rates[cmp] = iso_rates[cmp][class_idx]
Q_int, inf_event_row, inf_event_col, inf_event_class = \
size_adj_inf_events(s_comp,
e_comp,
inf_comps,
inf_scales,
r_home,
iso_adjusted_comp,
density_expo,
no_compartments,
composition,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int,
inf_event_row,
inf_event_col,
inf_event_class)
Q_int = progression_events(e_comp,
p_comp,
alpha_1,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(p_comp,
i_comp,
alpha_2,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(i_comp,
r_comp,
gamma,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = isolation_events(e_comp,
q_comp,
iso_rates[e_comp],
class_is_isolating,
iso_method,
adult_bd,
no_adults,
children_present,
adults_isolating,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = isolation_events(p_comp,
q_comp,
iso_rates[p_comp],
class_is_isolating,
iso_method,
adult_bd,
no_adults,
children_present,
adults_isolating,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = isolation_events(i_comp,
q_comp,
iso_rates[i_comp],
class_is_isolating,
iso_method,
adult_bd,
no_adults,
children_present,
adults_isolating,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(q_comp,
r_comp,
discharge_rate,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
S = Q_int.sum(axis=1).getA().squeeze()
Q_int += sparse((
-S, (
arange(household_spec.total_size),
arange(household_spec.total_size)
)))
return tuple((
Q_int,
states,
array(inf_event_row, dtype=my_int, ndmin=1),
array(inf_event_col, dtype=my_int, ndmin=1),
array(inf_event_class, dtype=my_int, ndmin=1),
reverse_prod,
index_vector))
def _sedur_subsystem(self, household_spec):
    '''This function processes a composition to create subsystems, i.e.
    matrices and vectors describing all possible epidemiological states
    for a given household composition.
    Assumes frequency-dependent homogeneous within-household mixing;
    composition[i] is the number of individuals in age-class i inside the
    household.'''
no_compartments = household_spec.no_compartments
s_comp, e_comp, d_comp, u_comp, r_comp = range(no_compartments)
composition = household_spec.composition
matrix_shape = household_spec.matrix_shape
sus = self.model_input.sus
det = self.model_input.det
inf_scales = copy(self.model_input.inf_scales)
K_home = self.model_input.k_home
alpha = self.model_input.alpha
gamma = self.model_input.gamma
density_expo = self.model_input.density_expo
# Set of individuals actually present here
class_idx = household_spec.class_indexes
K_home = K_home[ix_(class_idx, class_idx)]
sus = sus[class_idx]
det = det[class_idx]
r_home = atleast_2d(diag(sus).dot(K_home))
for i in range(len(inf_scales)):
inf_scales[i] = inf_scales[i][class_idx]
states, \
reverse_prod, \
index_vector, \
rows = build_state_matrix(household_spec)
Q_int = sparse(household_spec.matrix_shape,)
inf_event_row = array([], dtype=my_int)
inf_event_col = array([], dtype=my_int)
inf_event_class = array([], dtype=my_int)
Q_int, inf_event_row, inf_event_col, inf_event_class = inf_events(s_comp,
e_comp,
[d_comp,u_comp],
inf_scales,
r_home,
density_expo,
no_compartments,
composition,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int,
inf_event_row,
inf_event_col,
inf_event_class)
Q_int = stratified_progression_events(e_comp,
d_comp,
alpha*det,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = stratified_progression_events(e_comp,
u_comp,
alpha*(1-det),
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(d_comp,
r_comp,
gamma,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
Q_int = progression_events(u_comp,
r_comp,
gamma,
no_compartments,
states,
index_vector,
reverse_prod,
class_idx,
matrix_shape,
Q_int)
S = Q_int.sum(axis=1).getA().squeeze()
Q_int += sparse((
-S, (
arange(household_spec.total_size),
arange(household_spec.total_size)
)))
return tuple((
Q_int,
states,
array(inf_event_row, dtype=my_int, ndmin=1),
array(inf_event_col, dtype=my_int, ndmin=1),
array(inf_event_class, dtype=my_int, ndmin=1),
reverse_prod,
index_vector))
''' Entries in the subsystem key are in the following order: [subsystem
builder function, number of compartments, list of compartments which
contribute to infection, compartment corresponding to new infections].
'''
subsystem_key = {
    'SIR': [_sir_subsystem, 3, [1], 1],
    'SEIR': [_seir_subsystem, 4, [2], 1],
    'SEPIR': [_sepir_subsystem, 5, [2, 3], 1],
    'SEPIRQ': [_sepirq_subsystem, 6, [2, 3, 5], 1],
    'SEDUR': [_sedur_subsystem, 5, [2, 3], 1],
}
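# A minimal lookup sketch (hypothetical usage; in practice the builder is
# called by the household-population machinery with a household_spec):
#   builder, n_comps, inf_comps, new_inf_comp = subsystem_key['SEPIR']
#   assert n_comps == 5 and inf_comps == [2, 3] and new_inf_comp == 1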
| 34.885743 | 82 | 0.557823 | 3,964 | 34,502 | 4.531534 | 0.079465 | 0.03741 | 0.023381 | 0.036074 | 0.771586 | 0.753994 | 0.734287 | 0.726493 | 0.709625 | 0.706563 | 0 | 0.006221 | 0.366356 | 34,502 | 988 | 83 | 34.921053 | 0.815433 | 0.117327 | 0 | 0.791304 | 0 | 0 | 0.002778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016149 | false | 0 | 0.004969 | 0 | 0.038509 | 0.001242 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c3442d547ed16220ae290209724033dee4258e18 | 22 | py | Python | fos/shader/__init__.py | fos/fos | 8d33bf0cd60292ad5164973b5285122acbc03b86 | [
"BSD-3-Clause"
] | 5 | 2015-08-08T22:04:49.000Z | 2020-05-29T10:30:09.000Z | fos/shader/__init__.py | fos/fos | 8d33bf0cd60292ad5164973b5285122acbc03b86 | [
"BSD-3-Clause"
] | 1 | 2018-04-25T12:59:56.000Z | 2018-04-25T13:26:47.000Z | fos/shader/__init__.py | fos/fos | 8d33bf0cd60292ad5164973b5285122acbc03b86 | [
"BSD-3-Clause"
] | null | null | null | from .lib import *
| 4.4 | 18 | 0.590909 | 3 | 22 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.318182 | 22 | 4 | 19 | 5.5 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c34cd05e72bbbb731ca3f241d7d8248b672c6f6e | 64,001 | py | Python | core/test/test_signal_processing.py | ajmal017/amp | 8de7e3b88be87605ec3bad03c139ac64eb460e5c | [
"BSD-3-Clause"
] | null | null | null | core/test/test_signal_processing.py | ajmal017/amp | 8de7e3b88be87605ec3bad03c139ac64eb460e5c | [
"BSD-3-Clause"
] | null | null | null | core/test/test_signal_processing.py | ajmal017/amp | 8de7e3b88be87605ec3bad03c139ac64eb460e5c | [
"BSD-3-Clause"
] | null | null | null | import collections
import logging
import os
import pprint
from typing import Any, List, Optional, Tuple, Union
import numpy as np
import pandas as pd
import pytest
import core.artificial_signal_generators as cartif
import core.signal_processing as csigna
import helpers.git as git
import helpers.printing as hprint
import helpers.unit_test as hut
_LOG = logging.getLogger(__name__)
class Test__compute_lagged_cumsum(hut.TestCase):
def test1(self) -> None:
input_df = self._get_df()
output_df = csigna._compute_lagged_cumsum(input_df, 3)
self.check_string(
f"{hprint.frame('input')}\n"
f"{hut.convert_df_to_string(input_df, index=True)}\n"
f"{hprint.frame('output')}\n"
f"{hut.convert_df_to_string(output_df, index=True)}"
)
def test2(self) -> None:
input_df = self._get_df()
input_df.columns = ["x", "y1", "y2"]
output_df = csigna._compute_lagged_cumsum(input_df, 3, ["y1", "y2"])
self.check_string(
f"{hprint.frame('input')}\n"
f"{hut.convert_df_to_string(input_df, index=True)}\n"
f"{hprint.frame('output')}\n"
f"{hut.convert_df_to_string(output_df, index=True)}"
)
def test_lag_1(self) -> None:
input_df = self._get_df()
input_df.columns = ["x", "y1", "y2"]
output_df = csigna._compute_lagged_cumsum(input_df, 1, ["y1", "y2"])
self.check_string(
f"{hprint.frame('input')}\n"
f"{hut.convert_df_to_string(input_df, index=True)}\n"
f"{hprint.frame('output')}\n"
f"{hut.convert_df_to_string(output_df, index=True)}"
)
@staticmethod
def _get_df() -> pd.DataFrame:
df = pd.DataFrame([list(range(10))] * 3).T
df[1] = df[0] + 1
df[2] = df[0] + 2
df.index = pd.date_range(start="2010-01-01", periods=10)
df.rename(columns=lambda x: f"col_{x}", inplace=True)
return df
class Test_correlate_with_lagged_cumsum(hut.TestCase):
def test1(self) -> None:
input_df = self._get_arma_df()
output_df = csigna.correlate_with_lagged_cumsum(
input_df, 3, y_vars=["y1", "y2"]
)
self.check_string(
f"{hprint.frame('input')}\n"
f"{hut.convert_df_to_string(input_df, index=True)}\n"
f"{hprint.frame('output')}\n"
f"{hut.convert_df_to_string(output_df, index=True)}"
)
def test2(self) -> None:
input_df = self._get_arma_df()
output_df = csigna.correlate_with_lagged_cumsum(
input_df, 3, y_vars=["y1"], x_vars=["x"]
)
self.check_string(
f"{hprint.frame('input')}\n"
f"{hut.convert_df_to_string(input_df, index=True)}\n"
f"{hprint.frame('output')}\n"
f"{hut.convert_df_to_string(output_df, index=True)}"
)
@staticmethod
def _get_arma_df(seed: int = 0) -> pd.DataFrame:
arma_process = cartif.ArmaProcess([], [])
date_range = {"start": "2010-01-01", "periods": 40, "freq": "M"}
srs1 = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed
).rename("x")
srs2 = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed + 1
).rename("y1")
srs3 = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed + 2
).rename("y2")
return pd.concat([srs1, srs2, srs3], axis=1)
class Test_accumulate(hut.TestCase):
def test1(self) -> None:
srs = pd.Series(
range(0, 20), index=pd.date_range("2010-01-01", periods=20)
)
actual = csigna.accumulate(srs, num_steps=1)
expected = srs.astype(float)
pd.testing.assert_series_equal(actual, expected)
def test2(self) -> None:
idx = pd.date_range("2010-01-01", periods=10)
srs = pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], index=idx)
actual = csigna.accumulate(srs, num_steps=2)
expected = pd.Series([np.nan, 1, 3, 5, 7, 9, 11, 13, 15, 17], index=idx)
pd.testing.assert_series_equal(actual, expected)
def test3(self) -> None:
idx = pd.date_range("2010-01-01", periods=10)
srs = pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], index=idx)
actual = csigna.accumulate(srs, num_steps=3)
expected = pd.Series(
[np.nan, np.nan, 3, 6, 9, 12, 15, 18, 21, 24], index=idx
)
pd.testing.assert_series_equal(actual, expected)
def test4(self) -> None:
srs = pd.Series(
np.random.randn(100), index=pd.date_range("2010-01-01", periods=100)
)
output = pd.concat([srs, csigna.accumulate(srs, num_steps=5)], axis=1)
output.columns = ["series", "series_accumulated"]
self.check_string(hut.convert_df_to_string(output, index=True))
def test_long_step1(self) -> None:
idx = pd.date_range("2010-01-01", periods=3)
srs = pd.Series([1, 2, 3], index=idx)
actual = csigna.accumulate(srs, num_steps=5)
expected = pd.Series([np.nan, np.nan, np.nan], index=idx)
pd.testing.assert_series_equal(actual, expected)
def test_nans1(self) -> None:
idx = pd.date_range("2010-01-01", periods=10)
srs = pd.Series([0, 1, np.nan, 2, 3, 4, np.nan, 5, 6, 7], index=idx)
actual = csigna.accumulate(srs, num_steps=3)
expected = pd.Series(
[
np.nan,
np.nan,
np.nan,
np.nan,
np.nan,
9,
np.nan,
np.nan,
np.nan,
18,
],
index=idx,
)
pd.testing.assert_series_equal(actual, expected)
def test_nans2(self) -> None:
idx = pd.date_range("2010-01-01", periods=6)
srs = pd.Series([np.nan, np.nan, np.nan, 2, 3, 4], index=idx)
actual = csigna.accumulate(srs, num_steps=3)
expected = pd.Series(
[np.nan, np.nan, np.nan, np.nan, np.nan, 9], index=idx
)
pd.testing.assert_series_equal(actual, expected)
def test_nans3(self) -> None:
idx = pd.date_range("2010-01-01", periods=6)
srs = pd.Series([np.nan, np.nan, np.nan, 2, 3, 4], index=idx)
actual = csigna.accumulate(srs, num_steps=2)
expected = pd.Series([np.nan, np.nan, np.nan, np.nan, 5, 7], index=idx)
pd.testing.assert_series_equal(actual, expected)
class Test_get_symmetric_equisized_bins(hut.TestCase):
def test_zero_in_bin_interior_false(self) -> None:
input_ = pd.Series([-1, 3])
expected = np.array([-3, -2, -1, 0, 1, 2, 3])
actual = csigna.get_symmetric_equisized_bins(input_, 1)
np.testing.assert_array_equal(actual, expected)
def test_zero_in_bin_interior_true(self) -> None:
input_ = pd.Series([-1, 3])
expected = np.array([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5])
actual = csigna.get_symmetric_equisized_bins(input_, 1, True)
np.testing.assert_array_equal(actual, expected)
def test_infs(self) -> None:
data = pd.Series([-1, np.inf, -np.inf, 3])
expected = np.array([-4, -2, 0, 2, 4])
actual = csigna.get_symmetric_equisized_bins(data, 2)
np.testing.assert_array_equal(actual, expected)
class Test_compute_rolling_zscore1(hut.TestCase):
def test_default_values1(self) -> None:
"""
Test with default parameters on a heaviside series.
"""
heaviside = cartif.get_heaviside(-10, 252, 1, 1).rename("input")
actual = csigna.compute_rolling_zscore(heaviside, tau=40).rename("output")
output_df = pd.concat([heaviside, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_default_values2(self) -> None:
"""
Test for tau with default parameters on a heaviside series.
"""
heaviside = cartif.get_heaviside(-10, 252, 1, 1).rename("input")
actual = csigna.compute_rolling_zscore(heaviside, tau=20).rename("output")
output_df = pd.concat([heaviside, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_clean1(self) -> None:
"""
Test on a clean arma series.
"""
series = self._get_arma_series(seed=1)
actual = csigna.compute_rolling_zscore(series, tau=20).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_nan1(self) -> None:
"""
Test on an arma series with leading NaNs.
"""
series = self._get_arma_series(seed=1)
series[:5] = np.nan
actual = csigna.compute_rolling_zscore(series, tau=20).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_nan2(self) -> None:
"""
Test on an arma series with interspersed NaNs.
"""
series = self._get_arma_series(seed=1)
series[5:10] = np.nan
actual = csigna.compute_rolling_zscore(series, tau=20).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_zero1(self) -> None:
"""
Test on an arma series with leading zeros.
"""
series = self._get_arma_series(seed=1)
series[:5] = 0
actual = csigna.compute_rolling_zscore(series, tau=20).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_zero2(self) -> None:
"""
Test on an arma series with interspersed zeros.
"""
series = self._get_arma_series(seed=1)
series[5:10] = 0
actual = csigna.compute_rolling_zscore(series, tau=20).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_atol1(self) -> None:
"""
Test on an arma series with all-zeros period and `atol>0`.
"""
series = self._get_arma_series(seed=1)
series[10:25] = 0
actual = csigna.compute_rolling_zscore(series, tau=2, atol=0.01).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_inf1(self) -> None:
"""
Test on an arma series with leading infs.
"""
series = self._get_arma_series(seed=1)
series[:5] = np.inf
actual = csigna.compute_rolling_zscore(series, tau=20).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_arma_inf2(self) -> None:
"""
Test on an arma series with interspersed infs.
"""
series = self._get_arma_series(seed=1)
series[5:10] = np.inf
actual = csigna.compute_rolling_zscore(series, tau=20).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_clean1(self) -> None:
"""
Test on a clean arma series when `delay=1`.
"""
series = self._get_arma_series(seed=1)
actual = csigna.compute_rolling_zscore(series, tau=20, delay=1).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_nan1(self) -> None:
"""
Test on an arma series with leading NaNs when `delay=1`.
"""
series = self._get_arma_series(seed=1)
series[:5] = np.nan
actual = csigna.compute_rolling_zscore(series, tau=20, delay=1).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_nan2(self) -> None:
"""
Test on an arma series with interspersed NaNs when `delay=1`.
"""
series = self._get_arma_series(seed=1)
series[5:10] = np.nan
actual = csigna.compute_rolling_zscore(series, tau=20, delay=1).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_zero1(self) -> None:
"""
Test on an arma series with leading zeros when `delay=1`.
"""
series = self._get_arma_series(seed=1)
series[:5] = 0
actual = csigna.compute_rolling_zscore(series, tau=20, delay=1).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_zero2(self) -> None:
"""
Test on an arma series with interspersed zeros when `delay=1`.
"""
series = self._get_arma_series(seed=1)
series[5:10] = 0
actual = csigna.compute_rolling_zscore(series, tau=20, delay=1).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_atol1(self) -> None:
"""
Test on an arma series with all-zeros period, `delay=1` and `atol>0`.
"""
series = self._get_arma_series(seed=1)
series[10:25] = 0
actual = csigna.compute_rolling_zscore(
series, tau=2, delay=1, atol=0.01
).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_inf1(self) -> None:
"""
Test on an arma series with leading infs when `delay=1`.
"""
series = self._get_arma_series(seed=1)
series[:5] = np.inf
actual = csigna.compute_rolling_zscore(series, tau=20, delay=1).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay1_arma_inf2(self) -> None:
"""
Test on an arma series with interspersed infs when `delay=1`.
"""
series = self._get_arma_series(seed=1)
series[5:10] = np.inf
actual = csigna.compute_rolling_zscore(series, tau=20, delay=1).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_clean1(self) -> None:
"""
Test on a clean arma series when `delay=2`.
"""
series = self._get_arma_series(seed=1)
actual = csigna.compute_rolling_zscore(series, tau=20, delay=2).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_nan1(self) -> None:
"""
Test on an arma series with leading NaNs when `delay=2`.
"""
series = self._get_arma_series(seed=1)
series[:5] = np.nan
actual = csigna.compute_rolling_zscore(series, tau=20, delay=2).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_nan2(self) -> None:
"""
Test on an arma series with interspersed NaNs when `delay=2`.
"""
series = self._get_arma_series(seed=1)
series[5:10] = np.nan
actual = csigna.compute_rolling_zscore(series, tau=20, delay=2).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_zero1(self) -> None:
"""
Test on an arma series with leading zeros when `delay=2`.
"""
series = self._get_arma_series(seed=1)
series[:5] = 0
actual = csigna.compute_rolling_zscore(series, tau=20, delay=2).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_zero2(self) -> None:
"""
Test on an arma series with interspersed zeros when `delay=2`.
"""
series = self._get_arma_series(seed=1)
series[5:10] = 0
actual = csigna.compute_rolling_zscore(series, tau=20, delay=2).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_atol1(self) -> None:
"""
Test on an arma series with all-zeros period, `delay=2` and `atol>0`.
"""
series = self._get_arma_series(seed=1)
series[10:25] = 0
actual = csigna.compute_rolling_zscore(
series, tau=2, delay=2, atol=0.01
).rename("output")
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_inf1(self) -> None:
"""
Test on an arma series with leading infs when `delay=2`.
"""
series = self._get_arma_series(seed=1)
series[:5] = np.inf
actual = csigna.compute_rolling_zscore(series, tau=20, delay=2).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
def test_delay2_arma_inf2(self) -> None:
"""
Test on an arma series with interspersed infs when `delay=2`.
"""
series = self._get_arma_series(seed=1)
series[5:10] = np.inf
actual = csigna.compute_rolling_zscore(series, tau=20, delay=2).rename(
"output"
)
output_df = pd.concat([series, actual], axis=1)
output_df_string = hut.convert_df_to_string(output_df, index=True)
self.check_string(output_df_string)
@staticmethod
def _get_arma_series(seed: int) -> pd.Series:
arma_process = cartif.ArmaProcess([1], [1])
date_range = {"start": "1/1/2010", "periods": 40, "freq": "M"}
series = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed
).rename("input")
return series
class Test_process_outliers1(hut.TestCase):
def test_winsorize1(self) -> None:
srs = self._get_data1()
mode = "winsorize"
lower_quantile = 0.01
# Check.
self._helper(srs, mode, lower_quantile)
def test_set_to_nan1(self) -> None:
srs = self._get_data1()
mode = "set_to_nan"
lower_quantile = 0.01
# Check.
self._helper(srs, mode, lower_quantile)
def test_set_to_zero1(self) -> None:
srs = self._get_data1()
mode = "set_to_zero"
lower_quantile = 0.01
# Check.
self._helper(srs, mode, lower_quantile)
def test_winsorize2(self) -> None:
srs = self._get_data2()
mode = "winsorize"
lower_quantile = 0.2
# Check.
self._helper(srs, mode, lower_quantile, num_df_rows=len(srs))
def test_set_to_nan2(self) -> None:
srs = self._get_data2()
mode = "set_to_nan"
lower_quantile = 0.2
# Check.
self._helper(srs, mode, lower_quantile, num_df_rows=len(srs))
def test_set_to_zero2(self) -> None:
srs = self._get_data2()
mode = "set_to_zero"
lower_quantile = 0.2
upper_quantile = 0.5
# Check.
self._helper(
srs,
mode,
lower_quantile,
num_df_rows=len(srs),
upper_quantile=upper_quantile,
)
def _helper(
self,
srs: pd.Series,
mode: str,
lower_quantile: float,
num_df_rows: int = 10,
window: int = 100,
min_periods: Optional[int] = 2,
**kwargs: Any,
) -> None:
info: collections.OrderedDict = collections.OrderedDict()
srs_out = csigna.process_outliers(
srs,
mode,
lower_quantile,
window=window,
min_periods=min_periods,
info=info,
**kwargs,
)
txt = []
txt.append("# info")
txt.append(pprint.pformat(info))
txt.append("# srs_out")
txt.append(str(srs_out.head(num_df_rows)))
self.check_string("\n".join(txt))
@staticmethod
def _get_data1() -> pd.Series:
np.random.seed(100)
n = 100000
data = np.random.normal(loc=0.0, scale=1.0, size=n)
return pd.Series(data)
@staticmethod
def _get_data2() -> pd.Series:
return pd.Series(range(1, 10))
class Test_compute_smooth_derivative1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
scaling = 2
order = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_smooth_derivative(
signal, tau, min_periods, scaling, order
)
self.check_string(actual.to_string())
class Test_compute_smooth_moving_average1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
min_depth = 1
max_depth = 5
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_smooth_moving_average(
signal, tau, min_periods, min_depth, max_depth
)
self.check_string(actual.to_string())
class Test_extract_smooth_moving_average_weights(hut.TestCase):
def test1(self) -> None:
df = pd.DataFrame(index=range(0, 20))
weights = csigna.extract_smooth_moving_average_weights(
df,
tau=1.4,
index_location=15,
)
actual = hut.convert_df_to_string(
weights.round(5), index=True, decimals=5
)
self.check_string(actual)
def test2(self) -> None:
df = pd.DataFrame(index=range(0, 20))
weights = csigna.extract_smooth_moving_average_weights(
df,
tau=16,
index_location=15,
)
actual = hut.convert_df_to_string(
weights.round(5), index=True, decimals=5
)
self.check_string(actual)
def test3(self) -> None:
df = pd.DataFrame(index=range(0, 20))
weights = csigna.extract_smooth_moving_average_weights(
df,
tau=16,
min_depth=2,
max_depth=2,
index_location=15,
)
actual = hut.convert_df_to_string(
weights.round(5), index=True, decimals=5
)
self.check_string(actual)
def test4(self) -> None:
df = pd.DataFrame(
index=pd.date_range(start="2001-01-04", end="2001-01-31", freq="B")
)
weights = csigna.extract_smooth_moving_average_weights(
df,
tau=16,
index_location="2001-01-24",
)
actual = hut.convert_df_to_string(
weights.round(5), index=True, decimals=5
)
self.check_string(actual)
def test5(self) -> None:
df = pd.DataFrame(
index=pd.date_range(start="2001-01-04", end="2001-01-31", freq="B")
)
weights = csigna.extract_smooth_moving_average_weights(
df,
tau=252,
index_location="2001-01-24",
)
actual = hut.convert_df_to_string(
weights.round(5), index=True, decimals=5
)
self.check_string(actual)
def test6(self) -> None:
df = pd.DataFrame(
index=pd.date_range(start="2001-01-04", end="2001-01-31", freq="B")
)
weights = csigna.extract_smooth_moving_average_weights(
df,
tau=252,
)
actual = hut.convert_df_to_string(
weights.round(5), index=True, decimals=5
)
self.check_string(actual)
class Test_digitize1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
bins = [0, 0.2, 0.4]
right = False
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.digitize(signal, bins, right)
self.check_string(actual.to_string())
def test_heaviside1(self) -> None:
heaviside = cartif.get_heaviside(-10, 20, 1, 1)
bins = [0, 0.2, 0.4]
right = False
actual = csigna.digitize(heaviside, bins, right)
self.check_string(actual.to_string())
class Test_compute_rolling_moment1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_moment(
signal, tau, min_periods, min_depth, max_depth, p_moment
)
self.check_string(actual.to_string())
class Test_compute_rolling_norm1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_norm(
signal, tau, min_periods, min_depth, max_depth, p_moment
)
self.check_string(actual.to_string())
class Test_compute_rolling_var1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_var(
signal, tau, min_periods, min_depth, max_depth, p_moment
)
self.check_string(actual.to_string())
class Test_compute_rolling_std1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_std(
signal, tau, min_periods, min_depth, max_depth, p_moment
)
self.check_string(actual.to_string())
class Test_compute_rolling_demean1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
min_depth = 1
max_depth = 5
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_demean(
signal, tau, min_periods, min_depth, max_depth
)
self.check_string(actual.to_string())
class Test_compute_rolling_skew1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau_z = 40
tau_s = 20
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_skew(
signal, tau_z, tau_s, min_periods, min_depth, max_depth, p_moment
)
self.check_string(actual.to_string())
class Test_compute_rolling_kurtosis1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau_z = 40
tau_s = 20
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_kurtosis(
signal, tau_z, tau_s, min_periods, min_depth, max_depth, p_moment
)
self.check_string(actual.to_string())
class Test_compute_rolling_sharpe_ratio1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
signal = pd.Series(np.random.randn(n))
actual = csigna.compute_rolling_sharpe_ratio(
signal, tau, min_periods, min_depth, max_depth, p_moment
)
self.check_string(actual.to_string())
class Test_compute_rolling_corr1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
demean = True
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
df = pd.DataFrame(np.random.randn(n, 2))
signal1 = df[0]
signal2 = df[1]
actual = csigna.compute_rolling_corr(
signal1,
signal2,
tau,
demean,
min_periods,
min_depth,
max_depth,
p_moment,
)
self.check_string(actual.to_string())
class Test_compute_rolling_zcorr1(hut.TestCase):
def test1(self) -> None:
np.random.seed(42)
tau = 40
demean = True
min_periods = 20
min_depth = 1
max_depth = 5
p_moment = 2
n = 1000
df = pd.DataFrame(np.random.randn(n, 2))
signal1 = df[0]
signal2 = df[1]
actual = csigna.compute_rolling_zcorr(
signal1,
signal2,
tau,
demean,
min_periods,
min_depth,
max_depth,
p_moment,
)
self.check_string(actual.to_string())
class Test_compute_ipca(hut.TestCase):
def test1(self) -> None:
"""
Test for a clean input.
"""
df = self._get_df(seed=1)
num_pc = 3
tau = 16
lambda_df, unit_eigenvec_dfs = csigna.compute_ipca(df, num_pc, tau)
unit_eigenvec_dfs_txt = "\n".join(
[f"{i}:\n{df.to_string()}" for i, df in enumerate(unit_eigenvec_dfs)]
)
txt = (
f"lambda_df:\n{lambda_df.to_string()}\n, "
f"unit_eigenvecs_dfs:\n{unit_eigenvec_dfs_txt}"
)
self.check_string(txt)
def test2(self) -> None:
"""
Test for an input with leading NaNs in only a subset of cols.
"""
df = self._get_df(seed=1)
df.iloc[0:3, :-3] = np.nan
num_pc = 3
tau = 16
lambda_df, unit_eigenvec_dfs = csigna.compute_ipca(df, num_pc, tau)
unit_eigenvec_dfs_txt = "\n".join(
[f"{i}:\n{df.to_string()}" for i, df in enumerate(unit_eigenvec_dfs)]
)
txt = (
f"lambda_df:\n{lambda_df.to_string()}\n, "
f"unit_eigenvecs_dfs:\n{unit_eigenvec_dfs_txt}"
)
self.check_string(txt)
def test3(self) -> None:
"""
Test for an input with interspersed NaNs.
"""
df = self._get_df(seed=1)
df.iloc[5:8, 3:5] = np.nan
df.iloc[2:4, 8:] = np.nan
num_pc = 3
tau = 16
lambda_df, unit_eigenvec_dfs = csigna.compute_ipca(df, num_pc, tau)
unit_eigenvec_dfs_txt = "\n".join(
[f"{i}:\n{df.to_string()}" for i, df in enumerate(unit_eigenvec_dfs)]
)
txt = (
f"lambda_df:\n{lambda_df.to_string()}\n, "
f"unit_eigenvecs_dfs:\n{unit_eigenvec_dfs_txt}"
)
self.check_string(txt)
def test4(self) -> None:
"""
        Test for an input with a full-NaN row among the first 3 rows.

        The eigenvalue estimates aren't in sorted order but should be.
        TODO(*): Fix the problem with unsorted eigenvalue estimates.
"""
df = self._get_df(seed=1)
df.iloc[1:2, :] = np.nan
num_pc = 3
tau = 16
lambda_df, unit_eigenvec_dfs = csigna.compute_ipca(df, num_pc, tau)
unit_eigenvec_dfs_txt = "\n".join(
[f"{i}:\n{df.to_string()}" for i, df in enumerate(unit_eigenvec_dfs)]
)
txt = (
f"lambda_df:\n{lambda_df.to_string()}\n, "
f"unit_eigenvecs_dfs:\n{unit_eigenvec_dfs_txt}"
)
self.check_string(txt)
def test5(self) -> None:
"""
Test for an input with 5 leading NaNs in all cols.
"""
df = self._get_df(seed=1)
df.iloc[:5, :] = np.nan
num_pc = 3
tau = 16
lambda_df, unit_eigenvec_dfs = csigna.compute_ipca(df, num_pc, tau)
unit_eigenvec_dfs_txt = "\n".join(
[f"{i}:\n{df.to_string()}" for i, df in enumerate(unit_eigenvec_dfs)]
)
txt = (
f"lambda_df:\n{lambda_df.to_string()}\n, "
f"unit_eigenvecs_dfs:\n{unit_eigenvec_dfs_txt}"
)
self.check_string(txt)
def test6(self) -> None:
"""
Test for interspersed all-NaNs rows.
"""
df = self._get_df(seed=1)
df.iloc[0:1, :] = np.nan
df.iloc[2:3, :] = np.nan
num_pc = 3
tau = 16
lambda_df, unit_eigenvec_dfs = csigna.compute_ipca(df, num_pc, tau)
unit_eigenvec_dfs_txt = "\n".join(
[f"{i}:\n{df.to_string()}" for i, df in enumerate(unit_eigenvec_dfs)]
)
txt = (
f"lambda_df:\n{lambda_df.to_string()}\n, "
f"unit_eigenvecs_dfs:\n{unit_eigenvec_dfs_txt}"
)
self.check_string(txt)
@staticmethod
def _get_df(seed: int) -> pd.DataFrame:
"""
Generate a dataframe via `cartif.MultivariateNormalProcess()`.
"""
mn_process = cartif.MultivariateNormalProcess()
mn_process.set_cov_from_inv_wishart_draw(dim=10, seed=seed)
df = mn_process.generate_sample(
{"start": "2000-01-01", "periods": 40, "freq": "B"}, seed=seed
)
return df
class Test__compute_ipca_step(hut.TestCase):
def test1(self) -> None:
"""
Test for clean input series.
"""
mn_process = cartif.MultivariateNormalProcess()
mn_process.set_cov_from_inv_wishart_draw(dim=10, seed=1)
df = mn_process.generate_sample(
{"start": "2000-01-01", "periods": 10, "freq": "B"}, seed=1
)
u = df.iloc[1]
v = df.iloc[2]
alpha = 0.5
u_next, v_next = csigna._compute_ipca_step(u, v, alpha)
txt = self._get_output_txt(u, v, u_next, v_next)
self.check_string(txt)
def test2(self) -> None:
"""
Test for input series with all zeros.
"""
mn_process = cartif.MultivariateNormalProcess()
mn_process.set_cov_from_inv_wishart_draw(dim=10, seed=1)
df = mn_process.generate_sample(
{"start": "2000-01-01", "periods": 10, "freq": "B"}, seed=1
)
u = df.iloc[1]
v = df.iloc[2]
u[:] = 0
v[:] = 0
alpha = 0.5
u_next, v_next = csigna._compute_ipca_step(u, v, alpha)
txt = self._get_output_txt(u, v, u_next, v_next)
self.check_string(txt)
def test3(self) -> None:
"""
Test that u == u_next for the case when np.linalg.norm(v)=0.
"""
mn_process = cartif.MultivariateNormalProcess()
mn_process.set_cov_from_inv_wishart_draw(dim=10, seed=1)
df = mn_process.generate_sample(
{"start": "2000-01-01", "periods": 10, "freq": "B"}, seed=1
)
u = df.iloc[1]
v = df.iloc[2]
v[:] = 0
alpha = 0.5
u_next, v_next = csigna._compute_ipca_step(u, v, alpha)
txt = self._get_output_txt(u, v, u_next, v_next)
self.check_string(txt)
def test4(self) -> None:
"""
Test for input series with all NaNs.
        The output is not as intended.
TODO(Dan): implement a way to deal with NaNs in the input.
"""
mn_process = cartif.MultivariateNormalProcess()
mn_process.set_cov_from_inv_wishart_draw(dim=10, seed=1)
df = mn_process.generate_sample(
{"start": "2000-01-01", "periods": 10, "freq": "B"}, seed=1
)
u = df.iloc[1]
v = df.iloc[2]
u[:] = np.nan
v[:] = np.nan
alpha = 0.5
u_next, v_next = csigna._compute_ipca_step(u, v, alpha)
txt = self._get_output_txt(u, v, u_next, v_next)
self.check_string(txt)
def test5(self) -> None:
"""
Test for input series with some NaNs.
        The output is not as intended.
"""
mn_process = cartif.MultivariateNormalProcess()
mn_process.set_cov_from_inv_wishart_draw(dim=10, seed=1)
df = mn_process.generate_sample(
{"start": "2000-01-01", "periods": 10, "freq": "B"}, seed=1
)
u = df.iloc[1]
v = df.iloc[2]
u[3:6] = np.nan
v[5:8] = np.nan
alpha = 0.5
u_next, v_next = csigna._compute_ipca_step(u, v, alpha)
txt = self._get_output_txt(u, v, u_next, v_next)
self.check_string(txt)
@staticmethod
def _get_output_txt(
u: pd.Series, v: pd.Series, u_next: pd.Series, v_next: pd.Series
) -> str:
"""
Create string output for tests results.
"""
u_string = hut.convert_df_to_string(u, index=True)
v_string = hut.convert_df_to_string(v, index=True)
u_next_string = hut.convert_df_to_string(u_next, index=True)
v_next_string = hut.convert_df_to_string(v_next, index=True)
txt = (
f"u:\n{u_string}\n"
f"v:\n{v_string}\n"
f"u_next:\n{u_next_string}\n"
f"v_next:\n{v_next_string}"
)
return txt
@pytest.mark.slow
class Test_gallery_signal_processing1(hut.TestCase):
def test_notebook1(self) -> None:
file_name = os.path.join(
git.get_amp_abs_path(),
"core/notebooks/gallery_signal_processing.ipynb",
)
scratch_dir = self.get_scratch_space()
hut.run_notebook(file_name, scratch_dir)
class TestProcessNonfinite1(hut.TestCase):
def test1(self) -> None:
series = self._get_messy_series(1)
actual = csigna.process_nonfinite(series)
actual_string = hut.convert_df_to_string(actual, index=True)
self.check_string(actual_string)
def test2(self) -> None:
series = self._get_messy_series(1)
actual = csigna.process_nonfinite(series, remove_nan=False)
actual_string = hut.convert_df_to_string(actual, index=True)
self.check_string(actual_string)
def test3(self) -> None:
series = self._get_messy_series(1)
actual = csigna.process_nonfinite(series, remove_inf=False)
actual_string = hut.convert_df_to_string(actual, index=True)
self.check_string(actual_string)
@staticmethod
def _get_messy_series(seed: int) -> pd.Series:
arparams = np.array([0.75, -0.25])
maparams = np.array([0.65, 0.35])
arma_process = cartif.ArmaProcess(arparams, maparams)
date_range = {"start": "1/1/2010", "periods": 40, "freq": "M"}
series = arma_process.generate_sample(
date_range_kwargs=date_range, seed=seed
)
series[:5] = 0
series[-5:] = np.nan
series[10:13] = np.inf
series[13:16] = -np.inf
return series
class Test_compute_rolling_annualized_sharpe_ratio(hut.TestCase):
def test1(self) -> None:
ar_params: List[float] = []
ma_params: List[float] = []
arma_process = cartif.ArmaProcess(ar_params, ma_params)
realization = arma_process.generate_sample(
{"start": "2000-01-01", "periods": 40, "freq": "B"},
scale=1,
burnin=5,
)
rolling_sr = csigna.compute_rolling_annualized_sharpe_ratio(
realization, tau=16
)
self.check_string(hut.convert_df_to_string(rolling_sr, index=True))
class Test_get_swt(hut.TestCase):
def test_clean1(self) -> None:
"""
Test for default values.
"""
series = self._get_series(seed=1, periods=40)
actual = csigna.get_swt(series, wavelet="haar")
output_str = self._get_tuple_output_txt(actual)
self.check_string(output_str)
def test_timing_mode1(self) -> None:
"""
Test for timing_mode="knowledge_time".
"""
series = self._get_series(seed=1)
actual = csigna.get_swt(
series, wavelet="haar", timing_mode="knowledge_time"
)
output_str = self._get_tuple_output_txt(actual)
self.check_string(output_str)
def test_timing_mode2(self) -> None:
"""
Test for timing_mode="zero_phase".
"""
series = self._get_series(seed=1)
actual = csigna.get_swt(series, wavelet="haar", timing_mode="zero_phase")
output_str = self._get_tuple_output_txt(actual)
self.check_string(output_str)
def test_timing_mode3(self) -> None:
"""
Test for timing_mode="raw".
"""
series = self._get_series(seed=1)
actual = csigna.get_swt(series, wavelet="haar", timing_mode="raw")
output_str = self._get_tuple_output_txt(actual)
self.check_string(output_str)
def test_output_mode1(self) -> None:
"""
Test for output_mode="tuple".
"""
series = self._get_series(seed=1)
actual = csigna.get_swt(series, wavelet="haar", output_mode="tuple")
output_str = self._get_tuple_output_txt(actual)
self.check_string(output_str)
def test_output_mode2(self) -> None:
"""
Test for output_mode="smooth".
"""
series = self._get_series(seed=1)
actual = csigna.get_swt(series, wavelet="haar", output_mode="smooth")
actual_str = hut.convert_df_to_string(actual, index=True)
output_str = f"smooth_df:\n{actual_str}\n"
self.check_string(output_str)
def test_output_mode3(self) -> None:
"""
Test for output_mode="detail".
"""
series = self._get_series(seed=1)
actual = csigna.get_swt(series, wavelet="haar", output_mode="detail")
actual_str = hut.convert_df_to_string(actual, index=True)
output_str = f"detail_df:\n{actual_str}\n"
self.check_string(output_str)
@staticmethod
def _get_series(seed: int, periods: int = 20) -> pd.Series:
arma_process = cartif.ArmaProcess([0], [0])
date_range = {"start": "1/1/2010", "periods": periods, "freq": "M"}
series = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed
)
return series
@staticmethod
def _get_tuple_output_txt(
output: Union[pd.DataFrame, Tuple[pd.DataFrame, pd.DataFrame]]
) -> str:
"""
Create string output for a tuple type return.
"""
smooth_df_string = hut.convert_df_to_string(output[0], index=True)
detail_df_string = hut.convert_df_to_string(output[1], index=True)
output_str = (
f"smooth_df:\n{smooth_df_string}\n"
f"\ndetail_df\n{detail_df_string}\n"
)
return output_str
class Test_compute_swt_var(hut.TestCase):
def test1(self) -> None:
srs = self._get_data(seed=0)
swt_var = csigna.compute_swt_var(srs, depth=6)
actual = swt_var.count().values[0]
np.testing.assert_equal(actual, 1179)
def test2(self) -> None:
srs = self._get_data(seed=0)
swt_var = csigna.compute_swt_var(srs, depth=6)
actual = swt_var.sum()
np.testing.assert_allclose(actual, [1102.66], atol=0.01)
def test3(self) -> None:
srs = self._get_data(seed=0)
swt_var = csigna.compute_swt_var(srs, depth=6, axis=1)
actual = swt_var.sum()
np.testing.assert_allclose(actual, [1102.66], atol=0.01)
def _get_data(self, seed: int) -> pd.Series:
process = cartif.ArmaProcess([], [])
realization = process.generate_sample(
{"start": "2000-01-01", "end": "2005-01-01", "freq": "B"}, seed=seed
)
return realization
class Test_resample_srs(hut.TestCase):
    # TODO(gp): Replace `check_string()` with `assert_equal()` for tests that benefit
    # from seeing / freezing the results, using a command like:
# ```
# > invoke find_check_string_output -c Test_resample_srs -m test_day_to_year1
# ```
# Converting days to other units.
def test_day_to_year1(self) -> None:
"""
Test freq="D", unit="Y".
"""
series = self._get_series(seed=1, periods=9, freq="D")
rule = "Y"
actual_default = (
csigna.resample(series, rule=rule)
.sum()
.rename(f"Output in freq='{rule}'")
)
actual_closed_left = (
csigna.resample(series, rule=rule, closed="left")
.sum()
.rename(f"Output in freq='{rule}'")
)
act = self._get_output_txt(series, actual_default, actual_closed_left)
exp = r"""
Input:
Input in freq='D'
2014-12-26 0.162435
2014-12-27 0.263693
2014-12-28 0.149701
2014-12-29 -0.010413
2014-12-30 -0.031170
2014-12-31 -0.174783
2015-01-01 -0.230455
2015-01-02 -0.132095
2015-01-03 -0.176312
Output with default arguments:
Output in freq='Y'
2014-12-31 0.359463
2015-12-31 -0.538862
Output with closed='left':
Output in freq='Y'
2014-12-31 0.534246
2015-12-31 -0.713644
""".lstrip().rstrip()
self.assert_equal(act, exp, fuzzy_match=True)
def test_day_to_month1(self) -> None:
"""
Test freq="D", unit="M".
"""
series = self._get_series(seed=1, periods=9, freq="D")
actual_default = (
csigna.resample(series, rule="M").sum().rename("Output in freq='M'")
)
actual_closed_left = (
csigna.resample(series, rule="M", closed="left")
.sum()
.rename("Output in freq='M'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
def test_day_to_week1(self) -> None:
"""
Test freq="D", unit="W".
"""
series = self._get_series(seed=1, periods=9, freq="D")
actual_default = (
csigna.resample(series, rule="W").sum().rename("Output in freq='W'")
)
actual_closed_left = (
csigna.resample(series, rule="W", closed="left")
.sum()
.rename("Output in freq='W'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
def test_day_to_business_day1(self) -> None:
"""
Test freq="D", unit="B".
"""
series = self._get_series(seed=1, periods=9, freq="D")
actual_default = (
csigna.resample(series, rule="B").sum().rename("Output in freq='B'")
)
actual_closed_left = (
csigna.resample(series, rule="B", closed="left")
.sum()
.rename("Output in freq='B'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
# Equal frequency resampling.
def test_only_day1(self) -> None:
"""
Test freq="D", unit="D".
"""
series = self._get_series(seed=1, periods=9, freq="D")
actual_default = (
csigna.resample(series, rule="D").sum().rename("Output in freq='D'")
)
actual_closed_left = (
csigna.resample(series, rule="D", closed="left")
.sum()
.rename("Output in freq='D'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
def test_only_minute1(self) -> None:
"""
Test freq="T", unit="T".
"""
series = self._get_series(seed=1, periods=9, freq="T")
actual_default = (
csigna.resample(series, rule="T").sum().rename("Output in freq='T'")
)
actual_closed_left = (
csigna.resample(series, rule="T", closed="left")
.sum()
.rename("Output in freq='T'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
def test_only_business_day1(self) -> None:
"""
Test freq="B", unit="B".
"""
series = self._get_series(seed=1, periods=9, freq="B")
actual_default = (
csigna.resample(series, rule="B").sum().rename("Output in freq='B'")
)
actual_closed_left = (
csigna.resample(series, rule="B", closed="left")
.sum()
.rename("Output in freq='B'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
# Upsampling.
def test_upsample_month_to_day1(self) -> None:
"""
Test freq="M", unit="D".
"""
series = self._get_series(seed=1, periods=3, freq="M")
actual_default = (
csigna.resample(series, rule="D").sum().rename("Output in freq='D'")
)
actual_closed_left = (
csigna.resample(series, rule="D", closed="left")
.sum()
.rename("Output in freq='D'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
def test_upsample_business_day_to_day1(self) -> None:
"""
Test freq="B", unit="D".
"""
series = self._get_series(seed=1, periods=9, freq="B")
actual_default = (
csigna.resample(series, rule="D").sum().rename("Output in freq='D'")
)
actual_closed_left = (
csigna.resample(series, rule="D", closed="left")
.sum()
.rename("Output in freq='D'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
# Resampling freq-less series.
def test_no_freq_day_to_business_day1(self) -> None:
"""
Test for an input without `freq`.
"""
series = self._get_series(seed=1, periods=9, freq="D").rename(
"Input with no freq"
)
# Remove some observations in order to make `freq` None.
series = series.drop(series.index[3:7])
actual_default = (
csigna.resample(series, rule="B").sum().rename("Output in freq='B'")
)
actual_closed_left = (
csigna.resample(series, rule="B", closed="left")
.sum()
.rename("Output in freq='B'")
)
txt = self._get_output_txt(series, actual_default, actual_closed_left)
self.check_string(txt)
@staticmethod
def _get_series(seed: int, periods: int, freq: str) -> pd.Series:
"""
Periods include:
        26/12/2014 - Friday, workday, 5th DoW
        27/12/2014 - Saturday, weekend, 6th DoW
        28/12/2014 - Sunday, weekend, 7th DoW
        29/12/2014 - Monday, workday, 1st DoW
        30/12/2014 - Tuesday, workday, 2nd DoW
        31/12/2014 - Wednesday, workday, 3rd DoW
        01/01/2015 - Thursday, workday, 4th DoW
        02/01/2015 - Friday, workday, 5th DoW
        03/01/2015 - Saturday, weekend, 6th DoW
"""
arma_process = cartif.ArmaProcess([1], [1])
date_range = {"start": "2014-12-26", "periods": periods, "freq": freq}
series = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed
).rename(f"Input in freq='{freq}'")
return series
@staticmethod
def _get_output_txt(
input_data: pd.Series,
output_default: pd.Series,
output_closed_left: pd.Series,
) -> str:
"""
Create string output for tests results.
"""
input_string = hut.convert_df_to_string(input_data, index=True)
output_default_string = hut.convert_df_to_string(
output_default, index=True
)
output_closed_left_string = hut.convert_df_to_string(
output_closed_left, index=True
)
txt = (
f"Input:\n{input_string}\n\n"
f"Output with default arguments:\n{output_default_string}\n\n"
f"Output with closed='left':\n{output_closed_left_string}\n"
)
return txt
class Test_resample_df(hut.TestCase):
# Converting days to other units.
def test_day_to_year1(self) -> None:
"""
Test freq="D", unit="Y".
"""
df = self._get_df(seed=1, periods=9, freq="D")
actual_default = csigna.resample(df, rule="Y").sum()
actual_default.columns = [
"1st output in freq='Y'",
"2nd output in freq='Y'",
]
actual_closed_left = csigna.resample(df, rule="Y", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='Y'",
"2nd output in freq='Y'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
def test_day_to_month1(self) -> None:
"""
Test freq="D", unit="M".
"""
df = self._get_df(seed=1, periods=9, freq="D")
actual_default = csigna.resample(df, rule="M").sum()
actual_default.columns = [
"1st output in freq='M'",
"2nd output in freq='M'",
]
actual_closed_left = csigna.resample(df, rule="M", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='M'",
"2nd output in freq='M'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
def test_day_to_week1(self) -> None:
"""
Test freq="D", unit="W".
"""
df = self._get_df(seed=1, periods=9, freq="D")
actual_default = csigna.resample(df, rule="W").sum()
actual_default.columns = [
"1st output in freq='W'",
"2nd output in freq='W'",
]
actual_closed_left = csigna.resample(df, rule="W", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='W'",
"2nd output in freq='W'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
def test_day_to_business_day1(self) -> None:
"""
Test freq="D", unit="B".
"""
df = self._get_df(seed=1, periods=9, freq="D")
actual_default = csigna.resample(df, rule="B").sum()
actual_default.columns = [
"1st output in freq='B'",
"2nd output in freq='B'",
]
actual_closed_left = csigna.resample(df, rule="B", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='B'",
"2nd output in freq='B'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
# Equal frequency resampling.
def test_only_day1(self) -> None:
"""
Test freq="D", unit="D".
"""
df = self._get_df(seed=1, periods=9, freq="D")
actual_default = csigna.resample(df, rule="D").sum()
actual_default.columns = [
"1st output in freq='D'",
"2nd output in freq='D'",
]
actual_closed_left = csigna.resample(df, rule="D", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='D'",
"2nd output in freq='D'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
def test_only_minute1(self) -> None:
"""
Test freq="T", unit="T".
"""
df = self._get_df(seed=1, periods=9, freq="T")
actual_default = csigna.resample(df, rule="T").sum()
actual_default.columns = [
"1st output in freq='T'",
"2nd output in freq='T'",
]
actual_closed_left = csigna.resample(df, rule="T", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='T'",
"2nd output in freq='T'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
def test_only_business_day1(self) -> None:
"""
Test freq="B", unit="B".
"""
df = self._get_df(seed=1, periods=9, freq="B")
actual_default = csigna.resample(df, rule="B").sum()
actual_default.columns = [
"1st output in freq='B'",
"2nd output in freq='B'",
]
actual_closed_left = csigna.resample(df, rule="B", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='B'",
"2nd output in freq='B'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
# Upsampling.
def test_upsample_month_to_day1(self) -> None:
"""
Test freq="M", unit="D".
"""
df = self._get_df(seed=1, periods=3, freq="M")
actual_default = csigna.resample(df, rule="D").sum()
actual_default.columns = [
"1st output in freq='D'",
"2nd output in freq='D'",
]
actual_closed_left = csigna.resample(df, rule="D", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='D'",
"2nd output in freq='D'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
def test_upsample_business_day_to_day1(self) -> None:
"""
Test freq="B", unit="D".
"""
df = self._get_df(seed=1, periods=9, freq="B")
actual_default = csigna.resample(df, rule="D").sum()
actual_default.columns = [
"1st output in freq='D'",
"2nd output in freq='D'",
]
actual_closed_left = csigna.resample(df, rule="D", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='D'",
"2nd output in freq='D'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
# Resampling freq-less series.
def test_no_freq_day_to_business_day1(self) -> None:
"""
Test for an input without `freq`.
"""
df = self._get_df(seed=1, periods=9, freq="D")
df.columns = ["1st input with no freq", "2nd input with no freq"]
# Remove some observations in order to make `freq` None.
df = df.drop(df.index[3:7])
actual_default = csigna.resample(df, rule="B").sum()
actual_default.columns = [
"1st output in freq='B'",
"2nd output in freq='B'",
]
actual_closed_left = csigna.resample(df, rule="B", closed="left").sum()
actual_closed_left.columns = [
"1st output in freq='B'",
"2nd output in freq='B'",
]
txt = self._get_output_txt(df, actual_default, actual_closed_left)
self.check_string(txt)
@staticmethod
def _get_df(seed: int, periods: int, freq: str) -> pd.DataFrame:
"""
Periods include:
        26/12/2014 - Friday, workday, 5th DoW
        27/12/2014 - Saturday, weekend, 6th DoW
        28/12/2014 - Sunday, weekend, 7th DoW
        29/12/2014 - Monday, workday, 1st DoW
        30/12/2014 - Tuesday, workday, 2nd DoW
        31/12/2014 - Wednesday, workday, 3rd DoW
        01/01/2015 - Thursday, workday, 4th DoW
        02/01/2015 - Friday, workday, 5th DoW
        03/01/2015 - Saturday, weekend, 6th DoW
"""
arma_process = cartif.ArmaProcess([1], [1])
date_range = {"start": "2014-12-26", "periods": periods, "freq": freq}
srs_1 = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed
).rename(f"1st input in freq='{freq}'")
srs_2 = arma_process.generate_sample(
date_range_kwargs=date_range, scale=0.1, seed=seed + 1
).rename(f"2nd input in freq='{freq}'")
df = pd.DataFrame([srs_1, srs_2]).T
return df
@staticmethod
def _get_output_txt(
input_data: pd.DataFrame,
output_default: pd.DataFrame,
output_closed_left: pd.DataFrame,
) -> str:
"""
Create string output for tests results.
"""
input_string = hut.convert_df_to_string(input_data, index=True)
output_default_string = hut.convert_df_to_string(
output_default, index=True
)
output_closed_left_string = hut.convert_df_to_string(
output_closed_left, index=True
)
txt = (
f"Input:\n{input_string}\n\n"
f"Output with default arguments:\n{output_default_string}\n\n"
f"Output with closed='left':\n{output_closed_left_string}\n"
)
return txt
class Test_calculate_inverse(hut.TestCase):
def test1(self) -> None:
df = pd.DataFrame([[1, 2], [3, 4]])
inverse_df = hut.convert_df_to_string(
csigna.calculate_inverse(df), index=True
)
self.check_string(inverse_df)
class Test_calculate_pseudoinverse(hut.TestCase):
def test1(self) -> None:
df = pd.DataFrame([[1, 2], [3, 4], [5, 6]])
inverse_df = hut.convert_df_to_string(
csigna.calculate_pseudoinverse(df), index=True
)
self.check_string(inverse_df)
# --- iotbx/dsn6/__init__.py (repo: rimmartin/cctbx_project, license: BSD-3-Clause-LBNL) ---
# TODO TESTS
from __future__ import division
import cctbx.array_family.flex # import dependency
import boost.python
ext = boost.python.import_ext("iotbx_dsn6_map_ext")
from iotbx_dsn6_map_ext import *
import iotbx_dsn6_map_ext as ext
# --- tessellate/utils/getRing.py (repo: scientificomputing/tessellate, license: Apache-2.0) ---
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
class dd_dict(dict): # the dd is for "deferred delete"
_deletes = None
def __delitem__(self, key):
if key not in self:
raise KeyError(str(key))
dict.__delitem__(self, key) if self._deletes is None else self._deletes.add(key)
def __enter__(self):
self._deletes = set()
def __exit__(self, type, value, tb):
for key in self._deletes:
try:
dict.__delitem__(self, key)
except KeyError:
pass
self._deletes = None
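# A minimal usage sketch for dd_dict (illustrative only; the keys are made up).
# Inside a `with` block, deletions are queued rather than applied, so keys can
# be deleted while iterating; the queued deletes run when the block exits.
#
#     d = dd_dict(a=1, b=2)
#     with d:
#         for key in d:
#             del d[key]  # deferred, does not disturb the iteration
#     assert d == {}      # both deletes were applied on exit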
# Common orderings of ring-atom naming systems for 5- and 6-membered rings.
common5rings = [("C2'", "C3'", "C4'", "O4'", "C1'"), ("C2R", "C3R", "C4R", "O4R", "C1R")]
common6rings = [("C3'", "C4'", "C5'", "O5'", "C1'", "C2'"), ("C3A", "C4A", "C5A", "O5A", "C1A", "C2A"),
("C3", "C4", "C5", "O5", "C1", "C2")]
commonrings = common5rings + common6rings
def getcommonring(ring):
"""
    Common atom-naming systems for rings: this routine is used to automatically
    reorder ring-atom guesses so that they provide the expected (canonical) results.
"""
thiscycle = {}
if len(ring) == 5:
import re
for atom in ring: # note this is for a five ring...
if (atom.startswith('C1') or atom.endswith('C1')) and ('1' in re.findall(r'\d+',atom)): # make sure not matching C13
thiscycle['C1'] = atom
elif (atom.startswith('C2') or atom.endswith('C2')) and ('2' in re.findall(r'\d+',atom)): # make sure not matching C23
thiscycle['C2'] = atom
elif (atom.startswith('C3') or atom.endswith('C3')) and ('3' in re.findall(r'\d+',atom)): # make sure not matching C33
thiscycle['C3'] = atom
elif (atom.startswith('C4') or atom.endswith('C4')) and ('4' in re.findall(r'\d+',atom)): # make sure not matching C44
thiscycle['C4'] = atom
elif atom.startswith('O') or atom.endswith('O'):
thiscycle['O4'] = atom
else:
# there is an atom that is not common. So cannot apply this ordering trick.
return None
try:
return (thiscycle['C2'], thiscycle['C3'], thiscycle['C4'], thiscycle['O4'], thiscycle['C1'])
except:
return None
elif len(ring) == 6:
import re
for atom in ring: # note this is for a six ring...
if (atom.startswith('C1') or atom.endswith('C1')) and ('1' in re.findall(r'\d+',atom)): # make sure not matching C13
thiscycle['C1'] = atom
elif (atom.startswith('C2') or atom.endswith('C2')) and ('2' in re.findall(r'\d+',atom)):
thiscycle['C2'] = atom
elif (atom.startswith('C3') or atom.endswith('C3')) and ('3' in re.findall(r'\d+',atom)):
thiscycle['C3'] = atom
elif (atom.startswith('C4') or atom.endswith('C4')) and ('4' in re.findall(r'\d+',atom)):
thiscycle['C4'] = atom
elif (atom.startswith('C5') or atom.endswith('C5')) and ('5' in re.findall(r'\d+',atom)):
thiscycle['C5'] = atom
elif atom.startswith('O') or atom.endswith('O'):
thiscycle['O5'] = atom
else:
# what about C10 O9 C14 C13 C12 C11 - should reverse to C11 C12 C13 C14 O9 C10
# if 10>9 and 10<11 and 11 < 12 then reverse
# import copy
# allatoms = " ".join(ring)
# intsinatoms=re.findall(r'\d+',allatoms)
# logger.debug("ORDERING %s %s", ring, intsinatoms)
# if int(intsinatoms[0])> int(intsinatoms[1]) and int(intsinatoms[0])<int(intsinatoms[5]) and int(intsinatoms[5])< int(intsinatoms[4]):
# localcopyofring=copy.deepcopy(ring)
# #popped=localcopyofring.pop()
# #localcopyofring.insert(0,popped)
# localcopyofring.reverse()
# logger.debug("REVERSE %s", localcopyofring)
# return localcopyofring
                # Get O into position 3 (of indices 0,1,2,3,...), then check the numbering.
                # NOTE: this depends on atom numbering. If C1 is numbered higher than C4,
                # then the non-contextualised conformer will seem incorrect.
indices = []
for i, elem in enumerate(ring):
if 'O' in elem:
indices.append(i)
if len(indices) == 1:
opos=indices[0]
if opos==3:
return ring
pos=opos
while pos!=3:
popped=ring.pop()
ring.insert(0,popped)
indices=[]
for i, elem in enumerate(ring):
if 'O' in elem:
indices.append(i)
pos=indices[0]
logger.debug("POPPEDTOPOS4 %s", ring)
try:
if int(re.findall(r'\d+',ring[4])[0]) < int(re.findall(r'\d+',ring[0])[0]) :
logger.debug("RETURN4 %s", ring)
return ring
else:
popped=ring.pop()
ring.insert(0,popped)
logger.debug("POPPEDAGAIN %s", ring)
return ring
except:
return None
else:
return None
try:
return (
thiscycle['C3'], thiscycle['C4'], thiscycle['C5'], thiscycle['O5'], thiscycle['C1'], thiscycle['C2'])
except:
return None
else:
return None
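# Doctest-style sketch for getcommonring (atom names assumed to follow the
# common sugar-ring conventions listed above):
#
#     >>> getcommonring(["C3'", "O4'", "C1'", "C2'", "C4'"])
#     ("C2'", "C3'", "C4'", "O4'", "C1'")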
def getring(startatom, atomset):
"""getRing(startatom, atomset, lookup, oatoms)->atoms, bonds
starting at startatom do a bfs traversal through the atoms
in atomset and return the smallest ring found
returns (), () on failure
note: atoms and bonds are not returned in traversal order"""
path = {}
bpaths = {}
for atomID in atomset.keys():
# initially the paths are empty
path[atomID] = []
bpaths[atomID] = []
    # ... adapted from the Figueras paper.
    # The BFS algorithm: we wish to find the smallest ring in the molecule that includes startatom.
    # We assign a path, path[i], to each node i; the path contains the nodes on the path from the starting node to node i.
    # Initialise all the paths to null, then assign values to the paths in the starting node:
    # for subnodes in atomset[startatom]:
    #     path[subnodes] = [startatom, subnodes]
    # Then check whether path[subnodes] is empty.
    # In fact we start at the starting node but process all nodes...
q = []
# Initialize the queue with nodes attached to rootNode
# and initialise these paths
for subnodes in atomset[startatom]:
q.append([startatom, subnodes])
path[subnodes] = [startatom, subnodes]
logger.debug("getRING q nodes %s",q)
logger.debug("getRING path nodes %s",path)
# loop while the queue size is greater than zero (it exists)
while q:
root, node = q.pop()
for subnodes in atomset[node]:
logger.debug("getRING subnodes %s in set %s, root is %s", subnodes, atomset[node], root)
if subnodes != root: # node shouldn't be start atom but check...
# check if path is empty or not
if not path[subnodes]: # if empty assign path as root path + subnodes
path[subnodes] = path[node] + [subnodes]
logger.debug("getRING had no paths now path %s from node %s paths %s and itself %s", path[subnodes], node, path[node],[subnodes])
q.append([node, subnodes])
else: # possible ring closure
# compute the intersection of path[root], path[subnodes], it must be a singleton i.e. one element
intersection = set(path[node]) & set(path[subnodes])
logger.debug("getRING intersection %s", intersection)
#print "INTERSECTION ", set(path[node])&set(path[subnodes]), len(intersection)
if len(intersection) == int(1):
logger.debug("getRING use intersection %s is size %i", intersection, len(intersection))
#union
union = set(path[node]) | set(path[subnodes])
logger.debug("getRING union %s", union)
avail = sorted(list(union)) # sort here to prevent dupl
logger.debug("getRING avail union %s", avail)
chosen = []
                        if len(union) < 5 or len(union) > 400:  # rings outside the size range of interest are pointless to consider
return ()
while avail:
if not chosen:
chosen.append(avail[0])
del avail[0]
else:
lastadded = chosen[-1]
for children in atomset[lastadded]:
if children not in chosen and children in avail: # child must not be used and must be part of the atoms in the ring
chosen.append(children)
del avail[avail.index(children)]
break
return chosen
else: # ignore path
logger.debug("getRING pass on intersection %s is size %i", intersection, len(intersection))
pass
else:
logger.debug("subnode=root %s %s", subnodes, root)
logger.debug("returning nothing")
return () # for subnodes in atomSet[startAtom]:
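# A minimal sketch for getring (all atom names are made up): a 5-cycle
# A-B-C-D-E with one pendant atom F attached to E; atomset maps each atom to
# the list of its neighbours.
#
#     >>> cycle = {'A': ['B', 'E'], 'B': ['A', 'C'], 'C': ['B', 'D'],
#     ...          'D': ['C', 'E'], 'E': ['D', 'A', 'F'], 'F': ['E']}
#     >>> getring('A', cycle)
#     ['A', 'B', 'C', 'D', 'E']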
def min_degree(edges):
"""getRing(edges)
loop through all edges and calculate the min degree of the current nodes in the graph
returns an int min_degree
This is this literally the lowest available degree. A graph with 1,2 and 3 degree nodes has a mindegree of 1"""
min_atom, min_degree = None, int(10000)
# loop over edges
for atom in edges:
if len(edges[atom]) < min_degree:
min_atom = atom
min_degree = len(edges[atom])
return min_atom, min_degree
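# Doctest-style sketch for min_degree (a made-up three-node path graph):
#
#     >>> min_degree({'A': ['B'], 'B': ['A', 'C'], 'C': ['B']})
#     ('A', 1)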
def create_graph_and_find_rings_suite(atomlist, mineuclid=1.1, maxeuclid=2.0):
""" find possible rings """
try:
# import getRing
import tessellate.utils.getRing as getRing
except Exception as e:
print("Error - Cannot import module ", e)
exit(1)
    SSSR = []
    SSSR1 = getRing.create_graph_and_find_rings_d3069(atomlist, mineuclid, maxeuclid)
    SSSR2 = getRing.create_graph_and_find_rings_6abb(atomlist, mineuclid, maxeuclid)
    SSSR.extend(SSSR1)
    for itm in SSSR2:
        if itm not in SSSR:
            SSSR.append(itm)
    return SSSR
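# A minimal usage sketch (hypothetical coordinates): atomlist is a list of
# (name, numpy-array-coordinate) pairs, and atoms whose separation lies inside
# (mineuclid, maxeuclid) are treated as bonded. A regular hexagon with 1.5 A
# sides bonds only adjacent atoms, so one six-membered ring is expected. The
# same calling convention applies to the _d3069 and _6abb variants below.
#
#     import numpy as np
#     angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
#     hexagon = [(f"C{i}", np.array([1.5 * np.cos(a), 1.5 * np.sin(a), 0.0]))
#                for i, a in enumerate(angles)]
#     rings = create_graph_and_find_rings_suite(hexagon)
#     # expected: [['C0', 'C1', 'C2', 'C3', 'C4', 'C5']]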
def make_unique(original_list):
    """Return a copy of original_list with duplicates removed, preserving order."""
    unique_list = []
    for x in original_list:
        if x not in unique_list:
            unique_list.append(x)
return unique_list
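# Doctest-style sketch:
#
#     >>> make_unique([1, 2, 2, 3, 1])
#     [1, 2, 3]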
def create_graph_and_find_rings_d3069(atomlist, mineuclid=1.1, maxeuclid=2.0):
"""
:type atomlist: list
:type mineuclid: float
:type maxeuclid: float
:rtype : list
"""
try:
#import getRing
import tessellate.utils.getRing as getRing
import itertools
import numpy as np
except Exception as e:
print("Error - Cannot import module ", e)
exit(1)
SSSR = [] # keep track of all the rings
edges = {}
for a, b in itertools.combinations(atomlist, 2):
# work out euclidean distance and choose to call this an edge if mineuclid<dist<maxeuclid
dist = np.linalg.norm(a[1] - b[1])
if maxeuclid > dist > mineuclid:
try:
edges[a[0]].append(b[0])
except:
edges[a[0]] = [b[0]]
try:
edges[b[0]].append(a[0])
except:
edges[b[0]] = [a[0]]
# debug print out the edges
for atom in edges:
logger.debug('Atom edges info %s %s %s', atom, edges[atom], len(edges[atom]))
# Now recursively remove all terminal nodes i.e.with only one edge
alledges = dict(edges)
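    # NB: dict(edges) makes a shallow copy, so the adjacency lists in alledges are shared with edges and see later in-place removals.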
deletededges = {}
#logger.debug("allededges dict %s",alledges)
#logger.debug("edges %s",edges)
    if edges:
while edges:
for atom in dict(edges).keys():
N2nodes = []
N3nodes = []
logger.debug("all edge keys %s",edges.keys())
logger.debug("Current edge %s",atom)
if atom in deletededges.keys():
logger.critical("revisiting a deleted edge")
exit(1)
if int(len(edges[atom])) == int(0): # trim degree zero nodes
logger.debug("Zero edge node deleted %s",atom)
deletededges[atom]=edges[atom]
del edges[atom] # cannot delete dictionary while iterating use a while
elif int(len(edges[atom])) == int(1): # trim degree one nodes
# pop it from other atoms
logger.debug("One edge node %s with %i edges %s removed from parent node edges list %s",atom, len(edges[atom]),edges[atom],edges[edges[atom][0]])
edges[edges[atom][0]].remove(atom)
edges[atom] = []
deletededges[atom]=edges[atom]
del edges[atom] # cannot delete dictionary while iterating use a while
elif int(len(edges[atom])) == int(2): # find nodes of degree 2 and add to N2 nodes
logger.debug("Two edge node %s with %i edges",atom, len(edges[atom]))
N2nodes.append(atom)
elif int(len(edges[atom])) == int(3): # find nodes of degree 2 and add to N3 nodes
logger.debug("Three edge node %s with %i edges",atom, len(edges[atom]))
N3nodes.append(atom)
if getRing.min_degree(edges)[1] == int(2) and N2nodes: # the minimum degree of the entire graph
logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
for N2 in N2nodes:
ring = getRing.getring(N2, alledges) # give this ring the entire graph
logger.debug("ring from getring %s", ring)
if len(ring) > 0:
if ring in SSSR: # if exists then ignore, not sorting as a unique ring is being offered by getRing
logger.debug("ring already in SSSR %s %i %s", ring, len(ring),type(ring))
pass
else:
SSSR.append(ring)
else:
pass
try: # try isolate and eliminate one N2 node
logger.debug("isolate N2 node %s with %i edges and break one bond ", N2, len(edges[N2]) )
bondtobreak=edges[N2].pop()
logger.debug("break %s with current edges %s ", bondtobreak, edges[bondtobreak] )
edges[bondtobreak].remove(N2)
except:
logger.debug("N2 not eliminated")
pass
elif getRing.min_degree(edges)[1] == int(3) and N3nodes: # the minimum degree of the entire graph
logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
logger.debug("N3 d3069")
for N3 in N3nodes:
ring = getRing.getring(N3, alledges) # give this ring the entire graph
logger.debug("ring from getring %s", ring)
if len(ring) > 0:
if ring in SSSR: # if exists then ignore, not sorting as a unique ring is being offered by getRing
logger.debug("ring already in SSSR %s %i %s", ring, len(ring),type(ring))
pass
else:
SSSR.append(ring)
else:
pass
try: # try isolate and eliminate one N3 node
if len(edges[N3])>1:
logger.debug("isolate N3 node %s with %i edges and break one bond ", N3, len(edges[N3]) )
bondtobreak=edges[N3].pop()
logger.debug("break %s with current edges %s ", bondtobreak, edges[bondtobreak] )
edges[bondtobreak].remove(N3)
except Exception as e:
logger.error(e)
raise e
if SSSR:
logger.debug("SSSR %s", SSSR)
return SSSR
def create_graph_and_find_rings_old(atomlist, mineuclid=1.1, maxeuclid=2.0):
"""
:type atomlist: list
:type mineuclid: float
:type maxeuclid: float
:rtype : list
"""
try:
# import getRing
import tessellate.utils.getRing as getRing
import itertools
import numpy as np
except Exception as e:
print("Error - Cannot import module ", e)
exit(1)
SSSR = [] # keep track of all the rings
edges = {}
for a, b in itertools.combinations(atomlist, 2):
# work out euclidean distance and choose to call this an edge if mineuclid<dist<maxeuclid
dist = np.linalg.norm(a[1] - b[1])
if maxeuclid > dist > mineuclid:
try:
edges[a[0]].append(b[0])
except:
edges[a[0]] = [b[0]]
try:
edges[b[0]].append(a[0])
except:
edges[b[0]] = [a[0]]
for atom in edges:
logger.debug('Atom edges info %s %s %s', atom, edges[atom], len(edges[atom]))
# Now recursively remove all terminal nodes i.e.with only one edge
alledges = dict(edges)
    if edges:
while edges:
for atom in dict(edges).keys():
N2nodes = []
logger.debug("all edge keys %s",edges.keys())
logger.debug("Current edge %s",atom)
if int(len(edges[atom])) == int(0): # trim degree zero nodes
logger.debug("Zero edge node deleted %s",atom)
del edges[atom] # cannot delete dictionary while iterating use a while
elif int(len(edges[atom])) == int(1): # trim degree one nodes
# pop it from other atoms
logger.debug("One edge node %s with %i edges %s removed from parent node edges list %s",atom, len(edges[atom]),edges[atom],edges[edges[atom][0]])
edges[edges[atom][0]].remove(atom)
edges[atom] = []
del edges[atom] # cannot delete dictionary while iterating use a while
elif int(len(edges[atom])) == int(2): # find nodes of degree 2 and add to N2 nodes
logger.debug("Two edge node %s with %i edges",atom, len(edges[atom]))
N2nodes.append(atom)
if getRing.min_degree(edges)[1] == int(2): # the minimum degree of the entire graph
logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
for N2 in N2nodes:
ring = getRing.getring(N2, alledges) # give this ring the entire graph
logger.debug("ring from getring %s", ring)
if len(ring) > 0:
if ring in SSSR: # if exists then ignore, not sorting as a unique ring is being offered by getRing
logger.debug("ring already in SSSR %s %i %s", ring, len(ring),type(ring))
pass
else:
SSSR.append(ring)
else:
pass
try: # try isolate and eliminate one N2 node
logger.debug("isolate N2 node %s with %i edges and break one bond ", N2, len(edges[N2]) )
                            bondtobreak = edges[N2].pop()
logger.debug("break %s with current edges %s ", bondtobreak, edges[bondtobreak] )
edges[bondtobreak].remove(N2)
                        except Exception:  # N2 or its partner may already have been trimmed
logger.debug("Couldn't eliminate N2")
pass
                elif getRing.min_degree(edges)[1] == 3:  # the minimum degree of the entire graph
logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
ring = getRing.getring(atom, alledges)
logger.debug("ring from getring %s", ring)
if len(ring) > 0:
if ring in SSSR:
pass
else:
SSSR.append(ring)
else:
pass
try: # select an optimum edge for elimination. trial each edge in alledges
logger.error("FUTURETODO - N3 check edges not yet implemented only applicable to cages etc." )
exit(1)
                    except Exception:  # let the SystemExit raised by exit(1) above propagate
pass
if SSSR:
logger.debug("SSSR %s", SSSR)
return SSSR
def create_graph_and_find_rings_6abb(atomlist, mineuclid=1.1, maxeuclid=2.0):
"""
:type atomlist: list
:type mineuclid: float
:type maxeuclid: float
:rtype : list
"""
try:
# import getRing
import tessellate.utils.getRing as getRing
import itertools
import numpy as np
except Exception as e:
print("Error - Cannot import module ", e)
exit(1)
SSSR = [] # keep track of all the rings
edges = {}
for a, b in itertools.combinations(atomlist, 2):
# work out euclidean distance and choose to call this an edge if mineuclid<dist<maxeuclid
dist = np.linalg.norm(a[1] - b[1])
if maxeuclid > dist > mineuclid:
#edges.append([a,b]) # this works but is difficult to remove edges later
try:
edges[a[0]].append(b[0])
            except KeyError:  # no edge list for a[0] yet
edges[a[0]] = [b[0]]
try:
edges[b[0]].append(a[0])
            except KeyError:  # no edge list for b[0] yet
edges[b[0]] = [a[0]]
for atom in edges:
logger.debug('Atom edges info %s %s %s', atom, edges[atom], len(edges[atom]))
    # Now iteratively remove all terminal nodes, i.e. those with only one edge
alledges = dict(edges)
deletededges = {}
    if edges:
while edges:
N2nodes = []
N3nodes = []
for atom in dict(edges).keys():
logger.debug("all edge keys %s",edges.keys())
logger.debug("Current edge %s",atom)
if atom in deletededges.keys():
logger.critical("revisiting a deleted edge")
exit(1)
                if len(edges[atom]) == 0:  # trim degree zero nodes
logger.debug("Zero edge node deleted %s",atom)
deletededges[atom]=edges[atom]
                    del edges[atom]  # safe: the loop iterates over a dict() copy of edges
                elif len(edges[atom]) == 1:  # trim degree one nodes
# pop it from other atoms
logger.debug("One edge node %s with %i edges %s removed from parent node edges list %s",atom, len(edges[atom]),edges[atom],edges[edges[atom][0]])
edges[edges[atom][0]].remove(atom)
edges[atom] = []
deletededges[atom]=edges[atom]
#logger.critical("Deleting atom %s %s",atom, deletededges)
                    del edges[atom]  # safe: the loop iterates over a dict() copy of edges
                elif len(edges[atom]) == 2:  # find nodes of degree 2 and add to N2 nodes
logger.debug("Two edge node %s with %i edges",atom, len(edges[atom]))
N2nodes.append(atom)
                elif len(edges[atom]) == 3:  # find nodes of degree 3 and add to N3 nodes
logger.debug("Three edge node %s with %i edges",atom, len(edges[atom]))
N3nodes.append(atom)
            if getRing.min_degree(edges)[1] == 2 and N2nodes:  # the minimum degree of the entire graph
logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
for N2 in N2nodes:
ring = getRing.getring(N2, alledges) # give this ring the entire graph
logger.debug("ring from getring %s", ring)
if len(ring) > 0:
if ring in SSSR: # if exists then ignore, not sorting as a unique ring is being offered by getRing
logger.debug("ring already in SSSR %s %i %s", ring, len(ring),type(ring))
else:
SSSR.append(ring)
logger.debug("isolate N2 node %s with %i edges and break one bond ", N2nodes[0], len(edges[N2nodes[0]]) )
                bondtobreak = edges[N2nodes[0]].pop(0)
logger.debug("break %s with current edges %s ", bondtobreak, edges[bondtobreak] )
edges[bondtobreak].remove(N2nodes[0])
            elif getRing.min_degree(edges)[1] == 3 and N3nodes:  # the minimum degree of the entire graph
logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
logger.debug("N3 6abb")
for N3 in N3nodes:
ring = getRing.getring(N3, alledges) # give this ring the entire graph
logger.debug("ring from getring %s", ring)
if len(ring) > 0:
if ring in SSSR: # if exists then ignore, not sorting as a unique ring is being offered by getRing
logger.debug("ring already in SSSR %s %i %s", ring, len(ring),type(ring))
pass
else:
SSSR.append(ring)
else:
pass
try: # try isolate and eliminate one N3 node
                        if len(edges[N3]) > 1:
logger.debug("isolate N3 node %s with %i edges and break one bond ", N3, len(edges[N3]) )
                            bondtobreak = edges[N3].pop()
logger.debug("break %s with current edges %s ", bondtobreak, edges[bondtobreak] )
edges[bondtobreak].remove(N3)
except Exception as e:
logger.error(e)
raise e
# if edges != []:
# while edges:
# N2nodes = []
# N3nodes = []
# #for atom in dict(edges).keys():
# for atom in list(edges.keys()):
# logger.debug("all edge keys %s",edges.keys())
# logger.debug("deleted edge keys %s",deletededges.keys())
# logger.debug("Current edge %s",atom)
# if atom in deletededges.keys():
# logger.critical("revisiting a deleted edge")
# exit(1)
#
# if int(len(edges[atom])) == int(0): # trim degree zero nodes
# logger.debug("Zero edge node deleted %s",atom)
# deletededges[atom]=edges[atom]
# del edges[atom] # cannot delete dictionary while iterating use a while
# elif int(len(edges[atom])) == int(1): # trim degree one nodes
# # pop it from other atoms
# logger.debug("One edge node %s with %i edges %s removed from parent node edges list %s",atom, len(edges[atom]),edges[atom],edges[edges[atom][0]])
# edges[edges[atom][0]].remove(atom)
# edges[atom] = []
# deletededges[atom]=edges[atom]
# #logger.critical("Deleting atom %s %s",atom, deletededges)
# del edges[atom] # cannot delete dictionary while iterating use a while
# elif int(len(edges[atom])) == int(2): # find nodes of degree 2 and add to N2 nodes
# logger.debug("Two edge node %s with %i edges",atom, len(edges[atom]))
# N2nodes.append(atom)
# elif int(len(edges[atom])) == int(3): # find nodes of degree 2 and add to N3 nodes
# logger.debug("Three edge node %s with %i edges",atom, len(edges[atom]))
# N3nodes.append(atom)
#
# if getRing.min_degree(edges)[1] == int(2) and N2nodes: # the minimum degree of the entire graph
# logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
# for N2 in N2nodes:
# ring = getRing.getring(N2, alledges) # give this ring the entire graph
# logger.debug("ring from getring %s", ring)
# if len(ring) > 0:
# if ring in SSSR: # if exists then ignore, not sorting as a unique ring is being offered by getRing
# logger.debug("ring already in SSSR %s %i %s", ring, len(ring),type(ring))
# else:
# SSSR.append(ring)
# logger.debug("isolate N2 node %s with %i edges and break one bond ", N2nodes[0], len(edges[N2nodes[0]]) )
# bondtobreak=edges[N2nodes[0]].pop(0)
# logger.debug("break %s with current edges %s ", bondtobreak, edges[bondtobreak] )
# edges[bondtobreak].remove(N2nodes[0])
#
# elif getRing.min_degree(edges)[1] == int(3) and N3nodes: # the minimum degree of the entire graph
# logger.debug("current min degree %i", getRing.min_degree(edges)[1] )
# logger.debug("N3 6abb")
# for N3 in N3nodes:
# ring = getRing.getring(N3, alledges) # give this ring the entire graph
# logger.debug("ring from getring %s", ring)
# if len(ring) > 0:
# if ring in SSSR: # if exists then ignore, not sorting as a unique ring is being offered by getRing
# logger.debug("ring already in SSSR %s %i %s", ring, len(ring),type(ring))
# pass
# else:
# SSSR.append(ring)
# else:
# pass
# try: # try isolate and eliminate one N3 node
# logger.debug("isolate N3 node %s with %i edges and break one bond ", N3, len(edges[N3]) )
# bondtobreak=edges[N3].pop()
# logger.debug("break %s with current edges %s ", bondtobreak, edges[bondtobreak] )
# edges[bondtobreak].remove(N3)
# except Exception as e:
# logger.error(e)
# print(e)
if SSSR:
logger.debug("SSSR %s", SSSR)
return SSSR
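if __name__ == '__main__':
    # Illustrative usage sketch (assumed, not part of the original module).
    # atomlist elements are (label, coordinates) pairs: edges are keyed on
    # a[0]/b[0] and bond lengths come from np.linalg.norm(a[1] - b[1]).
    import numpy as np
    triangle = [
        ('C1', np.array([0.00, 0.00, 0.00])),
        ('C2', np.array([1.50, 0.00, 0.00])),
        ('C3', np.array([0.75, 1.30, 0.00])),
    ]
    # All three pairwise distances fall inside the default 1.1-2.0 window,
    # so a single three-membered ring is expected.
    print(create_graph_and_find_rings_6abb(triangle))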
| 51.629577 | 166 | 0.505961 | 4,280 | 36,657 | 4.310748 | 0.089486 | 0.064986 | 0.024715 | 0.012466 | 0.761626 | 0.7471 | 0.736369 | 0.726883 | 0.711545 | 0.70206 | 0 | 0.021041 | 0.389339 | 36,657 | 709 | 167 | 51.702398 | 0.803172 | 0.403197 | 0 | 0.736486 | 0 | 0 | 0.107359 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024775 | false | 0.036036 | 0.038288 | 0 | 0.112613 | 0.009009 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f2abe8cd021cfc3a2b9a1151c68a7b8fe051a62 | 113 | py | Python | python/map/map.py | hulkcoder/studycoder | 93e05ff6c8778ac5698284c2b0862bab7e58b696 | [
"MIT"
] | null | null | null | python/map/map.py | hulkcoder/studycoder | 93e05ff6c8778ac5698284c2b0862bab7e58b696 | [
"MIT"
] | 9 | 2020-02-25T22:02:21.000Z | 2022-03-30T23:06:06.000Z | python/map/map.py | bodhileafy/studycoder | 07b526b8cc040ed3b3baf17ae0d93eac83a48d7a | [
"MIT"
] | null | null | null | print("map")
a = "["
x = range(30000,32767,1)
for i in range(30000,32768,1):
a = a + str(i) + ","
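# Note: 'x' above is unused, and the loop leaves a trailing comma before the
# closing bracket. An equivalent build without quadratic string concatenation
# (a sketch, not in the original) would be:
#   "[" + ",".join(str(i) for i in range(30000, 32768)) + ",]"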
print(a + "]") | 18.833333 | 30 | 0.530973 | 21 | 113 | 2.857143 | 0.619048 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244444 | 0.20354 | 113 | 6 | 31 | 18.833333 | 0.422222 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.333333 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f344cd75700d42f0d8fa6ffba402679bdf47ebc | 38,490 | py | Python | tests/backend/builders/test_sdist.py | ashemedai/hatch | 9ec00d5e027c992efbc16dd777b1f6926368b6bf | [
"MIT"
] | null | null | null | tests/backend/builders/test_sdist.py | ashemedai/hatch | 9ec00d5e027c992efbc16dd777b1f6926368b6bf | [
"MIT"
] | null | null | null | tests/backend/builders/test_sdist.py | ashemedai/hatch | 9ec00d5e027c992efbc16dd777b1f6926368b6bf | [
"MIT"
] | null | null | null | import os
import tarfile
import pytest
from hatchling.builders.plugin.interface import BuilderInterface
from hatchling.builders.sdist import SdistBuilder
from hatchling.builders.utils import get_reproducible_timestamp
from hatchling.metadata.utils import get_core_metadata_constructors
def test_class():
assert issubclass(SdistBuilder, BuilderInterface)
def test_default_versions(isolation):
builder = SdistBuilder(str(isolation))
assert builder.get_default_versions() == ['standard']
class TestSupportLegacy:
def test_default(self, isolation):
builder = SdistBuilder(str(isolation))
assert builder.support_legacy is builder.support_legacy is False
def test_target(self, isolation):
config = {'tool': {'hatch': {'build': {'targets': {'sdist': {'support-legacy': True}}}}}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.support_legacy is builder.support_legacy is True
class TestCoreMetadataConstructor:
def test_default(self, isolation):
builder = SdistBuilder(str(isolation))
assert builder.core_metadata_constructor is builder.core_metadata_constructor
assert builder.core_metadata_constructor is get_core_metadata_constructors()['2.1']
def test_not_string(self, isolation):
config = {'tool': {'hatch': {'build': {'targets': {'sdist': {'core-metadata-version': 42}}}}}}
builder = SdistBuilder(str(isolation), config=config)
with pytest.raises(
TypeError, match='Field `tool.hatch.build.targets.sdist.core-metadata-version` must be a string'
):
_ = builder.core_metadata_constructor
def test_unknown(self, isolation):
config = {'tool': {'hatch': {'build': {'targets': {'sdist': {'core-metadata-version': '9000'}}}}}}
builder = SdistBuilder(str(isolation), config=config)
with pytest.raises(
ValueError,
match=(
f'Unknown metadata version `9000` for field `tool.hatch.build.targets.sdist.core-metadata-version`. '
f'Available: {", ".join(sorted(get_core_metadata_constructors()))}'
),
):
_ = builder.core_metadata_constructor
class TestConstructSetupPyFile:
def test_default(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0'}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file([]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
)
"""
)
def test_packages(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0'}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_description(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'description': 'foo'}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
description='foo',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_readme(self, helpers, isolation):
config = {
'project': {
'name': 'my__app',
'version': '0.1.0',
'readme': {'content-type': 'text/markdown', 'text': 'test content\n'},
}
}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
long_description='test content\\n',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_authors_name(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'authors': [{'name': 'foo'}]}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
author='foo',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_authors_email(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'authors': [{'email': 'foo@domain'}]}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
author_email='foo@domain',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_authors_name_and_email(self, helpers, isolation):
config = {
'project': {'name': 'my__app', 'version': '0.1.0', 'authors': [{'email': 'bar@domain', 'name': 'foo'}]}
}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
author_email='foo <bar@domain>',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_authors_multiple(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'authors': [{'name': 'foo'}, {'name': 'bar'}]}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
author='foo, bar',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_maintainers_name(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'maintainers': [{'name': 'foo'}]}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
maintainer='foo',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_maintainers_email(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'maintainers': [{'email': 'foo@domain'}]}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
maintainer_email='foo@domain',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_maintainers_name_and_email(self, helpers, isolation):
config = {
'project': {'name': 'my__app', 'version': '0.1.0', 'maintainers': [{'email': 'bar@domain', 'name': 'foo'}]}
}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
maintainer_email='foo <bar@domain>',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_maintainers_multiple(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'maintainers': [{'name': 'foo'}, {'name': 'bar'}]}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
maintainer='foo, bar',
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_classifiers(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'classifiers': ['foo', 'bar']}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
classifiers=[
'bar',
'foo',
],
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_dependencies(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'dependencies': ['foo==1', 'bar==5']}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
install_requires=[
'bar==5',
'foo==1',
],
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_optional_dependencies(self, helpers, isolation):
config = {
'project': {
'name': 'my__app',
'version': '0.1.0',
'optional-dependencies': {
'feature2': ['foo==1; python_version < "3"', 'bar==5'],
'feature1': ['foo==1', 'bar==5; python_version < "3"'],
},
}
}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
extras_require={
'feature1': [
'bar==5; python_version < "3"',
'foo==1',
],
'feature2': [
'bar==5',
'foo==1; python_version < "3"',
],
},
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_scripts(self, helpers, isolation):
config = {'project': {'name': 'my__app', 'version': '0.1.0', 'scripts': {'foo': 'pkg:bar', 'bar': 'pkg:foo'}}}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
entry_points={
'console_scripts': [
'bar = pkg:foo',
'foo = pkg:bar',
],
},
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_gui_scripts(self, helpers, isolation):
config = {
'project': {'name': 'my__app', 'version': '0.1.0', 'gui-scripts': {'foo': 'pkg:bar', 'bar': 'pkg:foo'}}
}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
entry_points={
'gui_scripts': [
'bar = pkg:foo',
'foo = pkg:bar',
],
},
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_entry_points(self, helpers, isolation):
config = {
'project': {
'name': 'my__app',
'version': '0.1.0',
'entry-points': {
'foo': {'bar': 'pkg:foo', 'foo': 'pkg:bar'},
'bar': {'foo': 'pkg:bar', 'bar': 'pkg:foo'},
},
}
}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
entry_points={
'bar': [
'bar = pkg:foo',
'foo = pkg:bar',
],
'foo': [
'bar = pkg:foo',
'foo = pkg:bar',
],
},
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
def test_all(self, helpers, isolation):
config = {
'project': {
'name': 'my__app',
'version': '0.1.0',
'description': 'foo',
'readme': {'content-type': 'text/markdown', 'text': 'test content\n'},
'authors': [{'email': 'bar@domain', 'name': 'foo'}],
'maintainers': [{'email': 'bar@domain', 'name': 'foo'}],
'classifiers': ['foo', 'bar'],
'dependencies': ['foo==1', 'bar==5'],
'optional-dependencies': {
'feature2': ['foo==1; python_version < "3"', 'bar==5'],
'feature1': ['foo==1', 'bar==5; python_version < "3"'],
'feature3': [],
},
'scripts': {'foo': 'pkg:bar', 'bar': 'pkg:foo'},
'gui-scripts': {'foo': 'pkg:bar', 'bar': 'pkg:foo'},
'entry-points': {
'foo': {'bar': 'pkg:foo', 'foo': 'pkg:bar'},
'bar': {'foo': 'pkg:bar', 'bar': 'pkg:foo'},
},
}
}
builder = SdistBuilder(str(isolation), config=config)
assert builder.construct_setup_py_file(['my_app', os.path.join('my_app', 'pkg')]) == helpers.dedent(
"""
# -*- coding: utf-8 -*-
from setuptools import setup
setup(
name='my-app',
version='0.1.0',
description='foo',
long_description='test content\\n',
author_email='foo <bar@domain>',
maintainer_email='foo <bar@domain>',
classifiers=[
'bar',
'foo',
],
install_requires=[
'bar==5',
'foo==1',
],
extras_require={
'feature1': [
'bar==5; python_version < "3"',
'foo==1',
],
'feature2': [
'bar==5',
'foo==1; python_version < "3"',
],
},
entry_points={
'console_scripts': [
'bar = pkg:foo',
'foo = pkg:bar',
],
'gui_scripts': [
'bar = pkg:foo',
'foo = pkg:bar',
],
'bar': [
'bar = pkg:foo',
'foo = pkg:bar',
],
'foo': [
'bar = pkg:foo',
'foo = pkg:bar',
],
},
packages=[
'my_app',
'my_app.pkg',
],
)
"""
)
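# Standalone sketch of the behaviour exercised above (assumed usage, not part
# of the test suite): SdistBuilder renders a legacy setup.py from the project
# metadata, so a config carrying only a name and version yields the minimal
# script checked in test_default.
#
#   builder = SdistBuilder('.', config={'project': {'name': 'demo', 'version': '1.0'}})
#   print(builder.construct_setup_py_file(['demo']))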
class TestBuildStandard:
def test_default(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version']},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {'targets': {'sdist': {'versions': ['standard']}}},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
build_path = project_path / 'dist'
with project_path.as_cwd():
artifacts = list(builder.build())
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_default', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime == get_reproducible_timestamp()
def test_default_no_reproducible(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version']},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {'targets': {'sdist': {'versions': ['standard'], 'reproducible': False}}},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
build_path = project_path / 'dist'
build_path.mkdir()
with project_path.as_cwd():
artifacts = list(builder.build(str(build_path)))
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_default', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime != get_reproducible_timestamp()
def test_default_support_legacy(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version']},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {'targets': {'sdist': {'versions': ['standard'], 'support-legacy': True}}},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
build_path = project_path / 'dist'
build_path.mkdir()
with project_path.as_cwd():
artifacts = list(builder.build(str(build_path)))
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_default_support_legacy', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
def test_default_build_script_artifacts(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
vcs_ignore_file = project_path / '.gitignore'
vcs_ignore_file.write_text('*.pyc\n*.so\n*.h\n')
build_script = project_path / 'build.py'
build_script.write_text(
helpers.dedent(
"""
import pathlib
from hatchling.builders.hooks.plugin.interface import BuildHookInterface
class CustomHook(BuildHookInterface):
def initialize(self, version, build_data):
pathlib.Path('my_app', 'lib.so').touch()
pathlib.Path('my_app', 'lib.h').touch()
"""
)
)
config = {
'project': {'name': 'my__app', 'dynamic': ['version']},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {
'targets': {'sdist': {'versions': ['standard']}},
'artifacts': ['my_app/lib.so'],
'hooks': {'custom': {'path': 'build.py'}},
},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
build_path = project_path / 'dist'
build_path.mkdir()
with project_path.as_cwd():
artifacts = list(builder.build(str(build_path)))
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_default_build_script_artifacts', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
def test_include_project_file(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version'], 'readme': 'README.md'},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {
'targets': {'sdist': {'versions': ['standard'], 'include': ['my_app/', 'pyproject.toml']}}
},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
build_path = project_path / 'dist'
with project_path.as_cwd():
artifacts = list(builder.build())
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_include', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime == get_reproducible_timestamp()
def test_project_file_always_included(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version'], 'readme': 'README.md'},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {
'targets': {
'sdist': {'versions': ['standard'], 'include': ['my_app/'], 'exclude': ['pyproject.toml']},
},
},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
# Ensure that only the root project file is forcibly included
(project_path / 'my_app' / 'pyproject.toml').touch()
build_path = project_path / 'dist'
with project_path.as_cwd():
artifacts = list(builder.build())
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_include', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime == get_reproducible_timestamp()
def test_include_readme(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version'], 'readme': 'README.md'},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {'targets': {'sdist': {'versions': ['standard'], 'include': ['my_app/', 'README.md']}}},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
build_path = project_path / 'dist'
with project_path.as_cwd():
artifacts = list(builder.build())
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_include', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime == get_reproducible_timestamp()
def test_readme_always_included(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version'], 'readme': 'README.md'},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {
'targets': {
'sdist': {'versions': ['standard'], 'include': ['my_app/'], 'exclude': ['README.md']},
},
},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
# Ensure that only the desired readme is forcibly included
(project_path / 'my_app' / 'README.md').touch()
build_path = project_path / 'dist'
with project_path.as_cwd():
artifacts = list(builder.build())
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_include', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime == get_reproducible_timestamp()
def test_include_license_files(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version'], 'readme': 'README.md'},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {'targets': {'sdist': {'versions': ['standard'], 'include': ['my_app/', 'LICENSE.txt']}}},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
build_path = project_path / 'dist'
with project_path.as_cwd():
artifacts = list(builder.build())
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_include', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime == get_reproducible_timestamp()
def test_license_files_always_included(self, hatch, helpers, temp_dir):
project_name = 'My App'
with temp_dir.as_cwd():
result = hatch('new', project_name)
assert result.exit_code == 0, result.output
project_path = temp_dir / 'my-app'
config = {
'project': {'name': 'my__app', 'dynamic': ['version'], 'readme': 'README.md'},
'tool': {
'hatch': {
'version': {'path': 'my_app/__about__.py'},
'build': {
'targets': {
'sdist': {'versions': ['standard'], 'include': ['my_app/'], 'exclude': ['LICENSE.txt']},
},
},
},
},
}
builder = SdistBuilder(str(project_path), config=config)
# Ensure that only the desired readme is forcibly included
(project_path / 'my_app' / 'LICENSE.txt').touch()
build_path = project_path / 'dist'
with project_path.as_cwd():
artifacts = list(builder.build())
assert len(artifacts) == 1
expected_artifact = artifacts[0]
build_artifacts = list(build_path.iterdir())
assert len(build_artifacts) == 1
assert expected_artifact == str(build_artifacts[0])
assert expected_artifact == str(build_path / f'{builder.project_id}.tar.gz')
extraction_directory = temp_dir / '_archive'
extraction_directory.mkdir()
with tarfile.open(str(expected_artifact), 'r:gz') as tar_archive:
tar_archive.extractall(str(extraction_directory))
expected_files = helpers.get_template_files(
'sdist.standard_include', project_name, relative_root=builder.project_id
)
helpers.assert_files(extraction_directory, expected_files, check_contents=True)
stat = os.stat(str(extraction_directory / builder.project_id / 'PKG-INFO'))
assert stat.st_mtime == get_reproducible_timestamp()
| 35.022748 | 120 | 0.507976 | 3,719 | 38,490 | 5.027696 | 0.052433 | 0.04332 | 0.027917 | 0.033373 | 0.923093 | 0.91336 | 0.902503 | 0.896513 | 0.882822 | 0.851268 | 0 | 0.009494 | 0.354144 | 38,490 | 1,098 | 121 | 35.054645 | 0.742669 | 0.004495 | 0 | 0.667838 | 0 | 0.003515 | 0.154046 | 0.026359 | 0 | 0 | 0 | 0 | 0.163445 | 1 | 0.063269 | false | 0 | 0.012302 | 0 | 0.082601 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f345ebde49099357c68eb4601fc1103022f1c56 | 70 | py | Python | fetchmesh/bgp/__init__.py | SmartMonitoringSchemes/fetchmesh | 139a68a380786cca5caba33f16f4ff482477d66a | [
"MIT"
] | null | null | null | fetchmesh/bgp/__init__.py | SmartMonitoringSchemes/fetchmesh | 139a68a380786cca5caba33f16f4ff482477d66a | [
"MIT"
] | 5 | 2021-08-01T18:11:07.000Z | 2022-02-01T18:42:10.000Z | fetchmesh/bgp/__init__.py | SmartMonitoringSchemes/fetchmesh | 139a68a380786cca5caba33f16f4ff482477d66a | [
"MIT"
] | null | null | null | from .asnames import *
from .asndb import *
from .collectors import *
| 17.5 | 25 | 0.742857 | 9 | 70 | 5.777778 | 0.555556 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 70 | 3 | 26 | 23.333333 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f3eb94eeb4e87edff113004c29917ae86989ffe | 1,540 | py | Python | django_soc_lite/HTML_Escape.py | threatequation/django-soc-lite | 3df27d06ed63e5a382f5ab4e387c51f611b48f81 | [
"0BSD"
] | null | null | null | django_soc_lite/HTML_Escape.py | threatequation/django-soc-lite | 3df27d06ed63e5a382f5ab4e387c51f611b48f81 | [
"0BSD"
] | null | null | null | django_soc_lite/HTML_Escape.py | threatequation/django-soc-lite | 3df27d06ed63e5a382f5ab4e387c51f611b48f81 | [
"0BSD"
] | null | null | null | """custom escaping methods"""
def XSSEncode(maliciouscode):
    """custom xss containing input escaper"""
    # Normalise percent-encoded forms to their literal characters first, so
    # both spellings of each dangerous character are escaped the same way.
    percent_decode = (
        ('%22', '"'), ('%27', "'"), ('%2f', '/'), ('%2F', '/'),
        ('%3C', '<'), ('%3c', '<'), ('%3E', '>'), ('%3e', '>'),
        ('%3B', ';'), ('%3b', ';'), ('%26', '&'),
    )
    for code, char in percent_decode:
        maliciouscode = maliciouscode.replace(code, char)
    # Escape in a single pass: chained str.replace calls would re-escape the
    # '&' and ';' characters inside the entities substituted earlier.
    html_code = {
        '"': '&quot;', "'": '&#39;', '/': '&#x2F;',
        '<': '&lt;', '>': '&gt;', ';': '&end;', '&': '&amp;',
    }
    maliciouscode = ''.join(html_code.get(char, char) for char in maliciouscode)
    import re
    maliciouscode = re.sub(' +', ' ', maliciouscode)
    return maliciouscode
def CommandEscape(maliciouscode):
    """custom command input escaper"""
    # Applies the same substitution table as XSSEncode; kept as a separate
    # entry point for command-injection style input.
    return XSSEncode(maliciouscode)
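# Illustrative behaviour (a sketch based on the table above, not original
# code or documentation):
#   XSSEncode('<b>"hi"</b>')        -> '&lt;b&gt;&quot;hi&quot;&lt;&#x2F;b&gt;'
#   XSSEncode('a%3Cscript%3E')      -> 'a&lt;script&gt;'
#   CommandEscape('rm -rf /; echo') -> 'rm -rf &#x2F;&end; echo'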
def UrlEncode(url):
    """flatten a URL into a dash-delimited token"""
    url_code = (
        (':', '-'),
        ('//', '--'),
        ('/', '-'),
        ('.', '-'),
    )
    for code in url_code:
        url = url.replace(code[0], code[1])
return url | 32.765957 | 63 | 0.391558 | 139 | 1,540 | 4.294964 | 0.280576 | 0.080402 | 0.053601 | 0.065327 | 0.770519 | 0.713568 | 0.713568 | 0.713568 | 0.713568 | 0.713568 | 0 | 0.042142 | 0.26039 | 1,540 | 47 | 64 | 32.765957 | 0.482002 | 0.055844 | 0 | 0.7 | 0 | 0 | 0.19319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0 | 0.05 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2a2d7edaa3c299d94cf6a706ef02621335a07e2 | 34 | py | Python | pyspark_utilities/spark_udfs/__init__.py | zaksamalik/pyspark-utilities | da9f843bdf03658d58d1dbc8d5f3067bd6702494 | [
"MIT"
] | 9 | 2020-03-10T10:31:06.000Z | 2021-12-03T03:43:00.000Z | pyspark_utilities/spark_udfs/__init__.py | zaksamalik/pyspark-utilities | da9f843bdf03658d58d1dbc8d5f3067bd6702494 | [
"MIT"
] | null | null | null | pyspark_utilities/spark_udfs/__init__.py | zaksamalik/pyspark-utilities | da9f843bdf03658d58d1dbc8d5f3067bd6702494 | [
"MIT"
] | 2 | 2020-11-14T15:13:43.000Z | 2021-12-22T11:33:03.000Z | from .spark_udfs import SparkUDFs
| 17 | 33 | 0.852941 | 5 | 34 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
960be7cf0370b3714d9693080ef55b70bf5c63c9 | 2,212 | py | Python | migrations/versions/2018_12_03_fix_migrations.py | AlexKouzy/ethnicity-facts-and-figures-publisher | 18ab2495a8633f585e18e607c7f75daa564a053d | [
"MIT"
] | 1 | 2021-10-06T13:48:36.000Z | 2021-10-06T13:48:36.000Z | migrations/versions/2018_12_03_fix_migrations.py | AlexKouzy/ethnicity-facts-and-figures-publisher | 18ab2495a8633f585e18e607c7f75daa564a053d | [
"MIT"
] | 116 | 2018-11-02T17:20:47.000Z | 2022-02-09T11:06:22.000Z | migrations/versions/2018_12_03_fix_migrations.py | racedisparityaudit/rd_cms | a12f0e3f5461cc41eed0077ed02e11efafc5dd76 | [
"MIT"
] | 2 | 2018-11-09T16:47:35.000Z | 2020-04-09T13:06:48.000Z | """fix migrations by incorporating outstanding changes
Revision ID: 2018_12_03_fix_migrations
Revises: 2018_11_28_drop_contact_details
Create Date: 2018-12-03 10:51:52.822365
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = "2018_12_03_fix_migrations"
down_revision = "2018_11_28_drop_contact_details"
branch_labels = None
depends_on = None
def upgrade():
op.create_foreign_key(
"data_source_frequency_of_release_id_fkey",
"data_source",
"frequency_of_release",
["frequency_of_release_id"],
["id"],
)
op.alter_column("dimension_chart", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=False)
op.alter_column("dimension_table", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=False)
op.alter_column(
"ethnicity_in_classification", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=False
)
op.alter_column("ethnicity_in_classification", "ethnicity_id", existing_type=sa.INTEGER(), nullable=False)
op.alter_column(
"parent_ethnicity_in_classification", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=False
)
op.alter_column("parent_ethnicity_in_classification", "ethnicity_id", existing_type=sa.INTEGER(), nullable=False)
def downgrade():
op.alter_column("parent_ethnicity_in_classification", "ethnicity_id", existing_type=sa.INTEGER(), nullable=True)
op.alter_column(
"parent_ethnicity_in_classification", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=True
)
op.alter_column("ethnicity_in_classification", "ethnicity_id", existing_type=sa.INTEGER(), nullable=True)
op.alter_column(
"ethnicity_in_classification", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=True
)
op.alter_column("dimension_table", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=True)
op.alter_column("dimension_chart", "classification_id", existing_type=sa.VARCHAR(length=255), nullable=True)
op.drop_constraint("data_source_frequency_of_release_id_fkey", "data_source", type_="foreignkey")
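# Applying this revision from the command line (standard Alembic usage, not
# part of this file):
#   alembic upgrade 2018_12_03_fix_migrations
#   alembic downgrade 2018_11_28_drop_contact_details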
| 43.372549 | 119 | 0.764467 | 285 | 2,212 | 5.568421 | 0.242105 | 0.05293 | 0.098299 | 0.120983 | 0.809074 | 0.771267 | 0.7385 | 0.7385 | 0.7385 | 0.68305 | 0 | 0.039095 | 0.121157 | 2,212 | 50 | 120 | 44.24 | 0.777263 | 0.095841 | 0 | 0.111111 | 0 | 0 | 0.351908 | 0.202309 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
82511270d5450862f4f276842012c9b0221f3782 | 96 | py | Python | venv/lib/python3.8/site-packages/rope/refactor/occurrences.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/rope/refactor/occurrences.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/rope/refactor/occurrences.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/af/90/f7/f89d139c04c31ed7354f13cdac5ec00282c4d4306d56ec430f55165137 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
82c0a2e9e7007de4b7524c813fd6111e126c677f | 120 | py | Python | spidermon/python/__init__.py | heylouiz/spidermon | 3ae2c46d1cf5b46efb578798b881264be3e68394 | [
"BSD-3-Clause"
] | 2 | 2019-10-03T16:47:11.000Z | 2022-02-22T11:56:02.000Z | spidermon/python/__init__.py | heylouiz/spidermon | 3ae2c46d1cf5b46efb578798b881264be3e68394 | [
"BSD-3-Clause"
] | 23 | 2019-05-30T20:27:38.000Z | 2019-08-20T07:23:09.000Z | spidermon/python/__init__.py | heylouiz/spidermon | 3ae2c46d1cf5b46efb578798b881264be3e68394 | [
"BSD-3-Clause"
] | 1 | 2022-03-24T03:01:19.000Z | 2022-03-24T03:01:19.000Z | from __future__ import absolute_import
from .interpreter import Interpreter
from . import factory
from . import schemas
| 24 | 38 | 0.841667 | 15 | 120 | 6.4 | 0.466667 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 120 | 4 | 39 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81a91d8c244dd48b0ff6383bf1ae09b584aa732c | 268 | py | Python | tests/test_imports.py | team23/django_textformat | e195f56a5d12f6ceb70855da1164251fc43d1c9d | [
"BSD-3-Clause"
] | 2 | 2016-03-22T16:59:26.000Z | 2016-07-15T09:39:31.000Z | tests/test_imports.py | team23/django_textformat | e195f56a5d12f6ceb70855da1164251fc43d1c9d | [
"BSD-3-Clause"
] | 2 | 2016-03-22T16:56:50.000Z | 2016-04-05T08:18:50.000Z | tests/test_imports.py | team23/django_textformat | e195f56a5d12f6ceb70855da1164251fc43d1c9d | [
"BSD-3-Clause"
] | null | null | null | def test_imports():
import django_textformat # noqa
from django_textformat import TextFormatField # noqa
assert TextFormatField is not None
def test_has_version():
import django_textformat
assert django_textformat.__version__.count('.') >= 2
| 22.333333 | 57 | 0.746269 | 31 | 268 | 6.096774 | 0.548387 | 0.338624 | 0.232804 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004608 | 0.190299 | 268 | 11 | 58 | 24.363636 | 0.866359 | 0.033582 | 0 | 0.285714 | 0 | 0 | 0.003906 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | true | 0 | 0.571429 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81e164c90a2050c8f16a6941c46f1a1bab4387eb | 8,934 | py | Python | ogusa/tests/test_SS.py | rickecon/TaxFuncIntegr | 715cc76e3305c00dd64d79521c504bb388c6d87d | [
"CC0-1.0"
] | null | null | null | ogusa/tests/test_SS.py | rickecon/TaxFuncIntegr | 715cc76e3305c00dd64d79521c504bb388c6d87d | [
"CC0-1.0"
] | 1 | 2018-07-02T18:24:17.000Z | 2018-07-02T18:24:17.000Z | ogusa/tests/test_SS.py | rickecon/TaxFuncIntegr | 715cc76e3305c00dd64d79521c504bb388c6d87d | [
"CC0-1.0"
] | 6 | 2016-09-18T01:39:54.000Z | 2020-09-02T12:54:55.000Z | from __future__ import print_function
import pytest
import json
import pickle
import numpy as np
import os
import multiprocessing
from multiprocessing import Process
from dask.distributed import Client
from ogusa import SS, utils
# Define parameters to use for multiprocessing
# client = Client(processes=False)
# # num_workers = int(os.cpu_count()) # not in os on Python 2.7?
# num_workers = multiprocessing.cpu_count()
CUR_PATH = os.path.abspath(os.path.dirname(__file__))
def test_SS_fsolve():
# Test SS.SS_fsolve function. Provide inputs to function and
# ensure that output returned matches what it has been before.
input_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_fsolve_inputs.pkl'))
guesses, params = input_tuple
params = params + (None, 1)
(bssmat, nssmat, chi_params, ss_params, income_tax_params,
iterative_params, small_open_params, client, num_workers) = params
income_tax_params = ('DEP',) + income_tax_params
params = (bssmat, nssmat, chi_params, ss_params, income_tax_params,
iterative_params, small_open_params, client, num_workers)
test_list = SS.SS_fsolve(guesses, params)
expected_list = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_fsolve_outputs.pkl'))
print('outputs = ', np.absolute(np.array(test_list) -
np.array(expected_list)).max())
assert(np.allclose(np.array(test_list), np.array(expected_list)))
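# The remaining tests follow the same regression pattern: unpickle recorded
# inputs, inject the 'DEP' tax-function type into the parameter tuple, run
# the solver, and compare the result to recorded outputs with np.allclose.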
def test_SS_fsolve_reform():
# Test SS.SS_fsolve_reform function. Provide inputs to function and
# ensure that output returned matches what it has been before.
input_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_fsolve_reform_inputs.pkl'))
guesses, params = input_tuple
params = params + (None, 1)
(bssmat, nssmat, chi_params, ss_params, income_tax_params,
iterative_params, factor, small_open_params, client,
num_workers) = params
income_tax_params = ('DEP',) + income_tax_params
params = (bssmat, nssmat, chi_params, ss_params, income_tax_params,
iterative_params, factor, small_open_params, client,
num_workers)
test_list = SS.SS_fsolve_reform(guesses, params)
expected_list = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_fsolve_reform_outputs.pkl'))
assert(np.allclose(np.array(test_list), np.array(expected_list)))
def test_SS_fsolve_reform_baselinespend():
# Test SS.SS_fsolve_reform_baselinespend function. Provide inputs
# to function and ensure that output returned matches what it has
# been before.
input_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_fsolve_reform_baselinespend_inputs.pkl'))
guesses, params = input_tuple
params = params + (None, 1)
(bssmat, nssmat, T_Hss, chi_params, ss_params, income_tax_params,
iterative_params, factor, small_open_params, client,
num_workers) = params
income_tax_params = ('DEP',) + income_tax_params
params = (bssmat, nssmat, T_Hss, chi_params, ss_params,
income_tax_params, iterative_params, factor,
small_open_params, client, num_workers)
test_list = SS.SS_fsolve_reform_baselinespend(guesses, params)
expected_list = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_fsolve_reform_baselinespend_outputs.pkl'))
assert(np.allclose(np.array(test_list), np.array(expected_list)))
def test_SS_solver():
# Test SS.SS_solver function. Provide inputs to function and
# ensure that output returned matches what it has been before.
input_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_solver_inputs.pkl'))
(b_guess_init, n_guess_init, rss, T_Hss, factor_ss, Yss, params,
baseline, fsolve_flag, baseline_spending) = input_tuple
(bssmat, nssmat, chi_params, ss_params, income_tax_params,
iterative_params, small_open_params) = params
income_tax_params = ('DEP',) + income_tax_params
params = (bssmat, nssmat, chi_params, ss_params, income_tax_params,
iterative_params, small_open_params)
test_dict = SS.SS_solver(
b_guess_init, n_guess_init, rss, T_Hss, factor_ss, Yss, params,
baseline, fsolve_flag, baseline_spending)
expected_dict = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/SS_solver_outputs.pkl'))
for k, v in expected_dict.items():
assert(np.allclose(test_dict[k], v))
def test_inner_loop():
# Test SS.inner_loop function. Provide inputs to function and
# ensure that output returned matches what it has been before.
input_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/inner_loop_inputs.pkl'))
(outer_loop_vars, params, baseline, baseline_spending) = input_tuple
ss_params, income_tax_params, chi_params, small_open_params = params
income_tax_params = ('DEP',) + income_tax_params
params = (ss_params, income_tax_params, chi_params,
small_open_params)
test_tuple = SS.inner_loop(
outer_loop_vars, params, baseline, baseline_spending)
expected_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/inner_loop_outputs.pkl'))
for i, v in enumerate(expected_tuple):
assert(np.allclose(test_tuple[i], v))
def test_euler_equation_solver():
# Test SS.inner_loop function. Provide inputs to function and
# ensure that output returned matches what it has been before.
input_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/euler_eqn_solver_inputs.pkl'))
(guesses, params) = input_tuple
(r, w, T_H, factor, j, J, S, beta, sigma, ltilde, g_y, g_n_ss,
tau_payroll, retire, mean_income_data, h_wealth, p_wealth,
m_wealth, b_ellipse, upsilon, j, chi_b, chi_n, tau_bq, rho, lambdas,
omega_SS, e, analytical_mtrs, etr_params, mtrx_params,
mtry_params) = params
tax_func_type = 'DEP'
params = (r, w, T_H, factor, j, J, S, beta, sigma, ltilde, g_y,
g_n_ss, tau_payroll, retire, mean_income_data, h_wealth,
p_wealth, m_wealth, b_ellipse, upsilon, j, chi_b, chi_n,
tau_bq, rho, lambdas, omega_SS, e, tax_func_type,
analytical_mtrs, etr_params, mtrx_params, mtry_params)
test_list = SS.euler_equation_solver(guesses, params)
expected_list = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/euler_eqn_solver_outputs.pkl'))
assert(np.allclose(np.array(test_list), np.array(expected_list)))
def test_create_steady_state_parameters():
# Test that SS parameters creates same objects with same inputs.
input_dict = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/create_params_inputs.pkl'))
input_dict['tax_func_type'] = 'DEP'
test_tuple = SS.create_steady_state_parameters(**input_dict)
expected_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data/create_params_outputs.pkl'))
(income_tax_params, ss_params, iterative_params, chi_params,
small_open_params) = expected_tuple
income_tax_params = ('DEP', ) + income_tax_params
expected_tuple = (income_tax_params, ss_params, iterative_params,
chi_params, small_open_params)
for i, v in enumerate(expected_tuple):
for i2, v2 in enumerate(v):
try:
assert(all(test_tuple[i][i2] == v2))
except ValueError:
assert((test_tuple[i][i2] == v2).all())
except TypeError:
assert(test_tuple[i][i2] == v2)
@pytest.mark.parametrize('input_path,expected_path',
[('run_SS_open_unbal_inputs.pkl',
'run_SS_open_unbal_outputs.pkl'),
('run_SS_closed_balanced_inputs.pkl',
'run_SS_closed_balanced_outputs.pkl')],
ids=['Open, Unbalanced', 'Closed Balanced'])
def test_run_SS(input_path, expected_path):
# Test SS.run_SS function. Provide inputs to function and
# ensure that output returned matches what it has been before.
input_tuple = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data', input_path))
(income_tax_params, ss_params, iterative_params, chi_params,
small_open_params, baseline, baseline_spending, baseline_dir) =\
input_tuple
income_tax_params = ('DEP',) + income_tax_params
test_dict = SS.run_SS(
income_tax_params, ss_params, iterative_params, chi_params,
small_open_params, baseline, baseline_spending, baseline_dir)
expected_dict = utils.safe_read_pickle(
os.path.join(CUR_PATH, 'test_io_data', expected_path))
for k, v in expected_dict.items():
assert(np.allclose(test_dict[k], v))
| 43.794118 | 90 | 0.705507 | 1,266 | 8,934 | 4.619273 | 0.134281 | 0.043092 | 0.071819 | 0.051984 | 0.804549 | 0.781977 | 0.766074 | 0.74145 | 0.707934 | 0.707934 | 0 | 0.001819 | 0.200246 | 8,934 | 203 | 91 | 44.009852 | 0.816655 | 0.12514 | 0 | 0.342282 | 0 | 0 | 0.102913 | 0.089439 | 0 | 0 | 0 | 0 | 0.067114 | 1 | 0.053691 | false | 0 | 0.067114 | 0 | 0.120805 | 0.013423 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4e4bb583fc0e23cd20d9b5f6fa6cf0aa2858e4c | 8,658 | py | Python | Dragon/python/dragon/vm/caffe/layers/loss.py | awesome-archive/Dragon | b35f9320909d07d138c2f6b345a4c24911f7c521 | [
"BSD-2-Clause"
] | null | null | null | Dragon/python/dragon/vm/caffe/layers/loss.py | awesome-archive/Dragon | b35f9320909d07d138c2f6b345a4c24911f7c521 | [
"BSD-2-Clause"
] | null | null | null | Dragon/python/dragon/vm/caffe/layers/loss.py | awesome-archive/Dragon | b35f9320909d07d138c2f6b345a4c24911f7c521 | [
"BSD-2-Clause"
] | null | null | null | # ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
"""The Implementation of the data layers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import dragon
from ..layer import Layer
class SoftmaxWithLossLayer(Layer):
"""The implementation of ``SoftmaxWithLossLayer``.
Parameters
----------
axis : int
The axis of softmax. Refer `SoftmaxParameter.axis`_.
ignore_label : int
The label id to ignore. Refer `LossParameter.ignore_label`_.
normalization : NormalizationMode
The normalization. Refer `LossParameter.normalization`_.
normalize : boolean
Whether to normalize. Refer `LossParameter.normalize`_.
"""
def __init__(self, LayerParameter):
super(SoftmaxWithLossLayer, self).__init__(LayerParameter)
param = LayerParameter.loss_param
softmax_param = LayerParameter.softmax_param
norm_mode = {0: 'FULL', 1: 'VALID', 2: 'BATCH_SIZE', 3: 'NONE', 4: 'UNIT'}
normalization = 'VALID'
if param.HasField('normalize'):
if not param.normalize: normalization = 'BATCH_SIZE'
else:
normalization = norm_mode[param.normalization]
self.arguments = {
'axis': softmax_param.axis,
'normalization': normalization,
'ignore_labels': [param.ignore_label]
if param.HasField('ignore_label') else [],
}
def LayerSetup(self, bottom):
loss = dragon.ops.SparseSoftmaxCrossEntropy(bottom, **self.arguments)
if self._loss_weight is not None: loss *= self._loss_weight
return loss
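# For reference (a sketch of the Caffe LossParameter semantics assumed here):
# FULL divides the loss by the full outer count, VALID by the number of
# non-ignored labels, BATCH_SIZE by the leading batch dimension, and NONE
# applies no normalization; UNIT appears to be a Dragon-specific extension.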
class SigmoidCrossEntropyLossLayer(Layer):
"""The implementation of ``SigmoidCrossEntropyLossLayer``.
Parameters
----------
normalization : NormalizationMode
The normalization. Refer `LossParameter.normalization`_.
normalize : boolean
Whether to normalize. Refer `LossParameter.normalize`_.
"""
def __init__(self, LayerParameter):
super(SigmoidCrossEntropyLossLayer, self).__init__(LayerParameter)
param = LayerParameter.loss_param
norm_mode = {0: 'FULL', 1: 'VALID', 2: 'BATCH_SIZE', 3: 'NONE', 4: 'UNIT'}
normalization = 'VALID'
if param.HasField('normalize'):
if not param.normalize: normalization = 'BATCH_SIZE'
else: normalization = norm_mode[param.normalization]
self.arguments = {'normalization': normalization}
def LayerSetup(self, bottom):
loss = dragon.ops.SigmoidCrossEntropy(bottom, **self.arguments)
if self._loss_weight is not None: loss *= self._loss_weight
return loss
class L2LossLayer(Layer):
"""The implementation of ``L2LossLayer``.
Parameters
----------
normalization : NormalizationMode
The normalization. Refer `LossParameter.normalization`_.
normalize : boolean
Whether to normalize. Refer `LossParameter.normalize`_.
"""
def __init__(self, LayerParameter):
super(L2LossLayer, self).__init__(LayerParameter)
param = LayerParameter.loss_param
norm_mode = {0: 'FULL', 1: 'BATCH_SIZE', 2: 'BATCH_SIZE', 3: 'NONE'}
normalization = 'BATCH_SIZE'
if param.HasField('normalize'):
if param.normalize: normalization = 'FULL'
else: normalization = norm_mode[param.normalization]
self.arguments = {'normalization': normalization}
def LayerSetup(self, bottom):
loss = dragon.ops.L2Loss(bottom, **self.arguments)
if self._loss_weight is not None: loss *= self._loss_weight
return loss
class SmoothL1LossLayer(Layer):
"""The implementation of ``SmoothL1LossLayer``.
Parameters
----------
sigma : float
The sigma. Refer `SmoothL1LossParameter.sigma`_.
normalization : NormalizationMode
The normalization. Refer `LossParameter.normalization`_.
normalize : boolean
Whether to normalize. Refer `LossParameter.normalize`_.
"""
def __init__(self, LayerParameter):
super(SmoothL1LossLayer, self).__init__(LayerParameter)
param = LayerParameter.loss_param
smooth_l1_param = LayerParameter.smooth_l1_loss_param
norm_mode = {0: 'FULL', 1: 'BATCH_SIZE', 2: 'BATCH_SIZE', 3: 'NONE'}
normalization = 'BATCH_SIZE'
if param.HasField('normalize'):
if param.normalize: normalization = 'FULL'
else: normalization = norm_mode[param.normalization]
sigma2 = smooth_l1_param.sigma * smooth_l1_param.sigma
self.arguments = {
'beta': float(1. / sigma2),
'normalization': normalization,
}
def LayerSetup(self, bottom):
loss = dragon.ops.SmoothL1Loss(bottom, **self.arguments)
if self._loss_weight is not None: loss *= self._loss_weight
return loss
class SigmoidWithFocalLossLayer(Layer):
"""The implementation of ``SigmoidWithFocalLossLayer``.
Parameters
----------
axis : int
The axis of softmax. Refer `SoftmaxParameter.axis`_.
alpha : float
The scale on the rare class. Refer `FocalLossParameter.alpha`_.
gamma : float
The exponential decay. Refer `FocalLossParameter.gamma`_.
neg_id : int
The negative id. Refer `FocalLossParameter.neg_id`_.
normalization : NormalizationMode
The normalization. Refer `LossParameter.normalization`_.
normalize : boolean
Whether to normalize. Refer `LossParameter.normalize`_.
"""
def __init__(self, LayerParameter):
super(SigmoidWithFocalLossLayer, self).__init__(LayerParameter)
param = LayerParameter.loss_param
softmax_param = LayerParameter.softmax_param
focal_loss_param = LayerParameter.focal_loss_param
norm_mode = {0: 'FULL', 1: 'VALID', 2: 'BATCH_SIZE', 3: 'NONE', 4: 'UNIT'}
normalization = 'VALID'
if param.HasField('normalize'):
if not param.normalize: normalization = 'BATCH_SIZE'
else: normalization = norm_mode[param.normalization]
self.arguments = {
'axis': softmax_param.axis,
'normalization': normalization,
'alpha': float(focal_loss_param.alpha),
'gamma': float(focal_loss_param.gamma),
'neg_id': focal_loss_param.neg_id,
}
def LayerSetup(self, bottom):
loss = dragon.ops.SigmoidFocalLoss(bottom, **self.arguments)
if self._loss_weight is not None: loss *= self._loss_weight
return loss
class SoftmaxWithFocalLossLayer(Layer):
"""The implementation of ``SoftmaxWithFocalLossLayer``.
Parameters
----------
axis : int
The axis of softmax. Refer `SoftmaxParameter.axis`_.
alpha : float
The scale on the rare class. Refer `FocalLossParameter.alpha`_.
gamma : float
The exponential decay. Refer `FocalLossParameter.gamma`_.
neg_id : int
The negative id. Refer `FocalLossParameter.neg_id`_.
normalization : NormalizationMode
The normalization. Refer `LossParameter.normalization`_.
normalize : boolean
Whether to normalize. Refer `LossParameter.normalize`_.
"""
def __init__(self, LayerParameter):
super(SoftmaxWithFocalLossLayer, self).__init__(LayerParameter)
param = LayerParameter.loss_param
softmax_param = LayerParameter.softmax_param
focal_loss_param = LayerParameter.focal_loss_param
norm_mode = {0: 'FULL', 1: 'VALID', 2: 'BATCH_SIZE', 3: 'NONE', 4: 'UNIT'}
normalization = 'VALID'
if param.HasField('normalize'):
if not param.normalize: normalization = 'BATCH_SIZE'
else: normalization = norm_mode[param.normalization]
self.arguments = {
'axis': softmax_param.axis,
'normalization': normalization,
'ignore_labels': [param.ignore_label] if param.HasField('ignore_label') else [],
'alpha': float(focal_loss_param.alpha),
'gamma': float(focal_loss_param.gamma),
'neg_id': focal_loss_param.neg_id,
}
def LayerSetup(self, bottom):
loss = dragon.ops.SoftmaxFocalLoss(bottom, **self.arguments)
if self._loss_weight is not None: loss *= self._loss_weight
return loss | 37.318966 | 92 | 0.649573 | 864 | 8,658 | 6.283565 | 0.137731 | 0.028182 | 0.030945 | 0.026524 | 0.791122 | 0.783754 | 0.783754 | 0.767913 | 0.767913 | 0.756493 | 0 | 0.007683 | 0.23331 | 8,658 | 232 | 93 | 37.318966 | 0.810184 | 0.311042 | 0 | 0.70339 | 0 | 0 | 0.085264 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.101695 | false | 0 | 0.042373 | 0 | 0.245763 | 0.008475 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4fd718e3889f57aa812056c2008c07b14cb2431 | 23 | py | Python | loci/__init__.py | t-brandt/acorns-adi | 6645fae7878a1801beeda0c6604b01e61f37ca15 | [
"BSD-2-Clause"
] | 1 | 2016-10-30T16:29:51.000Z | 2016-10-30T16:29:51.000Z | loci/__init__.py | t-brandt/acorns-adi | 6645fae7878a1801beeda0c6604b01e61f37ca15 | [
"BSD-2-Clause"
] | null | null | null | loci/__init__.py | t-brandt/acorns-adi | 6645fae7878a1801beeda0c6604b01e61f37ca15 | [
"BSD-2-Clause"
] | null | null | null | from loci import loci
| 7.666667 | 21 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217391 | 23 | 2 | 22 | 11.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f20513ffe1da16c904bfe793fb24c8e8230fa4c1 | 4,191 | py | Python | pyEX/alternative/alternative.py | cjwang/pyEX | 1b5f40f80110afaa4809ea48fac067033c7bdf89 | [
"Apache-2.0"
] | 1 | 2020-10-11T07:05:49.000Z | 2020-10-11T07:05:49.000Z | pyEX/alternative/alternative.py | cjwang/pyEX | 1b5f40f80110afaa4809ea48fac067033c7bdf89 | [
"Apache-2.0"
] | null | null | null | pyEX/alternative/alternative.py | cjwang/pyEX | 1b5f40f80110afaa4809ea48fac067033c7bdf89 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import pandas as pd
from ..common import _expire, _getJson, _raiseIfNotStr, _strOrDate, _reindex, _toDatetime
def crypto(token='', version='', filter=''):
'''This will return an array of quotes for all Cryptocurrencies supported by the IEX API. Each element is a standard quote object with four additional keys.
https://iexcloud.io/docs/api/#crypto
Args:
token (string); Access token
version (string); API version
filter (string); filters: https://iexcloud.io/docs/api/#filter-results
Returns:
dict: result
'''
return _getJson('stock/market/crypto/', token, version, filter)
def cryptoDF(token='', version='', filter=''):
'''This will return an array of quotes for all Cryptocurrencies supported by the IEX API. Each element is a standard quote object with four additional keys.
https://iexcloud.io/docs/api/#crypto
Args:
token (string); Access token
version (string); API version
filter (string); filters: https://iexcloud.io/docs/api/#filter-results
Returns:
DataFrame: result
'''
df = pd.DataFrame(crypto(token, version, filter))
_toDatetime(df)
_reindex(df, 'symbol')
return df
def sentiment(symbol, type='daily', date=None, token='', version='', filter=''):
'''This endpoint provides social sentiment data from StockTwits. Data can be viewed as a daily value, or by minute for a given date.
https://iexcloud.io/docs/api/#social-sentiment
Continuous
Args:
symbol (string); Ticker to request
type (string); 'daily' or 'minute'
date (string); date in YYYYMMDD or datetime
token (string); Access token
version (string); API version
filter (string); filters: https://iexcloud.io/docs/api/#filter-results
Returns:
dict: result
'''
_raiseIfNotStr(symbol)
if date:
date = _strOrDate(date)
return _getJson('stock/{symbol}/sentiment/{type}/{date}'.format(symbol=symbol, type=type, date=date), token, version, filter)
return _getJson('stock/{symbol}/sentiment/{type}/'.format(symbol=symbol, type=type), token, version, filter)
def sentimentDF(symbol, type='daily', date=None, token='', version='', filter=''):
'''This endpoint provides social sentiment data from StockTwits. Data can be viewed as a daily value, or by minute for a given date.
https://iexcloud.io/docs/api/#social-sentiment
Continuous
Args:
symbol (string); Ticker to request
type (string); 'daily' or 'minute'
date (string); date in YYYYMMDD or datetime
token (string); Access token
version (string); API version
filter (string); filters: https://iexcloud.io/docs/api/#filter-results
Returns:
DataFrame: result
'''
ret = sentiment(symbol, type, date, token, version, filter)
if type == 'daily':
ret = [ret]
df = pd.DataFrame(ret)
_toDatetime(df)
return df
@_expire(hour=1)
def ceoCompensation(symbol, token='', version='', filter=''):
'''This endpoint provides CEO compensation for a company by symbol.
https://iexcloud.io/docs/api/#ceo-compensation
1am daily
Args:
symbol (string); Ticker to request
token (string); Access token
version (string); API version
filter (string); filters: https://iexcloud.io/docs/api/#filter-results
Returns:
dict: result
'''
_raiseIfNotStr(symbol)
return _getJson('stock/{symbol}/ceo-compensation'.format(symbol=symbol), token, version, filter)
def ceoCompensationDF(symbol, token='', version='', filter=''):
'''This endpoint provides CEO compensation for a company by symbol.
https://iexcloud.io/docs/api/#ceo-compensation
1am daily
Args:
symbol (string); Ticker to request
token (string); Access token
version (string); API version
filter (string); filters: https://iexcloud.io/docs/api/#filter-results
Returns:
DataFrame: result
'''
ret = ceoCompensation(symbol, token, version, filter)
df = pd.io.json.json_normalize(ret)
_toDatetime(df)
return df
| 32.488372 | 160 | 0.657122 | 513 | 4,191 | 5.331384 | 0.19883 | 0.083364 | 0.085558 | 0.083364 | 0.804753 | 0.749177 | 0.722121 | 0.722121 | 0.722121 | 0.722121 | 0 | 0.001224 | 0.219995 | 4,191 | 128 | 161 | 32.742188 | 0.835424 | 0.580052 | 0 | 0.258065 | 0 | 0 | 0.095752 | 0.068105 | 0 | 0 | 0 | 0 | 0 | 1 | 0.193548 | false | 0 | 0.064516 | 0 | 0.483871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
48006e8af8d6fa7af96f9554b33b8860ec73194e | 29 | py | Python | src/compas_maya/com/maya/__init__.py | gonzalocasas/compas | 2fabc7e5c966a02d823fa453564151e1a1e7e3c6 | [
"MIT"
] | null | null | null | src/compas_maya/com/maya/__init__.py | gonzalocasas/compas | 2fabc7e5c966a02d823fa453564151e1a1e7e3c6 | [
"MIT"
] | null | null | null | src/compas_maya/com/maya/__init__.py | gonzalocasas/compas | 2fabc7e5c966a02d823fa453564151e1a1e7e3c6 | [
"MIT"
] | null | null | null | from .sock import MayaSocket
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4809412885ea70f8e146265c90bcc8e20db9fa31 | 398 | py | Python | autofit/plot/__init__.py | caoxiaoyue/PyAutoFit | 819cd2acc8d4069497a161c3bb6048128e44d828 | [
"MIT"
] | 39 | 2019-01-24T10:45:23.000Z | 2022-03-18T09:37:59.000Z | autofit/plot/__init__.py | caoxiaoyue/PyAutoFit | 819cd2acc8d4069497a161c3bb6048128e44d828 | [
"MIT"
] | 260 | 2018-11-27T12:56:33.000Z | 2022-03-31T16:08:59.000Z | autofit/plot/__init__.py | caoxiaoyue/PyAutoFit | 819cd2acc8d4069497a161c3bb6048128e44d828 | [
"MIT"
] | 13 | 2018-11-30T16:49:05.000Z | 2022-01-21T17:39:29.000Z | from autofit.plot.samples_plotters import SamplesPlotter
from autofit.non_linear.nest.dynesty.plotter import DynestyPlotter
from autofit.non_linear.nest.ultranest.plotter import UltraNestPlotter
from autofit.non_linear.mcmc.emcee.plotter import EmceePlotter
from autofit.non_linear.mcmc.zeus.plotter import ZeusPlotter
from autofit.non_linear.optimize.pyswarms.plotter import PySwarmsPlotter
| 56.857143 | 73 | 0.869347 | 52 | 398 | 6.538462 | 0.442308 | 0.194118 | 0.205882 | 0.294118 | 0.282353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075377 | 398 | 6 | 74 | 66.333333 | 0.923913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
48094a27594f15064f8bb42cc12e7efe63e18edf | 165 | py | Python | continuum/tasks/__init__.py | psychicmario/continuum | 22f60d3fc71553f1334cffa7e88a1727cdf2413c | [
"MIT"
] | null | null | null | continuum/tasks/__init__.py | psychicmario/continuum | 22f60d3fc71553f1334cffa7e88a1727cdf2413c | [
"MIT"
] | null | null | null | continuum/tasks/__init__.py | psychicmario/continuum | 22f60d3fc71553f1334cffa7e88a1727cdf2413c | [
"MIT"
] | null | null | null | # pylint: disable=C0401
# flake8: noqa
from continuum.tasks.task_set import TaskSet
from continuum.tasks.utils import split_train_val, concat
__all__ = ["TaskSet"]
| 23.571429 | 57 | 0.793939 | 23 | 165 | 5.391304 | 0.782609 | 0.209677 | 0.290323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034247 | 0.115152 | 165 | 6 | 58 | 27.5 | 0.815068 | 0.206061 | 0 | 0 | 0 | 0 | 0.054688 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4809c422bf1321bdb476d8bf295e84a6d8af7d38 | 4,606 | py | Python | example/boost_lin_tree_1d.py | wgurecky/pCRTree | 3b7bf8596793b32bb8162db5df7a765cf774e2f1 | [
"BSD-3-Clause"
] | 1 | 2020-06-07T04:36:06.000Z | 2020-06-07T04:36:06.000Z | example/boost_lin_tree_1d.py | wgurecky/pCRTree | 3b7bf8596793b32bb8162db5df7a765cf774e2f1 | [
"BSD-3-Clause"
] | 6 | 2017-07-21T07:03:05.000Z | 2020-09-24T18:38:36.000Z | example/boost_lin_tree_1d.py | wgurecky/pCRTree | 3b7bf8596793b32bb8162db5df7a765cf774e2f1 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python3
from boosting.gbm import GBRTmodel
from matplotlib import gridspec
import numpy as np
MPL = False
try:
from pylab import cm
import matplotlib.pyplot as plt
MPL = True
except: pass
def example_boosed_lin_reg_lin(f, plot_name='1d_boosted_regression_lin_lin_ex.png'):
n_samples_per_edit = 1
X = np.atleast_2d(np.linspace(0, 10.0, 80).repeat(n_samples_per_edit)).T
X = X.astype(np.float32)
y = f(X).ravel()
std_dev = 4.2
noise = np.random.normal(0, std_dev, size=y.shape)
y += noise
y = y.astype(np.float32)
# Mesh the input space for evaluations of the real function, the prediction and
# its MSE
xx = np.atleast_2d(np.linspace(0, 10, 1000)).T
xx = xx.astype(np.float32)
# fit to median
gbt = GBRTmodel(max_depth=1, learning_rate=0.03, subsample=0.8, loss="se", tree_method="lin", minSplitPts=10, minDataLeaf=20)
gbt.train(X, y, n_estimators=400)
y_median = gbt.predict(xx)
if MPL:
# Plot the function, the prediction and the 90% confidence interval based on
# the MSE
fig = plt.figure()
plt.plot(X, y, 'b.', markersize=2, label=u'Observations', alpha=0.3)
plt.plot(xx, y_median, 'r-', label=r'$\hat \mu$')
plt.plot(xx, f(xx), 'g', label=u'$f(x) = 0.25x+10$')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
# plt.ylim(-10, 20)
plt.legend(loc='upper left')
plt.savefig('lin_' + plot_name + '.png', dpi=120)
plt.close()
# fit to median
gbt = GBRTmodel(max_depth=1, learning_rate=0.03, subsample=0.8, loss="se", tree_method="cart", minSplitPts=4)
gbt.train(X, y, n_estimators=400)
y_median = gbt.predict(xx)
if MPL:
# Plot the function, the prediction and the 90% confidence interval based on
# the MSE
fig = plt.figure()
plt.plot(X, y, 'b.', markersize=2, label=u'Observations', alpha=0.3)
plt.plot(xx, y_median, 'r-', label=r'$\hat \mu$')
plt.plot(xx, f(xx), 'g', label=u'$f(x) = 0.25x+10$')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
# plt.ylim(-10, 20)
plt.legend(loc='upper left')
plt.savefig('const_' + plot_name + '.png', dpi=120)
plt.close()
def example_boosed_lin_reg_sin():
def f(x):
heavyside = np.heaviside(x - 5.0, 1.0) * 12.
return x * np.sin(x) + heavyside + 10.
n_samples_per_edit = 1
X = np.atleast_2d(np.linspace(0, 10.0, 220).repeat(n_samples_per_edit)).T
X = X.astype(np.float32)
y = f(X).ravel()
# std_dev = 1.5 + 1.0 * np.random.random(y.shape)
std_dev = 2.0
noise = np.random.normal(0, std_dev, size=y.shape)
y += noise
y = y.astype(np.float32)
# Mesh the input space for evaluations of the real function, the prediction and
# its MSE
xx = np.atleast_2d(np.linspace(0, 10, 1000)).T
xx = xx.astype(np.float32)
# fit to median
gbt = GBRTmodel(max_depth=2, learning_rate=0.01, subsample=0.5, loss="se", tree_method="lin", minSplitPts=20, minDataLeaf=25)
gbt.train(X, y, n_estimators=380)
y_median = gbt.predict(xx)
if MPL:
# Plot the function, the prediction and the 90% confidence interval based on
# the MSE
fig = plt.figure()
plt.plot(X, y, 'b.', markersize=2, label=u'Observations', alpha=0.3)
plt.plot(xx, y_median, 'r-', label=r'$\hat q_{0.50}$')
plt.plot(xx, f(xx), 'g', label=u'$f(x) = x\,\sin(x) + 12 H(x-5)$')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
# plt.ylim(-10, 20)
plt.legend(loc='upper left')
plt.savefig('1d_boosted_regression_lin_sin_ex.png', dpi=120)
plt.close()
if __name__ == "__main__":
def f(x):
#y = np.zeros(len(x))
#y[np.asarray(x < 5).flatten()] = x[x < 5] * 0.25 + 10.
#y[np.asarray(x >= 5).flatten()] = x[x >= 5] * -0.25 + 12.
y = x ** 2.0
return y
example_boosed_lin_reg_lin(f, 'pos_quadratic')
def f(x):
#y = np.zeros(len(x))
#y[np.asarray(x < 5).flatten()] = x[x < 5] * 0.25 + 10.
#y[np.asarray(x >= 5).flatten()] = x[x >= 5] * -0.25 + 12.
y = -x ** 2.0
return y
example_boosed_lin_reg_lin(f, 'neg_quadratic')
example_boosed_lin_reg_sin()
| 36.267717 | 133 | 0.546678 | 704 | 4,606 | 3.455966 | 0.21875 | 0.009042 | 0.036991 | 0.039046 | 0.815043 | 0.787505 | 0.751336 | 0.730785 | 0.730785 | 0.730785 | 0 | 0.057478 | 0.297438 | 4,606 | 126 | 134 | 36.555556 | 0.694376 | 0.183239 | 0 | 0.614458 | 0 | 0 | 0.093098 | 0.019262 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060241 | false | 0.012048 | 0.060241 | 0 | 0.156627 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
485061818a0eb4e60d6caa58c3947b002619beaa | 24 | py | Python | ppca/__init__.py | sinziana91/pca-magic | db813177028e7453f4bceec42cb9383809559dc0 | [
"Apache-2.0"
] | 218 | 2015-02-22T21:22:49.000Z | 2022-03-11T14:00:58.000Z | ppca/__init__.py | asinga1982/pca-magic | 1e94f983c61e41b31e353465810fc0d8d46a8c98 | [
"Apache-2.0"
] | 11 | 2016-07-09T23:49:45.000Z | 2022-03-08T09:19:18.000Z | ppca/__init__.py | asinga1982/pca-magic | 1e94f983c61e41b31e353465810fc0d8d46a8c98 | [
"Apache-2.0"
] | 45 | 2015-02-23T16:06:01.000Z | 2022-01-12T15:58:08.000Z | from ._ppca import PPCA
| 12 | 23 | 0.791667 | 4 | 24 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f83e73965f15f8699279a754045bb9013357ccb | 3,267 | py | Python | plaso/parsers/__init__.py | rick-slin/plaso | 8685bd1689cbbaea78ebf723881a842eeaf94c35 | [
"Apache-2.0"
] | null | null | null | plaso/parsers/__init__.py | rick-slin/plaso | 8685bd1689cbbaea78ebf723881a842eeaf94c35 | [
"Apache-2.0"
] | null | null | null | plaso/parsers/__init__.py | rick-slin/plaso | 8685bd1689cbbaea78ebf723881a842eeaf94c35 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""This file imports Python modules that register parsers."""
from plaso.parsers import asl
from plaso.parsers import android_app_usage
from plaso.parsers import apache_access
from plaso.parsers import apt_history
from plaso.parsers import aws_elb_access
from plaso.parsers import bash_history
from plaso.parsers import bencode_parser
from plaso.parsers import bsm
from plaso.parsers import chrome_cache
from plaso.parsers import chrome_preferences
from plaso.parsers import cups_ipp
from plaso.parsers import custom_destinations
from plaso.parsers import czip
from plaso.parsers import dpkg
from plaso.parsers import esedb
from plaso.parsers import filestat
from plaso.parsers import firefox_cache
from plaso.parsers import fish_history
from plaso.parsers import fseventsd
from plaso.parsers import gdrive_synclog
from plaso.parsers import google_logging
from plaso.parsers import iis
from plaso.parsers import ios_lockdownd
from plaso.parsers import ios_logd
from plaso.parsers import ios_mobile_installation_log
from plaso.parsers import java_idx
from plaso.parsers import jsonl_parser
from plaso.parsers import locate
from plaso.parsers import mac_appfirewall
from plaso.parsers import mac_keychain
from plaso.parsers import mac_securityd
from plaso.parsers import mac_wifi
from plaso.parsers import mactime
from plaso.parsers import mcafeeav
from plaso.parsers import msiecf
from plaso.parsers import networkminer
from plaso.parsers import ntfs
from plaso.parsers import olecf
from plaso.parsers import opera
from plaso.parsers import pe
from plaso.parsers import plist
from plaso.parsers import popcontest
from plaso.parsers import pls_recall
from plaso.parsers import recycler
from plaso.parsers import safari_cookies
from plaso.parsers import santa
from plaso.parsers import sccm
from plaso.parsers import selinux
from plaso.parsers import setupapi
from plaso.parsers import skydrivelog
from plaso.parsers import sophos_av
from plaso.parsers import spotlight_storedb
from plaso.parsers import sqlite
from plaso.parsers import symantec
from plaso.parsers import systemd_journal
from plaso.parsers import syslog
from plaso.parsers import trendmicroav
from plaso.parsers import utmp
from plaso.parsers import utmpx
from plaso.parsers import vsftpd
from plaso.parsers import winevt
from plaso.parsers import winevtx
from plaso.parsers import winfirewall
from plaso.parsers import winjob
from plaso.parsers import winlnk
from plaso.parsers import winprefetch
from plaso.parsers import winreg_parser
from plaso.parsers import winrestore
from plaso.parsers import xchatlog
from plaso.parsers import xchatscrollback
from plaso.parsers import zsh_extended_history
# Register plugins.
from plaso.parsers import bencode_plugins
from plaso.parsers import czip_plugins
from plaso.parsers import esedb_plugins
from plaso.parsers import jsonl_plugins
from plaso.parsers import olecf_plugins
from plaso.parsers import plist_plugins
from plaso.parsers import sqlite_plugins
from plaso.parsers import syslog_plugins
from plaso.parsers import winreg_plugins
# These modules do not register parsers themselves, but contain super classes
# used by parsers in other modules.
# from plaso.parsers import dsv_parser
# from plaso.parsers import text_parser
| 35.901099 | 77 | 0.854913 | 486 | 3,267 | 5.652263 | 0.251029 | 0.268657 | 0.477612 | 0.656716 | 0.383327 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000345 | 0.112642 | 3,267 | 90 | 78 | 36.3 | 0.947223 | 0.086012 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f8f7ed8337271bf7ac7ab965547f3e0fecc552d | 107 | py | Python | core/primitive/string.py | ponyatov/metaLpy | 96149313e8083536ade1c331825242f6996f05b3 | [
"MIT"
] | null | null | null | core/primitive/string.py | ponyatov/metaLpy | 96149313e8083536ade1c331825242f6996f05b3 | [
"MIT"
] | null | null | null | core/primitive/string.py | ponyatov/metaLpy | 96149313e8083536ade1c331825242f6996f05b3 | [
"MIT"
] | null | null | null | ## @file
from .primitive import *
## text string
## @ingroup primitive
class String(Primitive):
pass
| 11.888889 | 24 | 0.682243 | 12 | 107 | 6.083333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196262 | 107 | 8 | 25 | 13.375 | 0.848837 | 0.336449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6fceb6b5ab16f3001ea7837c90e6cb2481f3d4e2 | 113,303 | py | Python | train_functions.py | vassilis-karavias/fNRI-mastersigma | d3f4fecf9d28a9bc6e6150994824ca7674006ed3 | [
"MIT"
] | null | null | null | train_functions.py | vassilis-karavias/fNRI-mastersigma | d3f4fecf9d28a9bc6e6150994824ca7674006ed3 | [
"MIT"
] | null | null | null | train_functions.py | vassilis-karavias/fNRI-mastersigma | d3f4fecf9d28a9bc6e6150994824ca7674006ed3 | [
"MIT"
] | null | null | null | '''
This code is based on https://github.com/ekwebb/fNRI which in turn is based on https://github.com/ethanfetaya/NRI
(MIT licence)
'''
import argparse
import csv
import datetime
import os
import pickle
import torch.optim as optim
from torch.optim import lr_scheduler
import time
from modules_sigma import *
from utils import *
import math
class Model(object):
def __init__(self, args, edge_types, log_prior, encoder_file, decoder_file, current_encoder_file, current_decoder_file, log, loss_data, csv_writer, perm_writer, encoder, decoder, optimizer, scheduler):
# initialises states
super(Model, self).__init__()
self.edge_types = edge_types
self.log_prior = log_prior
self.encoder_file = encoder_file
self.decoder_file = decoder_file
self.current_encoder_file = current_encoder_file
self.current_decoder_file = current_decoder_file
self.log = log
self.loss_data = loss_data
self.csv_writer = csv_writer
self.perm_writer = perm_writer
self.encoder = encoder
self.decoder = decoder
self.optimizer = optimizer
self.scheduler = scheduler
if args.NRI:
self.train_loader, self.valid_loader, self.test_loader, self.loc_max, self.loc_min, self.vel_max, \
self.vel_min = load_data_NRI(args.batch_size, args.sim_folder, shuffle=True, data_folder=args.data_folder)
else:
self.train_loader, self.valid_loader, self.test_loader, self.loc_max, self.loc_min, self.vel_max,\
self.vel_min = load_data_fNRI(args.batch_size, args.sim_folder, shuffle=True, data_folder=args.data_folder)
self.datatensor = torch.FloatTensor([])
if args.cuda:
self.datatensor = self.datatensor.cuda()
self.datatensor = Variable(self.datatensor)
for batch_idx, (data, relations) in enumerate(self.train_loader):
if args.cuda:
data, relations = data.cuda(), relations.cuda()
data, relations = Variable(data), Variable(relations)
self.datatensor = torch.cat((self.datatensor, data), dim=0)
# get the prior for sigma
self.sigma_target = getsigma_target(self.datatensor, phys_error_folder=args.phys_folder,
comp_error_folder=args.comp_folder, data_folder=args.data_folder,
sim_folder=args.sim_folder)
self.sigma_target = self.sigma_target.unsqueeze(dim=0)
self.sigma_target = self.sigma_target.unsqueeze(dim=1)
if args.cuda:
self.sigma_target = self.sigma_target.cuda()
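        # after the two unsqueezes, sigma_target presumably has shape
        # [1, 1, time, state] so it can be tiled over the batch and particle
        # dimensions (see the tile() calls in loss_KL and loss_kalmanfilter)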
# Generate off-diagonal interaction graph
self.off_diag = np.ones([args.num_atoms, args.num_atoms]) - np.eye(args.num_atoms)
self.rel_rec = np.array(encode_onehot(np.where(self.off_diag)[1]), dtype=np.float32)
self.rel_send = np.array(encode_onehot(np.where(self.off_diag)[0]), dtype=np.float32)
self.rel_rec = torch.FloatTensor(self.rel_rec)
self.rel_send = torch.FloatTensor(self.rel_send)
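        # rel_rec / rel_send are [num_edges, num_atoms] one-hot matrices picking
        # out the receiver / sender of each directed edge; e.g. num_atoms = 5
        # gives 5 * 4 = 20 off-diagonal directed edges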
# initialise parameters needed for convexification
self.alpha = 1
self.preds_prev = torch.zeros((args.num_atoms, args.timesteps-1, 4))
self.sigma_prev = torch.zeros((args.num_atoms, args.timesteps-1, 4))
if args.cuda:
self.preds_prev, self.sigma_prev = self.preds_prev.cuda(), self.sigma_prev.cuda()
self.preds_prev, self.sigma_prev = Variable(self.preds_prev), Variable(self.sigma_prev)
if args.cuda:
self.rel_rec = self.rel_rec.cuda()
self.rel_send = self.rel_send.cuda()
self.rel_rec = Variable(self.rel_rec)
self.rel_send = Variable(self.rel_send)
def close_log(self, save_folder, log_csv, perm_csv):
# close the log files
if self.log is not None:
print(save_folder)
self.log.close()
log_csv.close()
perm_csv.close()
self.loss_data.close()
def loss_fixed(self, data_decoder, edges, sigma, args, pred_steps=1, use_onepred=False):
# calculate the loss for the fixed variance case. Also returns the output of the NN.
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, False,
False, args.temp_softplus, pred_steps)
else:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, False,
False, args.temp_softplus, args.prediction_steps)
# calculate loss function
loss_nll = nll_gaussian(output, target, args.var)
loss_nll_var = nll_gaussian_var(output, target, args.var)
return loss_nll, loss_nll_var, output
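    # For reference: with a fixed variance, nll_gaussian above reduces (up to
    # additive constants) to sum((output - target)**2) / (2 * args.var), i.e. a
    # rescaled MSE, so args.var only rescales the reported loss.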
def loss_anisotropic(self, data_decoder, edges, sigma, args, pred_steps=1, use_onepred=False):
# calculate the loss for the anisotropic case. Also returns the output of the NN
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, pred_steps)
else:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, args.prediction_steps)
# calculate loss function
loss_nll, loss_1, loss_2 = nll_gaussian_multivariatesigma_efficient(output, target, sigma, accel, vel)
loss_nll_var = nll_gaussian_var_multivariatesigma_efficient(output, target, sigma, accel, vel)
return loss_nll, loss_nll_var, output
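    # A sketch of the anisotropic NLL (the exact convention lives in
    # nll_gaussian_multivariatesigma_efficient in utils): the covariance is
    # taken diagonal in a local frame built from the velocity/acceleration
    # directions, so per element
    #   L = 0.5 * sum_i (d_i**2 / s_i**2) + sum_i log(s_i)
    # where d_i are the components of (output - target) in that frame and s_i
    # the corresponding predicted widths.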
def loss_KL(self, data_decoder, edges, sigma, args, pred_steps=1, use_onepred=False):
# calculate the loss for the anisotropic case with a KL divergence to account for prior. Also returns the output of the NN
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# recast target sigma to correct shape.
sigma_target_1 = tile(self.sigma_target, 0, sigma.size(0))
sigma_target_1 = tile(sigma_target_1, 1, sigma.size(1))
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, pred_steps)
else:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, args.prediction_steps)
# calculate the loss function
loss_nll, loss_1, loss_2 = nll_gaussian_multivariatesigma_efficient(output, target, sigma, accel, vel)
loss_nll_var = nll_gaussian_var_multivariatesigma_efficient(output, target, sigma, accel, vel)
loss_kl_decoder = KL_output_multivariate(output, sigma, target, sigma_target_1)
return loss_nll + args.beta * loss_kl_decoder, loss_nll_var, output
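    # The KL between two diagonal Gaussians used above is the standard closed
    # form (a sketch; KL_output_multivariate in utils holds the exact convention):
    #   KL(p||q) = 0.5 * sum_i ( s_p_i**2 / s_q_i**2
    #                            + (mu_q_i - mu_p_i)**2 / s_q_i**2
    #                            - 1 + 2 * log(s_q_i / s_p_i) )
    # with args.beta weighting it against the data NLL.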
def loss_normalinversewishart(self, data_decoder, edges, sigma, args, batch_idx, settouse, pred_steps=1, use_onepred=False):
# calculate the loss for the anisotropic Normal-Inverse-Wishart distribution. Also returns the output of the NN.
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, pred_steps)
else:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, args.prediction_steps)
# get the appropriate prior.
if settouse.lower() == 'train':
prior_pos = self.prior_pos_tensor_train[batch_idx]
prior_vel = self.prior_vel_tensor_train[batch_idx]
elif settouse.lower() == 'validation':
prior_pos = self.prior_pos_tensor_valid[batch_idx]
prior_vel = self.prior_vel_tensor_valid[batch_idx]
elif settouse.lower() == 'test':
prior_pos = self.prior_pos_tensor_test[batch_idx]
prior_vel = self.prior_vel_tensor_test[batch_idx]
else:
print("The set to use parameter must be one of 'train', 'validation' or 'test'.")
# calculate the loss
loss_nll, loss_nll_var = nll_Normal_Inverse_WishartLoss(output, sigma, accel, vel, prior_pos, prior_vel)
return loss_nll, loss_nll_var, output
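    # Note: self.prior_pos_tensor_* and self.prior_vel_tensor_* are not set in
    # __init__ above, so they are assumed to be populated elsewhere before this
    # loss is used; they carry the hyperparameters of the Normal-Inverse-Wishart
    # prior over the predicted mean and covariance.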
def loss_kalmanfilter(self, data_decoder, edges, sigma, args, pred_steps=1, use_onepred=False):
# calculate the loss for the anisotropic Normal distribution with a Kalman filter envelope. Also returns the output of the NN.
# Note that this follows the suggestion given in: Multivariate Uncertainty in Deep Learning,
# Rebecca L. Russell and Christopher Reale,
# 2019, arXiv:1910.14215 [cs.LG]
target = data_decoder[:, :, 1:, :]
# recast target sigma to correct shape
sigma_target_1 = tile(self.sigma_target, 0, sigma.size(0))
sigma_target_1 = tile(sigma_target_1, 1, sigma.size(1))
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, pred_steps)
else:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, args.prediction_steps)
# calculates the output after passing through a kalman filter
        kalmanfilter = KalmanFilter(sigma_target_1[:output.size(0), :, 0, :])
        output, covmat = kalmanfilter.kalman_filter_steps(target, output, sigma)
# loss here is the MSE.
loss_nll = F.mse_loss(output, target)
loss_nll_var = nll_gaussian_var(output, target, args.var)
return loss_nll, loss_nll_var, output
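    # A sketch of a standard per-step Kalman correction (the exact convention
    # lives in modules_sigma.KalmanFilter, which is initialised from
    # sigma_target above):
    #   K = P_pred @ inv(P_pred + R)
    #   x = x_pred + K @ (z - x_pred),   P = (I - K) @ P_pred
    # with x_pred and P_pred built from the decoder output and sigma; training
    # on the MSE of the filtered output follows Russell & Reale (2019), cited
    # above.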
def loss_isotropic(self, data_decoder, edges, sigma, args, epoch, pred_steps=1, use_onepred=False):
# calculates the loss for the isotropic model. Also returns the output of the NN
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma_1, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
False, args.temp_softplus, pred_steps)
else:
output, sigma_1, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
False, args.temp_softplus, args.prediction_steps)
        # for the isotropic case, recast sigma to the same shape as output, as required by the gaussian loss calculation
sigma_1 = tile(sigma_1, 3, list(output.size())[3])
# calculates loss function
loss_nll, loss_1, loss_2 = nll_gaussian_variablesigma(output, target, sigma_1, epoch, args.temp_sigmoid,
args.epochs)
loss_nll_var = nll_gaussian_var__variablesigma(output, target, sigma_1, epoch, args.temp_sigmoid, args.epochs)
return loss_nll, loss_nll_var, output
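    # nll_gaussian_variablesigma takes (epoch, temp_sigmoid, epochs), which
    # suggests the sigma-dependent part of the loss is annealed in over training
    # on a sigmoid schedule (an inference from the argument names; the exact
    # form lives in utils).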
def loss_lorentzian(self, data_decoder, edges, sigma, args, pred_steps=1, use_onepred=False):
# calculates the loss for the Lorentzian model. Also returns the output of the NN
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma_1, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
False, args.temp_softplus, pred_steps)
else:
output, sigma_1, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
False, args.temp_softplus, args.prediction_steps)
        # recast the single-width sigma to the same shape as output, as required by the lorentzian loss calculation
sigma_1 = tile(sigma_1, 3, list(output.size())[3])
# calculates loss function
loss_nll = nll_lorentzian(output, target, sigma_1)
loss_nll_var = nll_lorentzian_var(output, target, sigma_1)
return loss_nll, loss_nll_var, output
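    # For reference, the Lorentzian (Cauchy) NLL per element is, up to additive
    # constants,
    #   L = log(gamma) + log(1 + ((output - target) / gamma)**2)
    # with gamma (the half-width at half-maximum) playing the role of sigma;
    # its heavy tails make this loss much more tolerant of outliers than a
    # Gaussian NLL.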
def loss_semi_isotropic(self, data_decoder, edges, sigma, args, epoch, pred_steps=1, use_onepred=False):
# calculates the loss for the semi-isotropic Gaussian model. Also returns the output of the NN
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
False, args.temp_softplus, pred_steps)
else:
output, sigma, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
False, args.temp_softplus, args.prediction_steps)
# calculates loss function
loss_nll, loss_1, loss_2 = nll_gaussian_variablesigma_semiisotropic(output, target, sigma, epoch,
args.temp_sigmoid, args.epochs)
loss_nll_var = nll_gaussian_var__variablesigma_semiisotropic(output, target, sigma, epoch, args.temp_sigmoid,
args.epochs)
return loss_nll, loss_nll_var, output
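    # "Semi-isotropic" here means one sigma shared by the position components
    # and a second sigma shared by the velocity components (cf. the
    # index_select reshaping under the semi-isotropic branch of isotropic_plot
    # below).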
    def loss_anisotropic_withconvex(self, data_decoder, edges, sigma, args, vvec, sigmavec, pred_steps=1, use_onepred=False):
# calculates the loss for the anisotropic Gaussian model with convexification. Also returns the output of the NN
# The algorithm was designed by Edoardo Calvello
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
        # floor alpha so the divisions in the update step below cannot blow up
        if (abs(self.alpha) < 1e-16):
            self.alpha = 1e-16
# forward() in decoder called here - carries out decoding step
if use_onepred:
output, sigma_1, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, pred_steps)
else:
output, sigma_1, accel, vel = self.decoder(data_decoder, edges, self.rel_rec, self.rel_send, sigma, True,
True, args.temp_softplus, args.prediction_steps)
sigma_prev = tile(self.sigma_prev.unsqueeze(0), 0, target.size(0))
preds_prev = tile(self.preds_prev.unsqueeze(0), 0, target.size(0))
# calculates the loss
loss_nll, loss_1, loss_2 = nll_gaussian_multivariatesigma_convexified(output, target, sigma_1, accel, vel, sigma_prev, preds_prev, vvec, sigmavec, self.alpha)
loss_nll_var = nll_gaussian_multivariatesigma_var_convexified(output, target, sigma_1, accel, vel, sigma_prev, preds_prev, vvec, sigmavec, self.alpha)
# update step 3 of algorithm by Edoardo Calvello
vvec_new = preds_prev + (output-preds_prev) /self.alpha
sigmavec_new = sigma_prev + (sigma_1-sigma_prev) / self.alpha
self.alpha = (np.sqrt(pow(self.alpha,4) + 4 * pow(self.alpha, 2)) - pow(self.alpha, 2)) / 2
return loss_nll, loss_nll_var, output, vvec_new, sigmavec_new
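    # The relaxation above extrapolates from the previous prediction towards
    # the new network output, v_new = x_prev + (x_new - x_prev) / alpha
    # (likewise for sigma), and updates alpha by the Nesterov-style recursion
    #   alpha_new = (sqrt(alpha**4 + 4 * alpha**2) - alpha**2) / 2,
    # which solves alpha_new**2 = (1 - alpha_new) * alpha**2; alpha decays
    # towards 0, so the extrapolation becomes more aggressive over iterations.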
def fixed_var_plot(self, args, acc_blocks_batch, target, output_plot):
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from trajectory_plot import draw_lines
        # plot the predicted (NN output) trajectories against the target trajectories over all timesteps
for i in range(args.batch_size):
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111)
# ax = fig.add_axes([0, 0, 1, 1])
ax.xaxis.set_visible(True)
ax.yaxis.set_visible(True)
xmin_t, ymin_t, xmax_t, ymax_t = draw_lines(target, i, linestyle=':', alpha=0.6)
xmin_o, ymin_o, xmax_o, ymax_o = draw_lines(output_plot.detach().cpu().numpy(), i,
linestyle='-')
# ax.set_xlim([min(xmin_t, xmin_o), max(xmax_t, xmax_o)])
# ax.set_ylim([min(ymin_t, ymin_o), max(ymax_t, ymax_o)])#
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
rect = patches.Rectangle((-1,-1),2,2, edgecolor='r', facecolor='none')
ax.add_patch(rect)
# ax.set_xticks(np.linspace(math.ceil(min(xmin_t, xmin_o)*10)/10, math.floor(max(xmax_t, xmax_o)*10)/10,2))
# ax.set_yticks(np.linspace(math.ceil(min(ymin_t, ymin_o)*10)/10, math.floor(max(ymax_t, ymax_o)*10)/10,2))
block_names = ['layer ' + str(j) for j in range(len(args.edge_types_list))]
# block_names = [ 'springs', 'charges' ]
acc_text = [block_names[j] + ' acc: {:02.0f}%'.format(100 * acc_blocks_batch[i, j])
for j in range(acc_blocks_batch.shape[1])]
acc_text = ', '.join(acc_text)
plt.text(0.5, 0.95, acc_text, horizontalalignment='center', transform=ax.transAxes)
ax.xaxis.set_visible(True)
ax.yaxis.set_visible(True)
plt.xlabel('x')
plt.ylabel('y')
plt.savefig(os.path.join(args.load_folder,str(i)+'_pred_and_true.png'), dpi=300)
plt.show()
def isotropic_plot(self, args, data_decoder, edges, sigma, logits, logits_split, relations, target, zscorelist_x,
zscorelist_y):
# plots the graphs for the isotropic model. returns the z-score values parallel and perpendicular to the velocity.
import matplotlib.pyplot as plt
# for plotting
output_plot, sigma_plot, accel_plot, vel_plot = self.decoder(data_decoder, edges, self.rel_rec,
self.rel_send, sigma, True, False,
args.temp_softplus, 49)
if args.loss_type.lower() == 'isotropic' or args.loss_type.lower() == 'lorentzian':
# put sigma_plot in correct form for isotropic case
sigma_plot = tile(sigma_plot, 3, list(output_plot.size())[3])
else:
            # semi-isotropic: select the position and velocity coords, convert each from
            # [batch, particle, time, 1] -> [batch, particle, time, 2 (3 for 3D)], then
            # re-concatenate to put sigma_plot in the correct form
indices_pos = torch.LongTensor([0])
indices_vel = torch.LongTensor([1])
            if args.cuda:
                indices_pos = indices_pos.cuda()
                indices_vel = indices_vel.cuda()
sigma_plot_pos = torch.index_select(sigma_plot, 3, indices_pos)
sigma_plot_vel = torch.index_select(sigma_plot, 3, indices_vel)
sigma_plot_pos = tile(sigma_plot_pos, 3, 2)
sigma_plot_vel = tile(sigma_plot_vel, 3, 2)
sigma_plot = torch.cat((sigma_plot_pos, sigma_plot_vel), 3)
if args.NRI:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_NRI_batch(logits, relations,
args.edge_types_list)
else:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_fNRI_batch(logits_split,
relations,
args.edge_types_list)
        # plot the mean value of sigma over time:
timestep = np.arange(0.1, 0.1 * (args.timesteps - 1), 0.1)
sigma_mean = sigma_plot.mean(dim=0)
sigma_mean = sigma_mean.mean(dim=0)
sigma_mean = sigma_mean.mean(dim=1)
sigma_mean = sigma_mean.detach().cpu().numpy()
fig = plt.figure()
plt.plot(timestep, sigma_mean, label='raw data')
plt.ylabel('Sigma Value')
plt.xlabel('Time along Trajectory')
plt.show()
from trajectory_plot import draw_lines_sigma
from matplotlib.patches import Ellipse, Rectangle
        # plotting graphs for the isotropic/anisotropic case - here we plot the values of
        # sigma using ellipses which are the same colour as the plots (see draw_lines_sigma
        # in trajectory_plot)
for i in range(args.batch_size):
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111)
# ax = fig.add_axes([0, 0, 1, 1])
ax.xaxis.set_visible(True)
ax.yaxis.set_visible(True)
xmin_t, ymin_t, xmax_t, ymax_t = draw_lines_sigma(target, i, sigma_plot.detach().cpu().numpy(), ax,
linestyle=':', alpha=0.6)
xmin_o, ymin_o, xmax_o, ymax_o = draw_lines_sigma(output_plot.detach().cpu().numpy(), i,
sigma_plot.detach().cpu().numpy(), ax, linestyle='-',
plot_ellipses=True)
# ax.set_xlim([min(xmin_t, xmin_o), max(xmax_t, xmax_o)])
# ax.set_ylim([min(ymin_t, ymin_o), max(ymax_t, ymax_o)])
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
rect = Rectangle((-1, -1), 2, 2, edgecolor='r', facecolor='none')
ax.add_patch(rect)
block_names = ['layer ' + str(j) for j in range(len(args.edge_types_list))]
# block_names = [ 'springs', 'charges' ]
acc_text = [block_names[j] + ' acc: {:02.0f}%'.format(100 * acc_blocks_batch[i, j])
for j in range(acc_blocks_batch.shape[1])]
acc_text = ', '.join(acc_text)
plt.text(0.5, 0.95, acc_text, horizontalalignment='center', transform=ax.transAxes)
ax.xaxis.set_visible(True)
ax.yaxis.set_visible(True)
plt.xlabel('x')
plt.ylabel('y')
# plt.savefig(os.path.join(args.load_folder,str(i)+'_pred_and_true.png'), dpi=300)
plt.show()
        # z-score calculation
if (torch.min(sigma_plot) < pow(10, -7)):
accuracy = np.full((sigma_plot.size(0), sigma_plot.size(1), sigma_plot.size(2),
sigma_plot.size(3)), pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if args.cuda:
accuracy = accuracy.cuda()
            # floor sigma_plot so the z-score division below cannot hit zero
            sigma_plot = torch.max(sigma_plot, accuracy)
# z = (y-yhat)/sigma
zscore = (output_plot - target) / (sigma_plot)
        # select out velocity coords to get the directions parallel and perpendicular to
        # the velocity - needed to find the z-score values along these directions
indices = torch.LongTensor([2, 3])
if args.cuda:
indices = indices.cuda()
velocities = torch.index_select(output_plot, 3, indices)
# abs(v)
velnorm = velocities.norm(p=2, dim=3, keepdim=True)
# vhat = v/abs(v)
normalisedvel = velocities.div(velnorm.expand_as(velocities))
accelnorm = accel_plot.norm(p=2, dim=3, keepdim=True)
normalisedaccel = accel_plot.div(accelnorm.expand_as(accel_plot))
# get perpendicular components to the accelerations and velocities accelperp, velperp
# note in 2D perpendicular vector is just rotation by pi/2 about origin (x,y) -> (-y,x)
rotationmatrix = np.zeros(
(velocities.size(0), velocities.size(1), velocities.size(2), 2, 2),
dtype=np.float32)
for i in range(len(rotationmatrix)):
for j in range(len(rotationmatrix[i])):
for l in range(len(rotationmatrix[i][j])):
rotationmatrix[i][j][l][0][1] = np.float32(-1)
rotationmatrix[i][j][l][1][0] = np.float32(1)
rotationmatrix = torch.from_numpy(rotationmatrix)
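        # (the constant rotation above could equivalently be built without the
        #  triple loop, e.g. R = torch.tensor([[0., -1.], [1., 0.]]) broadcast
        #  over the leading dimensions - a possible simplification, not applied
        #  here; the same construction is repeated in anisotropic_plot below)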
if args.cuda:
rotationmatrix = rotationmatrix.cuda()
velperp = torch.matmul(rotationmatrix, normalisedvel.unsqueeze(4))
velperp = velperp.squeeze()
accelperp = torch.matmul(rotationmatrix, normalisedaccel.unsqueeze(4))
accelperp = accelperp.squeeze()
indices = torch.LongTensor([0, 1])
if args.cuda:
indices = indices.cuda()
# zscore along the position axes
zscore = torch.index_select(zscore, 3, indices)
# zscore parallel to velocity
zscore_x = torch.matmul(zscore.unsqueeze(3), normalisedvel.unsqueeze(4))
# zscore perp to velocity
zscore_y = torch.matmul(zscore.unsqueeze(3), velperp.unsqueeze(4))
zscorelist_x.append(zscore_x)
zscorelist_y.append(zscore_y)
return zscorelist_x, zscorelist_y
def anisotropic_plot(self, args, data_decoder, edges, sigma, logits, logits_split, relations, target, zscorelist_x,
zscorelist_y):
# plots the graphs for the anisotropic model. returns the z-score values parallel and perpendicular to the velocity.
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
output_plot, sigma_plot, accel_plot, vel_plot = self.decoder(data_decoder, edges, self.rel_rec,
self.rel_send, sigma, True, True,
args.temp_softplus, 49)
        # plot the MSE over time to investigate the chaotic divergence of the trajectories
        loc = target[:, :, :, 0:2].detach().cpu().numpy()
        loc_new = output_plot[:, :, :, 0:2].detach().cpu().numpy()
mse_loc = ((loc_new - loc) ** 2).mean(axis=3) / args.num_atoms
mse_loc = mse_loc.mean(axis=1)
mse_loc = mse_loc.mean(axis=0)
deltaT = 0.1
        T = np.arange(0, deltaT * (len(mse_loc) - 0.5), deltaT)
optimised_params_x, pcov = curve_fit(exp, T, mse_loc,
p0=[1, 5, -1], maxfev=1000000)
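        # For chaotic dynamics the squared trajectory error is expected to grow
        # roughly exponentially, mse(t) ~ a * exp(b * t) + c (assuming exp in
        # utils has the signature exp(t, a, b, c), consistent with the
        # three-element p0 above), so the fitted growth rate b gives a rough
        # estimate of twice the largest Lyapunov exponent.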
fig = plt.figure()
plt.plot(T, np.log(exp(T, *optimised_params_x)), label='fit')
plt.plot(T, np.log(mse_loc), label='raw data')
plt.ylabel('log(Mean Square Error)')
plt.xlabel('Time along Trajectory')
plt.legend(loc='best')
plt.show()
        # plot the mean value of sigma over time:
timestep = np.arange(0.1, 0.1 * (args.timesteps - 1), 0.1)
sigma_mean = sigma_plot.mean(dim=0)
sigma_mean = sigma_mean.mean(dim=0)
sigma_mean = sigma_mean.mean(dim=1)
sigma_mean = sigma_mean.detach().cpu().numpy()
optimised_params_x, pcov = curve_fit(exp, timestep, sigma_mean,
p0=[1, 5, -1], maxfev=1000000)
fig = plt.figure()
plt.plot(T, exp(T, *optimised_params_x), label='fit')
plt.plot(timestep, sigma_mean, label='raw data')
plt.ylabel('Sigma Value')
plt.xlabel('Time along Trajectory')
plt.legend(loc='best')
plt.show()
fig = plt.figure()
plt.plot(T, np.log(exp(T, *optimised_params_x)), label='fit')
plt.plot(timestep, np.log(sigma_mean), label='raw data')
plt.ylabel('log(Sigma Value)')
plt.xlabel('Time along Trajectory')
plt.legend(loc='best')
plt.show()
if args.NRI:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_NRI_batch(logits, relations,
args.edge_types_list)
else:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_fNRI_batch(logits_split,
relations,
args.edge_types_list)
# plot trajectories and sigma's - use ellipses to plot anisotropy - see draw_lines_anisotropic
from trajectory_plot import draw_lines_anisotropic, draw_lines_sigma_animation
from matplotlib.patches import Rectangle
for i in range(args.batch_size):
# draw_lines_sigma_animation(target, output_plot, i, sigma_plot, vel_plot)
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111)
# ax = fig.add_axes([0, 0, 1, 1])
ax.xaxis.set_visible(True)
ax.yaxis.set_visible(True)
xmin_t, ymin_t, xmax_t, ymax_t = draw_lines_anisotropic(target, i,
sigma_plot.detach().cpu().numpy(),
vel_plot, ax, linestyle=':',
alpha=0.6)
xmin_o, ymin_o, xmax_o, ymax_o = draw_lines_anisotropic(
output_plot.detach().cpu().numpy(), i, sigma_plot.detach().cpu().numpy(),
vel_plot, ax, linestyle='-', plot_ellipses=True)
# ax.set_xlim([min(xmin_t, xmin_o), max(xmax_t, xmax_o)])
# ax.set_ylim([min(ymin_t, ymin_o), max(ymax_t, ymax_o)])
ax.set_xlim([-1,1])
ax.set_ylim([-1,1])
rect = Rectangle((-1, -1), 2, 2, edgecolor='r', facecolor='none')
ax.add_patch(rect)
block_names = ['layer ' + str(j) for j in range(len(args.edge_types_list))]
# block_names = [ 'springs', 'charges' ]
acc_text = [block_names[j] + ' acc: {:02.0f}%'.format(100 * acc_blocks_batch[i, j])
for j in range(acc_blocks_batch.shape[1])]
acc_text = ', '.join(acc_text)
plt.text(0.5, 0.95, acc_text, horizontalalignment='center', transform=ax.transAxes)
ax.xaxis.set_visible(True)
ax.yaxis.set_visible(True)
plt.xlabel('x')
plt.ylabel('y')
# plt.savefig(os.path.join(args.load_folder,str(i)+'_pred_and_true.png'), dpi=300)
plt.show()
# for z score
# make sure we aren't dividing by 0
if (torch.min(sigma_plot) < pow(10, -7)):
accuracy = np.full((sigma_plot.size(0), sigma_plot.size(1), sigma_plot.size(2),
sigma_plot.size(3)), pow(10, -7), dtype=np.float32)
accuracy = torch.from_numpy(accuracy)
if args.cuda:
accuracy = accuracy.cuda()
            sigma_plot = torch.max(sigma_plot, accuracy)
zscore = (output_plot - target) / (sigma_plot)
# select out velocity coords to get direction parallel and perpendicular to velocity- need this
# to find z-score values along these directions
velnorm = vel_plot.norm(p=2, dim=3, keepdim=True)
normalisedvel = vel_plot.div(velnorm.expand_as(vel_plot))
accelnorm = accel_plot.norm(p=2, dim=3, keepdim=True)
normalisedaccel = accel_plot.div(accelnorm.expand_as(accel_plot))
# get perpendicular components to the accelerations and velocities accelperp, velperp
# note in 2D perpendicular vector is just rotation by pi/2 about origin (x,y) -> (-y,x)
rotationmatrix = np.zeros(
(normalisedvel.size(0), normalisedvel.size(1), normalisedvel.size(2), 2, 2),
dtype=np.float32)
for i in range(len(rotationmatrix)):
for j in range(len(rotationmatrix[i])):
for l in range(len(rotationmatrix[i][j])):
rotationmatrix[i][j][l][0][1] = np.float32(-1)
rotationmatrix[i][j][l][1][0] = np.float32(1)
rotationmatrix = torch.from_numpy(rotationmatrix)
if args.cuda:
rotationmatrix = rotationmatrix.cuda()
velperp = torch.matmul(rotationmatrix, normalisedvel.unsqueeze(4))
velperp = velperp.squeeze()
accelperp = torch.matmul(rotationmatrix, normalisedaccel.unsqueeze(4))
accelperp = accelperp.squeeze()
indices = torch.LongTensor([0, 1])
if args.cuda:
indices = indices.cuda()
zscore = torch.index_select(zscore, 3, indices)
zscore_x = torch.matmul(zscore.unsqueeze(3), normalisedvel.unsqueeze(4))
zscore_y = torch.matmul(zscore.unsqueeze(3), velperp.unsqueeze(4))
zscorelist_x.append(zscore_x)
zscorelist_y.append(zscore_y)
return zscorelist_x, zscorelist_y
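    # If the predicted sigmas are well calibrated, the parallel/perpendicular
    # z-scores collected by the two plotting methods above should be roughly
    # standard normal; zscore_plot below fits Gaussian and Lorentzian profiles
    # to the z-score histograms to check exactly that.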
def zscore_plot(self, zscorelist_x, zscorelist_y):
# plots the z-score graphs
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.stats import chisquare
# average over all timesteps
zscorelistintx = np.empty((0))
zscorelistinty = np.empty((0))
for i in range(len(zscorelist_x)):
            zscorelistintx = np.append(zscorelistintx, zscorelist_x[i].detach().cpu().numpy())
            zscorelistinty = np.append(zscorelistinty, zscorelist_y[i].detach().cpu().numpy())
bins = np.arange(-4, 4.1, 0.1)
# get histogram distribution for terms parallel to the velocity
histdatax, bin_edges, patches = plt.hist(zscorelistintx, bins, density=True)
# take the histdata point to be at the centre of the bin_edges:
# Gaussian fit- we expect a good model to give mean = 0 and sigma = 1
# for fit to full graph. NOTE USE histdatax instead of histdatax_new
xcoords = np.empty(len(bin_edges) - 1)
for i in range(len(bin_edges) - 1):
xcoords[i] = (bin_edges[i] + bin_edges[i + 1]) / 2
numberofpoints = len(xcoords)
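# (equivalent vectorized form of the bin-centre loop above:
# xcoords = 0.5 * (bin_edges[:-1] + bin_edges[1:]))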
# for fit to all points except central peak. NOTE USE histdatax_new instead of histdatax
# xcoords = np.empty(0)
# histdatax_new = np.empty(0)
# xcoords_small = np.empty(0)
# histdatax_small = np.empty(0)
# for i in range(len(bin_edges) - 1):
# if (abs(bin_edges[i])>0.5):
# xcoords = np.append(xcoords, [(bin_edges[i] + bin_edges[i + 1])/ 2])
# histdatax_new = np.append(histdatax_new, [histdatax[i]])
# else:
# xcoords_small = np.append(xcoords_small, [(bin_edges[i] + bin_edges[i + 1])/ 2])
# histdatax_small = np.append(histdatax_small, [histdatax[i]])
# numberofpoints = len(xcoords)
# mean is 1/N SUM(xy)
mean_gaussian_x = np.sum(xcoords * histdatax) / numberofpoints
# var = 1/N SUM(y*(x-mean) ** 2)
sigma_x = np.sqrt(np.sum(histdatax * (xcoords - mean_gaussian_x) ** 2) / numberofpoints)
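# note: with density=True the textbook weighted moments would include the bin width,
# e.g. mean = np.sum(xcoords * histdatax) * 0.1 for the 0.1-wide bins used here; the
# cruder per-point averages above only seed curve_fit, so the difference is benign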
# Fit to Gaussian
optimised_params_x, pcov = curve_fit(gaussian, xcoords, histdatax, p0=[1, mean_gaussian_x, sigma_x])
plt.plot(xcoords, gaussian(xcoords, *optimised_params_x), label='fit')
# fit to Lorentzian
optimised_params_lor_x, pcov = curve_fit(lorentzian, xcoords, histdatax,
p0=[1, mean_gaussian_x, sigma_x])
plt.plot(xcoords, lorentzian(xcoords, *optimised_params_lor_x), 'k')
plt.xlabel("z-score")
plt.ylabel("frequency")
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.xlim(-4, 4)
plt.savefig('zscorepartimestep.png')
plt.show()
chi2_gauss, p_1 = chisquare(f_obs=histdatax, f_exp=gaussian(xcoords, *optimised_params_x))
chi2_lor, p_2 = chisquare(f_obs=histdatax, f_exp=lorentzian(xcoords, *optimised_params_lor_x))
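# caveat: scipy.stats.chisquare expects observed/expected counts with matching sums;
# feeding it normalised densities still ranks the two fits against each other, but the
# absolute p-values are not calibrated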
# area_under_gauss = (((gaussian(xcoords, *optimised_params_x) - histdatax) ** 2)/ (gaussian(xcoords, *optimised_params_x))).mean(axis=None)
# area_under_lor = (((lorentzian(xcoords, *optimised_params_lor_x) - histdatax) ** 2)/ (lorentzian(xcoords, *optimised_params_lor_x))).mean(axis= None)
# get histogram distribution for terms perpendicular to velocity
histdatay, bin_edges, patches = plt.hist(zscorelistinty, bins, density=True)
# take the histdata point to be at the centre of the bin_edges:
# Gaussian fit- we expect a good model to give mean = 0 and sigma = 1
## for fit to full graph. NOTE USE histdatay instead of histdatay_new
ycoords = np.empty(len(bin_edges) - 1)
for i in range(len(bin_edges) - 1):
ycoords[i] = (bin_edges[i] + bin_edges[i + 1]) / 2
numberofpoints = len(ycoords)
# for fit to all points except central peak. NOTE USE histdatay_new instead of histdatay
# ycoords = np.empty(0)
# histdatay_new = np.empty(0)
# ycoords_small = np.empty(0)
# histdatay_small = np.empty(0)
# for i in range(len(bin_edges) - 1):
# if (abs(bin_edges[i]) > 0.5):
# ycoords = np.append(ycoords,[(bin_edges[i] + bin_edges[i + 1]) / 2])
# histdatay_new = np.append(histdatay_new,[histdatay[i]])
# else:
# ycoords_small = np.append(ycoords_small, [(bin_edges[i] + bin_edges[i + 1]) / 2])
# histdatay_small = np.append(histdatay_small, [histdatay[i]])
# numberofpoints = len(ycoords)
# mean is 1/N SUM(xy)
mean_gaussian_y = np.sum(ycoords * histdatay) / numberofpoints
# var = 1/N SUM(y*(x-mean) ** 2)
sigma_y = np.sqrt(np.sum(histdatay * (ycoords - mean_gaussian_y) ** 2) / numberofpoints)
# Fit to Gaussian
# optimised_params_y, pcov = curve_fit(gaussian, ycoords, histdatay, p0=[1, mean_gaussian_y, sigma_y])
# plt.plot(ycoords, gaussian(ycoords, *optimised_params_y), label='fit')
# Fit to Lorentzian
# optimised_params_lor_y, pcov = curve_fit(lorentzian, ycoords, histdatay,
# p0=[1, mean_gaussian_y, sigma_y])
# plt.plot(ycoords, lorentzian(ycoords, *optimised_params_lor_y), 'k')
plt.xlabel("z-score")
plt.ylabel("frequency")
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
# plt.title('Timestep = ' + str(j))
plt.xlim(-4, 4)
plt.savefig('zscoreorth.png')
plt.show()
print("Gaussian Fit parallel to vel with mean: " + str(
optimised_params_x[1]) + " and std: " + str(optimised_params_x[2]))
print(
"Lorentzian Fit parallel to vel with mean: " + str(optimised_params_lor_x[1]) + " and std: " + str(
optimised_params_lor_x[2]))
print("Gaussian Fit parallel to vel area between curves: " + str(area_under_gauss) + ". p value: " + str(p_1))
print("Lorentzian Fit parallel to vel area between curves: " + str(area_under_lor) + ". p value: " + str(p_2))
# print(
# "Gaussian Fit perpendicular to vel with mean: " + str(optimised_params_y[1]) + " and std: " + str(
# optimised_params_y[2]))
# print("Lorentzian Fit perpendicular to vel with mean: " + str(
# optimised_params_lor_y[1]) + " and std: " + str(
# optimised_params_lor_y[2]))
# per-timestep z-score plots: uncomment the block below to produce them.
# for j in range(1,len(zscorelist_x[0][0,0])-1):
# zscorelistintx = np.empty((0))
# zscorelistinty = np.empty((0))
# for i in range(len(zscorelist_x)):
# zscorelistintx = np.append(zscorelistintx, zscorelist_x[i][:, :, j, :].numpy())
# zscorelistinty = np.append(zscorelistinty, zscorelist_y[i][:, :, j, :].numpy())
# bins = np.arange(-4, 4.1, 0.1)
# # get histogram distribution
# histdatax, bin_edges, patches = plt.hist(zscorelistintx, bins, density = True)
# take the histdata point to be at the centre of the bin_edges:
# Gaussian fit- we expect a good model to give mean = 0 and sigma = 1
# xcoords = np.empty(len(bin_edges) - 1)
# for i in range(len(bin_edges) - 1):
# xcoords[i] = (bin_edges[i] + bin_edges[i+1]) /2
# numberofpoints = len(xcoords)
# # mean is 1/N SUM(xy)
# mean_gaussian_x = np.sum(xcoords * histdatax) / numberofpoints
# # var = 1/N SUM(y*(x-mean) ** 2)
# sigma_x = np.sqrt(np.sum(histdatax * (xcoords - mean_gaussian_x) ** 2) / numberofpoints)
# optimised_params_x, pcov = curve_fit(gaussian, xcoords, histdatax, p0 = [1, mean_gaussian_x, sigma_x])
# plt.plot(xcoords, gaussian(xcoords, *optimised_params_x), label = 'fit')
# optimised_params_lor_x, pcov = curve_fit(lorentzian, xcoords, histdatax, p0=[1, mean_gaussian_x, sigma_x])
# plt.plot(xcoords, lorentzian(xcoords, *optimised_params_lor_x), 'k')
# plt.xlabel("z-score")
# plt.ylabel("frequency")
# plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
# plt.title('Timestep = ' + str(j))
# plt.xlim(-4, 4)
# plt.savefig('zscorepartimestep' + str(j) + '.png')
# plt.show()
# # get histogram distribution
# histdatay, bin_edges, patches = plt.hist(zscorelistinty, bins, density=True)
#
# # take the histdata point to be at the centre of the bin_edges:
# # Gaussian fit- we expect a good model to give mean = 0 and sigma = 1
# ycoords = np.empty(len(bin_edges) - 1)
# for i in range(len(bin_edges) - 1):
# ycoords[i] = (bin_edges[i] + bin_edges[i + 1]) / 2
# numberofpoints = len(ycoords)
# # mean is 1/N SUM(xy)
# mean_gaussian_y = np.sum(ycoords * histdatay) / numberofpoints
# # var = 1/N SUM(y*(x-mean) ** 2)
# sigma_y = np.sqrt(np.sum(histdatay * (ycoords - mean_gaussian_y) ** 2) / numberofpoints)
# optimised_params_y, pcov = curve_fit(gaussian, ycoords, histdatay, p0=[1, mean_gaussian_y, sigma_y])
# plt.plot(ycoords, gaussian(ycoords, *optimised_params_y), label='fit')
# optimised_params_lor_y, pcov = curve_fit(lorentzian, ycoords, histdatay, p0=[1, mean_gaussian_y, sigma_y])
# plt.plot(ycoords, lorentzian(ycoords, *optimised_params_lor_y), 'k')
# plt.xlabel("z-score")
# plt.ylabel("frequency")
# plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
# plt.title('Timestep = ' + str(j))
# plt.xlim(-4, 4)
# plt.savefig('zscoreorthtimestep' + str(j)+ '.png')
# plt.show()
#
# print('Timestep = ' + str(j) + ". Gaussian Fit parallel to vel with mean: " + str(optimised_params_x[1]) + " and std: " + str(optimised_params_x[2]))
# # print("Lorentzian Fit parallel to vel with mean: " + str(optimised_params_lor_x[1]) + " and std: " + str(optimised_params_lor_x[2]))
# print(
# "Gaussian Fit perpendicular to vel with mean: " + str(optimised_params_y[1]) + " and std: " + str(optimised_params_y[2]))
# # print("Lorentzian Fit perpendicular to vel with mean: " + str(optimised_params_lor_y[1]) + " and std: " + str(
# # optimised_params_lor_y[2]))
class Trainer(Model):
def __init__(self, args, edge_types, log_prior, encoder_file, decoder_file, current_encoder_file, current_decoder_file, log, loss_data, csv_writer, perm_writer, encoder, decoder, optimizer, scheduler):
super(Trainer, self).__init__(args, edge_types, log_prior, encoder_file, decoder_file, current_encoder_file, current_decoder_file, log, loss_data, csv_writer, perm_writer, encoder, decoder, optimizer, scheduler)
# gets the prior for the normal inverse wishart distribution
if args.loss_type.lower() == 'norminvwishart':
# prior for training data
t = time.time()
self.prior_pos_tensor_train = np.empty(0)
self.prior_vel_tensor_train = np.empty(0)
for batch_idx, (data, relations) in enumerate(self.train_loader):
if args.cuda:
data, relations = data.cuda(), relations.cuda()
data, relations = Variable(data), Variable(relations)
data = data.clone()
relations = relations.clone()
indices_pos = torch.LongTensor([0, 1])
indices_vel = torch.LongTensor([2, 3])
if args.cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
data_pos = torch.index_select(data, 3, indices_pos)
data_vel = torch.index_select(data, 3, indices_vel)
sigma_pos = torch.index_select(self.sigma_target, 3, indices_pos)
sigma_vel = torch.index_select(self.sigma_target, 3, indices_vel)
prior_pos = getpriordist(data_pos, sigma_pos, 4)
prior_vel = getpriordist(data_vel, sigma_vel, 4)
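# getpriordist is assumed to return the parameters of a normal-inverse-Wishart prior
# estimated from this batch, with position and velocity coordinates treated separately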
self.prior_pos_tensor_train = np.concatenate((self.prior_pos_tensor_train, [prior_pos]))
self.prior_vel_tensor_train = np.concatenate((self.prior_vel_tensor_train, [prior_vel]))
print('train time: {:.1f}s'.format(time.time() - t))
t = time.time()
# prior for validation data
self.prior_pos_tensor_valid = np.empty(0)
self.prior_vel_tensor_valid = np.empty(0)
for batch_idx, (data, relations) in enumerate(self.valid_loader):
if args.cuda:
data, relations = data.cuda(), relations.cuda()
data, relations = Variable(data), Variable(relations)
data = data.clone()
relations = relations.clone()
indices_pos = torch.LongTensor([0, 1])
indices_vel = torch.LongTensor([2, 3])
if args.cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
data_pos = torch.index_select(data, 3, indices_pos)
data_vel = torch.index_select(data, 3, indices_vel)
sigma_pos = torch.index_select(self.sigma_target, 3, indices_pos)
sigma_vel = torch.index_select(self.sigma_target, 3, indices_vel)
prior_pos = getpriordist(data_pos, sigma_pos, 4)
prior_vel = getpriordist(data_vel, sigma_vel, 4)
self.prior_pos_tensor_valid = np.concatenate((self.prior_pos_tensor_valid, [prior_pos]))
self.prior_vel_tensor_valid = np.concatenate((self.prior_vel_tensor_valid, [prior_vel]))
print('validation time: {:.1f}s'.format(time.time() - t))
def train(self, epoch, best_val_loss, args):
t = time.time()
# train set
nll_train = []
nll_var_train = []
mse_train = []
kl_train = []
kl_list_train = []
kl_var_list_train = []
acc_train = []
acc_var_train = []
perm_train = []
acc_var_blocks_train = []
acc_blocks_train = []
KLb_train = []
KLb_blocks_train = []
# array of loss components
loss_1_array = []
loss_2_array = []
# gets an array of the sigma tensor per run through of the batch
sigmadecoderoutput = []
self.encoder.train()
self.decoder.train()
self.scheduler.step()
if not args.plot:
for batch_idx, (data, relations) in enumerate(self.train_loader): # relations are the ground truth interactions graphs
# tottime = time.time()
if args.cuda:
data, relations = data.cuda(), relations.cuda()
data, relations = Variable(data), Variable(relations)
if args.dont_split_data:
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data[:, :, :args.timesteps, :].contiguous()
elif args.split_enc_only:
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data
else:
# assert (data.size(2) - args.timesteps) >= args.timesteps
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data[:, :, -args.timesteps:, :].contiguous()
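# (default split: the encoder sees the first args.timesteps steps while the
# decoder is trained on the last args.timesteps steps of each trajectory)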
# stores the values of the uncertainty. This will be an array of size [batchsize, no. of particles, time, no. of axes] (isotropic = 1, anisotropic = 4)
# initialise sigma to an array of large negative numbers; under the softplus function these become small positive numbers
sigma = initsigma(len(data_decoder), len(data_decoder[0][0]), args.anisotropic, args.num_atoms, inversesoftplus(pow(args.var, 0.5), args.temp_softplus))
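# (inversesoftplus is assumed to invert the decoder's temperature-scaled softplus so that
# softplus(initial sigma) recovers sqrt(args.var); e.g. for y = t * log(1 + exp(x / t)) the
# inverse would be x = t * log(exp(y / t) - 1). Sketch only; the helper is defined elsewhere.)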
if args.cuda:
sigma = sigma.cuda()
if args.loss_type.lower() == 'semi_isotropic'.lower():
sigma = tile(sigma, 3, 2)
sigma = Variable(sigma)
self.optimizer.zero_grad()
logits = self.encoder(data_encoder, self.rel_rec, self.rel_send)
if args.NRI:
# dim of logits, edges and prob are [batchsize, N^2-N, edgetypes] where N = no. of particles
edges = gumbel_softmax(logits, tau=args.temp, hard=args.hard)
prob = my_softmax(logits, -1)
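# gumbel_softmax above samples edge types from the concrete distribution (soft samples
# when hard=False); my_softmax is the softmax over the edge-type dimension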
loss_kl = kl_categorical_uniform(prob, args.num_atoms, self.edge_types)
loss_kl_split = [loss_kl]
loss_kl_var_split = [kl_categorical_uniform_var(prob, args.num_atoms, self.edge_types)]
KLb_train.append(0)
KLb_blocks_train.append([0])
if args.no_edge_acc:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = 0, np.array([0]), np.zeros(len(args.edge_types_list)), 0, np.zeros(len(args.edge_types_list))
else:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_NRI(logits, relations, args.edge_types_list)
else:
# dim of logits, edges and prob are [batchsize, N^2-N, sum(edge_types_list)] where N = no. of particles
logits_split = torch.split(logits, args.edge_types_list, dim=-1)
edges_split = tuple([gumbel_softmax(logits_i, tau=args.temp, hard=args.hard)
for logits_i in logits_split])
edges = torch.cat(edges_split, dim=-1)
prob_split = [my_softmax(logits_i, -1) for logits_i in logits_split]
if args.prior:
loss_kl_split = [kl_categorical(prob_split[type_idx], self.log_prior[type_idx], args.num_atoms)
for type_idx in range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
else:
loss_kl_split = [kl_categorical_uniform(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
loss_kl_var_split = [kl_categorical_uniform_var(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
if args.no_edge_acc:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = 0, np.array([0]), np.zeros(len(args.edge_types_list)), 0, np.zeros(len(args.edge_types_list))
else:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_fNRI(logits_split, relations,
args.edge_types_list, args.skip_first)
KLb_blocks = KL_between_blocks(prob_split, args.num_atoms)
KLb_train.append(sum(KLb_blocks).data.item())
KLb_blocks_train.append([KL.data.item() for KL in KLb_blocks])
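# KLb tracks the pairwise KL between the per-block edge posteriors; large values
# suggest the factorised blocks are encoding redundant edge information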
# fixed variance train loss calculation
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
if args.loss_type.lower() == 'fixed_var'.lower():
loss_nll, loss_nll_var, output = self.loss_fixed(data_decoder, edges, sigma, args)
# variable variance train loss calculation
elif args.loss_type.lower() == 'anisotropic':
loss_nll, loss_nll_var, output = self.loss_anisotropic(data_decoder, edges, sigma, args)
elif args.loss_type.upper() == 'KL':
loss_nll, loss_nll_var, output = self.loss_KL(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'norminvwishart':
loss_nll, loss_nll_var, output = self.loss_normalinversewishart(data_decoder, edges, sigma, args, batch_idx, 'train')
elif args.loss_type.lower() == 'kalmanfilter':
loss_nll, loss_nll_var, output = self.loss_kalmanfilter(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'isotropic':
loss_nll, loss_nll_var, output = self.loss_isotropic(data_decoder, edges, sigma, args, epoch)
elif args.loss_type.lower() == 'lorentzian':
loss_nll, loss_nll_var, output = self.loss_lorentzian(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'semi_isotropic'.lower():
loss_nll, loss_nll_var, output = self.loss_semi_isotropic(data_decoder, edges, sigma, args, epoch)
elif args.loss_type.lower() == 'ani_convex'.lower():
target = data_decoder[:, :, 1:, :]
if epoch == 0:
vvec = target.clone()
sigma_vec = sigma[:,:,1:,:].clone()
loss_nll, loss_nll_var, output, vvec, sigma_vec = self.loss_anisotropic_withconvex(data_decoder, edges, sigma, args, vvec, sigma_vec)
if args.mse_loss:
loss = F.mse_loss(output, target)
else:
loss = loss_nll
if not math.isclose(args.beta, 0, abs_tol=1e-6):
loss += args.beta * loss_kl
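# i.e. the training objective is the beta-weighted negative ELBO: reconstruction NLL + beta * KL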
perm_train.append(perm)
acc_train.append(acc_perm)
acc_blocks_train.append(acc_blocks)
acc_var_train.append(acc_var)
acc_var_blocks_train.append(acc_var_blocks)
loss.backward()
self.optimizer.step()
mse_train.append(F.mse_loss(output, target).data.item())
nll_train.append(loss_nll.data.item())
kl_train.append(loss_kl.data.item())
kl_list_train.append([kl.data.item() for kl in loss_kl_split])
nll_var_train.append(loss_nll_var.data.item())
kl_var_list_train.append([kl_var.data.item() for kl_var in loss_kl_var_split])
# validation set
nll_val = []
nll_var_val = []
mse_val = []
kl_val = []
kl_list_val = []
kl_var_list_val = []
acc_val = []
acc_var_val = []
acc_blocks_val = []
acc_var_blocks_val = []
perm_val = []
KLb_val = []
KLb_blocks_val = [] # KL between blocks list
nll_M_val = []
nll_M_var_val = []
self.encoder.eval()
self.decoder.eval()
for batch_idx, (data, relations) in enumerate(self.valid_loader):
with torch.no_grad():
if args.cuda:
data, relations = data.cuda(), relations.cuda()
if args.dont_split_data:
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data[:, :, :args.timesteps, :].contiguous()
elif args.split_enc_only:
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data
else:
assert (data.size(2) - args.timesteps) >= args.timesteps
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data[:, :, -args.timesteps:, :].contiguous()
# stores the values of the uncertainty. This will be an array of size [batchsize, no. of particles, time, no. of axes] (isotropic = 1, anisotropic = 4)
# initialise sigma to an array of large negative numbers which become small positive numbers when passed through softplus
sigma = initsigma(len(data_decoder), len(data_decoder[0][0]), args.anisotropic, args.num_atoms, inversesoftplus(pow(args.var, 0.5), args.temp_softplus))
if args.cuda:
sigma = sigma.cuda()
if args.loss_type.lower() == 'semi_isotropic'.lower():
sigma = tile(sigma, 3, 2)
sigma = Variable(sigma)
# dim of logits, edges and prob are [batchsize, N^2-N, sum(edge_types_list)] where N = no. of particles
logits = self.encoder(data_encoder, self.rel_rec, self.rel_send)
if args.NRI:
# dim of logits, edges and prob are [batchsize, N^2-N, edgetypes] where N = no. of particles
edges = gumbel_softmax(logits, tau=args.temp, hard=args.hard) # uses concrete distribution (for hard=False) to sample edge types
prob = my_softmax(logits, -1) # my_softmax returns the softmax over the edgetype dim
loss_kl = kl_categorical_uniform(prob, args.num_atoms, self.edge_types)
loss_kl_split = [loss_kl]
loss_kl_var_split = [kl_categorical_uniform_var(prob, args.num_atoms, self.edge_types)]
KLb_val.append(0)
KLb_blocks_val.append([0])
if args.no_edge_acc:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = 0, np.array([0]), np.zeros(len(args.edge_types_list)), 0, np.zeros(len(args.edge_types_list))
else:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_NRI(logits, relations, args.edge_types_list)
else:
# dim of logits, edges and prob are [batchsize, N^2-N, sum(edge_types_list)] where N = no. of particles
logits_split = torch.split(logits, args.edge_types_list, dim=-1)
edges_split = tuple([gumbel_softmax(logits_i, tau=args.temp, hard=args.hard)
for logits_i in logits_split])
edges = torch.cat(edges_split, dim=-1)
prob_split = [my_softmax(logits_i, -1) for logits_i in logits_split]
if args.prior:
loss_kl_split = [kl_categorical(prob_split[type_idx], self.log_prior[type_idx], args.num_atoms)
for type_idx in range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
else:
loss_kl_split = [kl_categorical_uniform(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
loss_kl_var_split = [kl_categorical_uniform_var(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
if args.no_edge_acc:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = 0, np.array([0]), np.zeros(len(args.edge_types_list)), 0, np.zeros(len(args.edge_types_list))
else:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_fNRI(logits_split, relations,
args.edge_types_list, args.skip_first)
KLb_blocks = KL_between_blocks(prob_split, args.num_atoms)
KLb_val.append(sum(KLb_blocks).data.item())
KLb_blocks_val.append([KL.data.item() for KL in KLb_blocks])
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
# validation loss calculation
if args.loss_type.lower() == 'fixed_var'.lower():
loss_nll, loss_nll_var, output = self.loss_fixed(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_fixed(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'isotropic':
loss_nll, loss_nll_var, output = self.loss_isotropic(data_decoder, edges, sigma, args, epoch, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_isotropic(data_decoder, edges, sigma, args, epoch)
elif args.loss_type.lower() == 'lorentzian':
loss_nll, loss_nll_var, output = self.loss_lorentzian(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_lorentzian(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'semi_isotropic'.lower():
loss_nll, loss_nll_var, output = self.loss_semi_isotropic(data_decoder, edges, sigma, args, epoch, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_semi_isotropic(data_decoder, edges, sigma, args, epoch)
elif args.loss_type.lower() == 'anisotropic':
loss_nll, loss_nll_var, output = self.loss_anisotropic(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_anisotropic(data_decoder, edges, sigma, args)
elif args.loss_type.upper() == 'KL':
loss_nll, loss_nll_var, output = self.loss_KL(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_KL(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'norminvwishart':
loss_nll, loss_nll_var, output = self.loss_normalinversewishart(data_decoder, edges, sigma,
args, batch_idx, 'validation', use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_normalinversewishart(data_decoder, edges, sigma,
args, batch_idx, 'validation')
elif args.loss_type.lower() == 'kalmanfilter':
loss_nll, loss_nll_var, output = self.loss_kalmanfilter(data_decoder, edges, sigma, args,
use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_kalmanfilter(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'ani_convex'.lower():
target = data_decoder[:, :, 1:, :]
if epoch == 0:
vvec_ver = target.clone()
sigma_vec_ver = sigma[:,:,1:,:].clone()
loss_nll, loss_nll_var, output, vvec_ver_n, sigma_vec_ver_n = self.loss_anisotropic_withconvex(data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M, vvec_ver, sigma_vec_ver = self.loss_anisotropic_withconvex(data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver)
perm_val.append(perm)
acc_val.append(acc_perm)
acc_blocks_val.append(acc_blocks)
acc_var_val.append(acc_var)
acc_var_blocks_val.append(acc_var_blocks)
mse_val.append(F.mse_loss(output_M, target).data.item())
nll_val.append(loss_nll.data.item())
nll_var_val.append(loss_nll_var.data.item())
kl_val.append(loss_kl.data.item())
kl_list_val.append([kl_loss.data.item() for kl_loss in loss_kl_split])
kl_var_list_val.append([kl_var.data.item() for kl_var in loss_kl_var_split])
nll_M_val.append(loss_nll_M.data.item())
nll_M_var_val.append(loss_nll_M_var.data.item())
print('Epoch: {:03d}'.format(epoch),
'perm_val: ' + str(np.around(np.mean(np.array(perm_val), axis=0), 4)),
'time: {:.1f}s'.format(time.time() - t))
print('nll_trn: {:.2f}'.format(np.mean(nll_train)),
'kl_trn: {:.5f}'.format(np.mean(kl_train)),
'mse_trn: {:.10f}'.format(np.mean(mse_train)),
'acc_trn: {:.5f}'.format(np.mean(acc_train)),
'KLb_trn: {:.5f}'.format(np.mean(KLb_train))
)
print('acc_b_trn: ' + str(np.around(np.mean(np.array(acc_blocks_train), axis=0), 4)),
'kl_trn: ' + str(np.around(np.mean(np.array(kl_list_train), axis=0), 4))
)
print('nll_val: {:.2f}'.format(np.mean(nll_M_val)),
'kl_val: {:.5f}'.format(np.mean(kl_val)),
'mse_val: {:.10f}'.format(np.mean(mse_val)),
'acc_val: {:.5f}'.format(np.mean(acc_val)),
'KLb_val: {:.5f}'.format(np.mean(KLb_val))
)
print('acc_b_val: ' + str(np.around(np.mean(np.array(acc_blocks_val), axis=0), 4)),
'kl_val: ' + str(np.around(np.mean(np.array(kl_list_val), axis=0), 4))
)
print('Epoch: {:04d}'.format(epoch),
'perm_val: ' + str(np.around(np.mean(np.array(perm_val), axis=0), 4)),
'time: {:.4f}s'.format(time.time() - t),
file=self.log)
print('nll_trn: {:.5f}'.format(np.mean(nll_train)),
'kl_trn: {:.5f}'.format(np.mean(kl_train)),
'mse_trn: {:.10f}'.format(np.mean(mse_train)),
'acc_trn: {:.5f}'.format(np.mean(acc_train)),
'KLb_trn: {:.5f}'.format(np.mean(KLb_train)),
'acc_b_trn: ' + str(np.around(np.mean(np.array(acc_blocks_train), axis=0), 4)),
'kl_trn: ' + str(np.around(np.mean(np.array(kl_list_train), axis=0), 4)),
file=self.log)
print('nll_val: {:.5f}'.format(np.mean(nll_M_val)),
'kl_val: {:.5f}'.format(np.mean(kl_val)),
'mse_val: {:.10f}'.format(np.mean(mse_val)),
'acc_val: {:.5f}'.format(np.mean(acc_val)),
'KLb_val: {:.5f}'.format(np.mean(KLb_val)),
'acc_b_val: ' + str(np.around(np.mean(np.array(acc_blocks_val), axis=0), 4)),
'kl_val: ' + str(np.around(np.mean(np.array(kl_list_val), axis=0), 4)),
file=self.log)
if epoch == 0:
labels = ['epoch', 'nll trn', 'kl trn', 'mse train', 'KLb trn', 'acc trn']
labels += ['b' + str(i) + ' acc trn' for i in range(len(args.edge_types_list))] + ['nll var trn']
labels += ['b' + str(i) + ' kl trn' for i in range(len(kl_list_train[0]))]
labels += ['b' + str(i) + ' kl var trn' for i in range(len(kl_list_train[0]))]
labels += ['acc var trn'] + ['b' + str(i) + ' acc var trn' for i in range(len(args.edge_types_list))]
labels += ['nll val', 'nll_M_val', 'kl val', 'mse val', 'KLb val', 'acc val']
labels += ['b' + str(i) + ' acc val' for i in range(len(args.edge_types_list))]
labels += ['nll var val', 'nll_M var val']
labels += ['b' + str(i) + ' kl val' for i in range(len(kl_list_val[0]))]
labels += ['b' + str(i) + ' kl var val' for i in range(len(kl_list_val[0]))]
labels += ['acc var val'] + ['b' + str(i) + ' acc var val' for i in range(len(args.edge_types_list))]
self.csv_writer.writerow(labels)
labels = ['trn ' + str(i) for i in range(len(perm_train[0]))]
labels += ['val ' + str(i) for i in range(len(perm_val[0]))]
self.perm_writer.writerow(labels)
self.csv_writer.writerow([epoch, np.mean(nll_train), np.mean(kl_train),
np.mean(mse_train), np.mean(KLb_train), np.mean(acc_train)] +
list(np.mean(np.array(acc_blocks_train), axis=0)) +
[np.mean(nll_var_train)] +
list(np.mean(np.array(kl_list_train), axis=0)) +
list(np.mean(np.array(kl_var_list_train), axis=0)) +
# list(np.mean(np.array(KLb_blocks_train),axis=0)) +
[np.mean(acc_var_train)] + list(np.mean(np.array(acc_var_blocks_train), axis=0)) +
[np.mean(nll_val), np.mean(nll_M_val), np.mean(kl_val), np.mean(mse_val),
np.mean(KLb_val), np.mean(acc_val)] +
list(np.mean(np.array(acc_blocks_val), axis=0)) +
[np.mean(nll_var_val), np.mean(nll_M_var_val)] +
list(np.mean(np.array(kl_list_val), axis=0)) +
list(np.mean(np.array(kl_var_list_val), axis=0)) +
# list(np.mean(np.array(KLb_blocks_val),axis=0))
[np.mean(acc_var_val)] + list(np.mean(np.array(acc_var_blocks_val), axis=0))
)
self.perm_writer.writerow(list(np.mean(np.array(perm_train), axis=0)) +
list(np.mean(np.array(perm_val), axis=0))
)
self.log.flush()
# save condn
if args.save_folder and np.mean(nll_M_val) < best_val_loss:
torch.save(self.encoder.state_dict(), self.encoder_file)
torch.save(self.decoder.state_dict(), self.decoder_file)
print('Best model so far, saving...')
# also save the model to a separate "current" file even if it is not the best model so far.
# This is a temporary workaround for bus errors; not ideal, but the best we can do for now.
if args.save_folder:
torch.save(self.encoder.state_dict(), self.current_encoder_file)
torch.save(self.decoder.state_dict(), self.current_decoder_file)
return np.mean(acc_val), np.mean(nll_M_val), np.around(np.mean(np.array(acc_blocks_val), axis=0), 4)
def train_plot(self, epoch, args):
t = time.time()
# validation set
nll_val = []
nll_var_val = []
mse_val = []
kl_val = []
kl_list_val = []
kl_var_list_val = []
acc_val = []
acc_var_val = []
acc_blocks_val = []
acc_var_blocks_val = []
perm_val = []
KLb_val = []
KLb_blocks_val = [] # KL between blocks list
nll_M_val = []
nll_M_var_val = []
# for z-score analysis
zscorelist_x = []
zscorelist_y = []
self.encoder.eval()
self.decoder.eval()
for batch_idx, (data, relations) in enumerate(self.valid_loader):
with torch.no_grad():
if args.cuda:
data, relations = data.cuda(), relations.cuda()
if args.dont_split_data:
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data[:, :, :args.timesteps, :].contiguous()
elif args.split_enc_only:
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data
else:
assert (data.size(2) - args.timesteps) >= args.timesteps
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data[:, :, -args.timesteps:, :].contiguous()
# stores the values of the uncertainty. This will be an array of size [batchsize, no. of particles, time, no. of axes] (isotropic = 1, anisotropic = 4)
# initialise sigma to an array of large negative numbers which become small positive numbers when passed through softplus
sigma = initsigma(len(data_decoder), len(data_decoder[0][0]), args.anisotropic, args.num_atoms,
inversesoftplus(pow(args.var, 0.5), args.temp_softplus))
if args.cuda:
sigma = sigma.cuda()
if args.loss_type.lower() == 'semi_isotropic'.lower():
sigma = tile(sigma, 3, 2)
sigma = Variable(sigma)
# dim of logits, edges and prob are [batchsize, N^2-N, sum(edge_types_list)] where N = no. of particles
logits = self.encoder(data_encoder, self.rel_rec, self.rel_send)
if args.NRI:
# dim of logits, edges and prob are [batchsize, N^2-N, edgetypes] where N = no. of particles
edges = gumbel_softmax(logits, tau=args.temp,
hard=args.hard) # uses concrete distribution (for hard=False) to sample edge types
prob = my_softmax(logits, -1) # my_softmax returns the softmax over the edgetype dim
loss_kl = kl_categorical_uniform(prob, args.num_atoms, self.edge_types)
loss_kl_split = [loss_kl]
loss_kl_var_split = [kl_categorical_uniform_var(prob, args.num_atoms, self.edge_types)]
KLb_val.append(0)
KLb_blocks_val.append([0])
if args.no_edge_acc:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = 0, np.array([0]), np.zeros(
len(args.edge_types_list)), 0, np.zeros(len(args.edge_types_list))
else:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_NRI(logits,
relations,
args.edge_types_list)
else:
# dim of logits, edges and prob are [batchsize, N^2-N, sum(edge_types_list)] where N = no. of particles
logits_split = torch.split(logits, args.edge_types_list, dim=-1)
edges_split = tuple([gumbel_softmax(logits_i, tau=args.temp, hard=args.hard)
for logits_i in logits_split])
edges = torch.cat(edges_split, dim=-1)
prob_split = [my_softmax(logits_i, -1) for logits_i in logits_split]
if args.prior:
loss_kl_split = [kl_categorical(prob_split[type_idx], self.log_prior[type_idx], args.num_atoms)
for type_idx in range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
else:
loss_kl_split = [kl_categorical_uniform(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
loss_kl_var_split = [kl_categorical_uniform_var(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
if args.no_edge_acc:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = 0, np.array([0]), np.zeros(
len(args.edge_types_list)), 0, np.zeros(len(args.edge_types_list))
else:
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_fNRI(logits_split,
relations,
args.edge_types_list,
args.skip_first)
KLb_blocks = KL_between_blocks(prob_split, args.num_atoms)
KLb_val.append(sum(KLb_blocks).data.item())
KLb_blocks_val.append([KL.data.item() for KL in KLb_blocks])
# plotting for fixed variance models
if args.loss_type.lower() == 'fixed_var'.lower():
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
if args.plot:
# for plotting
output_plot, sigma_plot, accel_plot, vel_plot = self.decoder(data_decoder, edges, self.rel_rec,
self.rel_send, sigma, False, False,
args.temp_softplus, 49)
if args.NRI:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_NRI_batch(logits, relations,
args.edge_types_list)
else:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_fNRI_batch(logits_split,
relations,
args.edge_types_list)
self.fixed_var_plot(args, acc_blocks_batch, target, output_plot)
loss_nll, loss_nll_var, output = self.loss_fixed(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_fixed(data_decoder, edges, sigma, args)
# plotting for non-fixed variance models
elif args.loss_type.lower() == 'isotropic':
target = data_decoder[:, :, 1:, :]
zscorelist_x, zscorelist_y = self.isotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x, zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_isotropic(data_decoder, edges, sigma, args, epoch, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_isotropic(data_decoder, edges, sigma, args, epoch)
elif args.loss_type.lower() == 'lorentzian':
target = data_decoder[:, :, 1:, :]
zscorelist_x, zscorelist_y = self.isotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_lorentzian(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_lorentzian(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'semi_isotropic'.lower():
target = data_decoder[:, :, 1:, :]
zscorelist_x, zscorelist_y = self.isotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_semi_isotropic(data_decoder, edges, sigma, args, epoch, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_semi_isotropic(data_decoder, edges, sigma, args, epoch)
elif args.loss_type.lower() == 'anisotropic'.lower():
target = data_decoder[:, :, 1:, :]
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_anisotropic(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_anisotropic(data_decoder, edges, sigma, args)
elif args.loss_type.upper() == 'KL':
target = data_decoder[:, :, 1:, :]
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_KL(data_decoder, edges, sigma, args, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_KL(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'norminvwishart':
target = data_decoder[:, :, 1:, :]
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_normalinversewishart(data_decoder, edges, sigma,
args, batch_idx, 'validation', use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_normalinversewishart(data_decoder, edges, sigma,
args, batch_idx, 'validation')
elif args.loss_type.lower() == 'kalmanfilter':
target = data_decoder[:, :, 1:, :]
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_kalmanfilter(data_decoder, edges, sigma, args,
use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_kalmanfilter(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'ani_convex'.lower():
target = data_decoder[:, :, 1:, :]
if epoch == 0:
vvec_ver = target.clone()
sigma_vec_ver = sigma.clone()
loss_nll, loss_nll_var, output, vvec_ver_n, sigma_vec_ver_n = self.loss_anisotropic_withconvex(data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M, vvec_ver, sigma_vec_ver = self.loss_anisotropic_withconvex(data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver)
perm_val.append(perm)
acc_val.append(acc_perm)
acc_blocks_val.append(acc_blocks)
acc_var_val.append(acc_var)
acc_var_blocks_val.append(acc_var_blocks)
mse_val.append(F.mse_loss(output_M, target).data.item())
nll_val.append(loss_nll.data.item())
nll_var_val.append(loss_nll_var.data.item())
kl_val.append(loss_kl.data.item())
kl_list_val.append([kl_loss.data.item() for kl_loss in loss_kl_split])
kl_var_list_val.append([kl_var.data.item() for kl_var in loss_kl_var_split])
nll_M_val.append(loss_nll_M.data.item())
nll_M_var_val.append(loss_nll_M_var.data.item())
# deal with z-score here - plot zscores and fit gaussian/lorentzian to the histogram plot
if not args.loss_type.lower() == 'fixed_var'.lower():
self.zscore_plot(zscorelist_x, zscorelist_y)
print('Epoch: {:03d}'.format(epoch),
'perm_val: ' + str(np.around(np.mean(np.array(perm_val), axis=0), 4)),
'time: {:.1f}s'.format(time.time() - t))
print('nll_val: {:.2f}'.format(np.mean(nll_M_val)),
'kl_val: {:.5f}'.format(np.mean(kl_val)),
'mse_val: {:.10f}'.format(np.mean(mse_val)),
'acc_val: {:.5f}'.format(np.mean(acc_val)),
'KLb_val: {:.5f}'.format(np.mean(KLb_val))
)
print('acc_b_val: ' + str(np.around(np.mean(np.array(acc_blocks_val), axis=0), 4)),
'kl_val: ' + str(np.around(np.mean(np.array(kl_list_val), axis=0), 4))
)
print('Epoch: {:04d}'.format(epoch),
'perm_val: ' + str(np.around(np.mean(np.array(perm_val), axis=0), 4)),
'time: {:.4f}s'.format(time.time() - t),
file=self.log)
print('nll_val: {:.5f}'.format(np.mean(nll_M_val)),
'kl_val: {:.5f}'.format(np.mean(kl_val)),
'mse_val: {:.10f}'.format(np.mean(mse_val)),
'acc_val: {:.5f}'.format(np.mean(acc_val)),
'KLb_val: {:.5f}'.format(np.mean(KLb_val)),
'acc_b_val: ' + str(np.around(np.mean(np.array(acc_blocks_val), axis=0), 4)),
'kl_val: ' + str(np.around(np.mean(np.array(kl_list_val), axis=0), 4)),
file=self.log)
class Tester(Model):
def __init__(self, args, edge_types, log_prior, encoder_file, decoder_file, current_encoder_file, current_decoder_file, log, loss_data, csv_writer, perm_writer, encoder, decoder, optimizer, scheduler):
super(Tester, self).__init__(args, edge_types, log_prior, encoder_file, decoder_file, current_encoder_file, current_decoder_file, log, loss_data, csv_writer, perm_writer, encoder, decoder, optimizer, scheduler)
# gets the prior for the normal inverse wishart distribution on test data
if args.loss_type.lower() == 'norminvwishart':
# prior for test data
self.prior_pos_tensor_test = np.empty(0)
self.prior_vel_tensor_test = np.empty(0)
t = time.time()
for batch_idx, (data, relations) in enumerate(self.test_loader):
if args.cuda:
data, relations = data.cuda(), relations.cuda()
data, relations = Variable(data), Variable(relations)
data = data.clone()
relations = relations.clone()
indices_pos = torch.LongTensor([0, 1])
indices_vel = torch.LongTensor([2, 3])
if args.cuda:
indices_pos, indices_vel = indices_pos.cuda(), indices_vel.cuda()
data_pos = torch.index_select(data, 3, indices_pos)
data_vel = torch.index_select(data, 3, indices_vel)
sigma_pos = torch.index_select(self.sigma_target, 3, indices_pos)
sigma_vel = torch.index_select(self.sigma_target, 3, indices_vel)
prior_pos = getpriordist(data_pos, sigma_pos, 4)
prior_vel = getpriordist(data_vel, sigma_vel, 4)
self.prior_pos_tensor_test = np.concatenate((self.prior_pos_tensor_test, [prior_pos]))
self.prior_vel_tensor_test = np.concatenate((self.prior_vel_tensor_test, [prior_vel]))
print('test time: {:.1f}s'.format(time.time() - t))
def test(self, args):
# test set
t = time.time()
nll_test = []
nll_var_test = []
mse_1_test = []
mse_10_test = []
mse_20_test = []
kl_test = []
kl_list_test = []
kl_var_list_test = []
acc_test = []
acc_var_test = []
acc_blocks_test = []
acc_var_blocks_test = []
perm_test = []
KLb_test = []
KLb_blocks_test = [] # KL between blocks list
nll_M_test = []
nll_M_var_test = []
# for zscore analysis
zscorelist_x = []
zscorelist_y = []
self.encoder.eval()
self.decoder.eval()
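# reload the best checkpoint saved during training; map_location='cpu' remaps
# CUDA-saved tensors so testing can also run on a CPU-only machine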
if not args.cuda:
self.encoder.load_state_dict(torch.load(self.encoder_file, map_location='cpu'))
self.decoder.load_state_dict(torch.load(self.decoder_file, map_location='cpu'))
else:
self.encoder.load_state_dict(torch.load(self.encoder_file))
self.decoder.load_state_dict(torch.load(self.decoder_file))
for batch_idx, (data, relations) in enumerate(self.test_loader):
with torch.no_grad():
if args.cuda:
data, relations = data.cuda(), relations.cuda()
assert (data.size(2) - args.timesteps) >= args.timesteps
data_encoder = data[:, :, :args.timesteps, :].contiguous()
data_decoder = data[:, :, -args.timesteps:, :].contiguous()
# stores the values of the uncertainty. This will be an array of size [batchsize, no. of particles, time, no. of axes] (isotropic = 1, anisotropic = 4)
# initialise sigma to an array of large negative numbers which become small positive numbers when passed through the softplus function.
sigma = initsigma(len(data_decoder), len(data_decoder[0][0]), args.anisotropic, args.num_atoms,
inversesoftplus(pow(args.var, 0.5), args.temp_softplus))
if args.cuda:
sigma = sigma.cuda()
if args.loss_type.lower() == 'semi_isotropic'.lower():
sigma = tile(sigma, 3, 2)
sigma = Variable(sigma)
# dim of logits, edges and prob are [batchsize, N^2-N, sum(edge_types_list)] where N = no. of particles
logits = self.encoder(data_encoder, self.rel_rec, self.rel_send)
if args.NRI:
edges = gumbel_softmax(logits, tau=args.temp, hard=args.hard)
prob = my_softmax(logits, -1)
loss_kl = kl_categorical_uniform(prob, args.num_atoms, self.edge_types)
loss_kl_split = [loss_kl]
loss_kl_var_split = [kl_categorical_uniform_var(prob, args.num_atoms, self.edge_types)]
KLb_test.append(0)
KLb_blocks_test.append([0])
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_NRI(logits, relations,
args.edge_types_list)
else:
logits_split = torch.split(logits, args.edge_types_list, dim=-1)
edges_split = tuple(
[gumbel_softmax(logits_i, tau=args.temp, hard=args.hard) for logits_i in logits_split])
edges = torch.cat(edges_split, dim=-1)
prob_split = [my_softmax(logits_i, -1) for logits_i in logits_split]
if args.prior:
loss_kl_split = [kl_categorical(prob_split[type_idx], self.log_prior[type_idx],
args.num_atoms) for type_idx in
range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
else:
loss_kl_split = [kl_categorical_uniform(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
loss_kl = sum(loss_kl_split)
loss_kl_var_split = [kl_categorical_uniform_var(prob_split[type_idx], args.num_atoms,
args.edge_types_list[type_idx])
for type_idx in range(len(args.edge_types_list))]
acc_perm, perm, acc_blocks, acc_var, acc_var_blocks = edge_accuracy_perm_fNRI(logits_split,
relations,
args.edge_types_list,
args.skip_first)
KLb_blocks = KL_between_blocks(prob_split, args.num_atoms)
KLb_test.append(sum(KLb_blocks).data.item())
KLb_blocks_test.append([KL.data.item() for KL in KLb_blocks])
epoch = 0
# plotting fixed variance models
if args.loss_type.lower() == 'fixed_var'.lower():
target = data_decoder[:, :, 1:, :] # dimensions are [batch, particle, time, state]
if args.plot:
# for plotting
output_plot, sigma_plot, accel_plot, vel_plot = self.decoder(data_decoder, edges,
self.rel_rec,
self.rel_send, sigma, False,
False,
args.temp_softplus, 49)
if args.NRI:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_NRI_batch(logits, relations,
args.edge_types_list)
else:
acc_batch, perm, acc_blocks_batch = edge_accuracy_perm_fNRI_batch(logits_split,
relations,
args.edge_types_list)
self.fixed_var_plot(args, acc_blocks_batch, target, output_plot)
loss_nll, loss_nll_var, output = self.loss_fixed(data_decoder, edges, sigma, args,
use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_fixed(data_decoder, edges, sigma, args, pred_steps=10,
use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_fixed(data_decoder, edges, sigma, args, pred_steps=20,
use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_fixed(data_decoder, edges, sigma, args)
# plotting for varying-variance models. NOTE: the MSE for 1-, 10- and 20-step predictions is also calculated.
elif args.loss_type.lower() == 'isotropic':
target = data_decoder[:, :, 1:, :]
if args.plot:
zscorelist_x, zscorelist_y = self.isotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_isotropic(data_decoder, edges, sigma, args, epoch,
use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_isotropic(data_decoder, edges, sigma, args, epoch,
pred_steps=10, use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_isotropic(data_decoder, edges, sigma, args, epoch,
pred_steps=20, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_isotropic(data_decoder, edges, sigma, args,
epoch)
elif args.loss_type.lower() == 'lorentzian':
target = data_decoder[:, :, 1:, :]
if args.plot:
zscorelist_x, zscorelist_y = self.isotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_lorentzian(data_decoder, edges, sigma, args,
use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_lorentzian(data_decoder, edges, sigma, args,
pred_steps=10, use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_lorentzian(data_decoder, edges, sigma, args,
pred_steps=20, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_lorentzian(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'semi_isotropic'.lower():
target = data_decoder[:, :, 1:, :]
if args.plot:
zscorelist_x, zscorelist_y = self.isotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target, zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_semi_isotropic(data_decoder, edges, sigma, args,
epoch, use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_semi_isotropic(data_decoder, edges, sigma, args,
epoch, pred_steps=10, use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_semi_isotropic(data_decoder, edges, sigma, args,
epoch, pred_steps=20, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_semi_isotropic(data_decoder, edges, sigma,
args, epoch)
elif args.loss_type.lower() == 'anisotropic'.lower():
target = data_decoder[:, :, 1:, :]
if args.plot:
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target,
zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_anisotropic(data_decoder, edges, sigma, args,
use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_anisotropic(data_decoder, edges, sigma,
args, pred_steps=10, use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_anisotropic(data_decoder, edges, sigma,
args, pred_steps=20, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_anisotropic(data_decoder, edges, sigma, args)
elif args.loss_type.upper() == 'KL':
target = data_decoder[:, :, 1:, :]
if args.plot:
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target,
zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_KL(data_decoder, edges, sigma, args,
use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_KL(data_decoder, edges, sigma, args, pred_steps=10,
use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_KL(data_decoder, edges, sigma,args, pred_steps=20,
use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_KL(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'norminvwishart':
target = data_decoder[:, :, 1:, :]
if args.plot:
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target,
zscorelist_x, zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_normalinversewishart(data_decoder, edges, sigma,
args, batch_idx, 'test',
use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_normalinversewishart(data_decoder, edges, sigma, args, batch_idx, 'test', pred_steps=10,
use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_normalinversewishart(data_decoder, edges, sigma,args, batch_idx, 'test', pred_steps=20,
use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_normalinversewishart(data_decoder, edges,
sigma,
args, batch_idx, 'test')
elif args.loss_type.lower() == 'kalmanfilter':
target = data_decoder[:, :, 1:, :]
if args.plot:
zscorelist_x, zscorelist_y = self.anisotropic_plot(args, data_decoder, edges, sigma, logits,
logits_split, relations, target,
zscorelist_x,
zscorelist_y)
loss_nll, loss_nll_var, output = self.loss_kalmanfilter(data_decoder, edges, sigma, args,
use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10 = self.loss_kalmanfilter(data_decoder, edges,
sigma, args, pred_steps=10 ,use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20 = self.loss_kalmanfilter(data_decoder, edges,
sigma, args, pred_steps=20, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M = self.loss_kalmanfilter(data_decoder, edges, sigma, args)
elif args.loss_type.lower() == 'ani_convex'.lower():
target = data_decoder[:, :, 1:, :]
if epoch == 0:
vvec_ver = target.clone()
sigma_vec_ver = sigma.clone()
loss_nll, loss_nll_var, output, vvec_ver_n, sigma_vec_ver_n = self.loss_anisotropic_withconvex(
data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver, use_onepred=True)
loss_nll_10, loss_nll_var_10, output_10, vvec_ver_n, sigma_vec_ver_n = self.loss_anisotropic_withconvex(
data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver, pred_steps=10, use_onepred=True)
loss_nll_20, loss_nll_var_20, output_20, vvec_ver_n, sigma_vec_ver_n = self.loss_anisotropic_withconvex(
data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver, pred_steps=20, use_onepred=True)
loss_nll_M, loss_nll_M_var, output_M, vvec_ver, sigma_vec_ver = self.loss_anisotropic_withconvex(
data_decoder, edges, sigma, args, vvec_ver, sigma_vec_ver)
perm_test.append(perm)
acc_test.append(acc_perm)
acc_blocks_test.append(acc_blocks)
acc_var_test.append(acc_var)
acc_var_blocks_test.append(acc_var_blocks)
mse_1_test.append(F.mse_loss(output, target).data.item())
mse_10_test.append(F.mse_loss(output_10, target).data.item())
mse_20_test.append(F.mse_loss(output_20, target).data.item())
nll_test.append(loss_nll.data.item())
kl_test.append(loss_kl.data.item())
kl_list_test.append([kl_loss.data.item() for kl_loss in loss_kl_split])
nll_var_test.append(loss_nll_var.data.item())
kl_var_list_test.append([kl_var.data.item() for kl_var in loss_kl_var_split])
nll_M_test.append(loss_nll_M.data.item())
nll_M_var_test.append(loss_nll_M_var.data.item())
# deal with z-score here - plot zscores and fit gaussian/lorentzian to the histogram plot
if not args.loss_type.lower() == 'fixed_var'.lower():
if args.plot:
self.zscore_plot(zscorelist_x, zscorelist_y)
print('--------------------------------')
print('------------Testing-------------')
print('--------------------------------')
print('nll_test: {:.2f}'.format(np.mean(nll_test)),
'nll_M_test: {:.2f}'.format(np.mean(nll_M_test)),
'kl_test: {:.5f}'.format(np.mean(kl_test)),
'mse_1_test: {:.10f}'.format(np.mean(mse_1_test)),
'mse_10_test: {:.10f}'.format(np.mean(mse_10_test)),
'mse_20_test: {:.10f}'.format(np.mean(mse_20_test)),
'acc_test: {:.5f}'.format(np.mean(acc_test)),
'acc_var_test: {:.5f}'.format(np.mean(acc_var_test)),
'KLb_test: {:.5f}'.format(np.mean(KLb_test)),
'time: {:.1f}s'.format(time.time() - t))
print('acc_b_test: ' + str(np.around(np.mean(np.array(acc_blocks_test), axis=0), 4)),
'acc_var_b: ' + str(np.around(np.mean(np.array(acc_var_blocks_test), axis=0), 4)),
'kl_test: ' + str(np.around(np.mean(np.array(kl_list_test), axis=0), 4))
)
if args.save_folder:
print('--------------------------------', file=self.log)
print('------------Testing-------------', file=self.log)
print('--------------------------------', file=self.log)
print('nll_test: {:.2f}'.format(np.mean(nll_test)),
'nll_M_test: {:.2f}'.format(np.mean(nll_M_test)),
'kl_test: {:.5f}'.format(np.mean(kl_test)),
'mse_1_test: {:.10f}'.format(np.mean(mse_1_test)),
'mse_10_test: {:.10f}'.format(np.mean(mse_10_test)),
'mse_20_test: {:.10f}'.format(np.mean(mse_20_test)),
'acc_test: {:.5f}'.format(np.mean(acc_test)),
'acc_var_test: {:.5f}'.format(np.mean(acc_var_test)),
'KLb_test: {:.5f}'.format(np.mean(KLb_test)),
'time: {:.1f}s'.format(time.time() - t),
file=self.log)
print('acc_b_test: ' + str(np.around(np.mean(np.array(acc_blocks_test), axis=0), 4)),
'acc_var_b_test: ' + str(np.around(np.mean(np.array(acc_var_blocks_test), axis=0), 4)),
'kl_test: ' + str(np.around(np.mean(np.array(kl_list_test), axis=0), 4)),
file=self.log)
self.log.flush()
| 63.47507 | 220 | 0.545643 | 13,517 | 113,303 | 4.315159 | 0.04535 | 0.025562 | 0.035112 | 0.038163 | 0.87032 | 0.852164 | 0.827116 | 0.81112 | 0.794044 | 0.781751 | 0 | 0.014461 | 0.350026 | 113,303 | 1,784 | 221 | 63.51065 | 0.777565 | 0.13997 | 0 | 0.676621 | 0 | 0 | 0.030347 | 0.002193 | 0 | 0 | 0 | 0 | 0.002185 | 1 | 0.014567 | false | 0 | 0.01748 | 0 | 0.042972 | 0.024035 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6fd1c908be6c0c70082ec929b4a531f8d57fd17a | 35,282 | py | Python | pybind/slxos/v17r_2_00/mpls_state/transit_traffic_statistics/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | ["Apache-2.0"] | null | null | null | pybind/slxos/v17r_2_00/mpls_state/transit_traffic_statistics/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | ["Apache-2.0"] | null | null | null | pybind/slxos/v17r_2_00/mpls_state/transit_traffic_statistics/__init__.py | extremenetworks/pybind | 44c467e71b2b425be63867aba6e6fa28b2cfe7fb | ["Apache-2.0"] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
class transit_traffic_statistics(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-mpls-operational - based on the path /mpls-state/transit-traffic-statistics. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Transit Traffic Statistics
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__number_of_packets','__number_of_packets_since_clear','__number_of_bytes','__number_of_bytes_since_clear','__packets_per_second','__bytes_per_second','__averaging_interval_seconds','__in_label','__protocol','__statistics_valid',)
_yang_name = 'transit-traffic-statistics'
_rest_name = 'transit-traffic-statistics'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__in_label = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="in-label", rest_name="in-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)
self.__protocol = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'mpls-protocol-none': {'value': 0}, u'mpls-protocol-ldp': {'value': 1}, u'mpls-protocol-rsvp': {'value': 2}},), is_leaf=True, yang_name="protocol", rest_name="protocol", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='mpls-protocol', is_config=False)
self.__number_of_packets = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets", rest_name="number-of-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
self.__packets_per_second = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="packets-per-second", rest_name="packets-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
self.__number_of_bytes = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes", rest_name="number-of-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
self.__number_of_bytes_since_clear = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes-since-clear", rest_name="number-of-bytes-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
self.__averaging_interval_seconds = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="averaging-interval-seconds", rest_name="averaging-interval-seconds", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)
self.__bytes_per_second = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="bytes-per-second", rest_name="bytes-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
self.__number_of_packets_since_clear = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets-since-clear", rest_name="number-of-packets-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
self.__statistics_valid = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="statistics-valid", rest_name="statistics-valid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='boolean', is_config=False)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'mpls-state', u'transit-traffic-statistics']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'mpls-state', u'transit-traffic-statistics']
def _get_number_of_packets(self):
"""
Getter method for number_of_packets, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_packets (uint64)
YANG Description: Total number of packets
"""
return self.__number_of_packets
def _set_number_of_packets(self, v, load=False):
"""
Setter method for number_of_packets, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_packets (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_number_of_packets is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_number_of_packets() directly.
YANG Description: Total number of packets
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets", rest_name="number-of-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """number_of_packets must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets", rest_name="number-of-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)""",
})
self.__number_of_packets = t
if hasattr(self, '_set'):
self._set()
def _unset_number_of_packets(self):
self.__number_of_packets = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets", rest_name="number-of-packets", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
def _get_number_of_packets_since_clear(self):
"""
Getter method for number_of_packets_since_clear, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_packets_since_clear (uint64)
YANG Description: Total number of packets since last clear
"""
return self.__number_of_packets_since_clear
def _set_number_of_packets_since_clear(self, v, load=False):
"""
Setter method for number_of_packets_since_clear, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_packets_since_clear (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_number_of_packets_since_clear is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_number_of_packets_since_clear() directly.
YANG Description: Total number of packets since last clear
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets-since-clear", rest_name="number-of-packets-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """number_of_packets_since_clear must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets-since-clear", rest_name="number-of-packets-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)""",
})
self.__number_of_packets_since_clear = t
if hasattr(self, '_set'):
self._set()
def _unset_number_of_packets_since_clear(self):
self.__number_of_packets_since_clear = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-packets-since-clear", rest_name="number-of-packets-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
def _get_number_of_bytes(self):
"""
Getter method for number_of_bytes, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_bytes (uint64)
YANG Description: Total number of bytes
"""
return self.__number_of_bytes
def _set_number_of_bytes(self, v, load=False):
"""
Setter method for number_of_bytes, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_bytes (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_number_of_bytes is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_number_of_bytes() directly.
YANG Description: Total number of bytes
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes", rest_name="number-of-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """number_of_bytes must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes", rest_name="number-of-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)""",
})
self.__number_of_bytes = t
if hasattr(self, '_set'):
self._set()
def _unset_number_of_bytes(self):
self.__number_of_bytes = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes", rest_name="number-of-bytes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
def _get_number_of_bytes_since_clear(self):
"""
Getter method for number_of_bytes_since_clear, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_bytes_since_clear (uint64)
YANG Description: Total number of bytes since last clear
"""
return self.__number_of_bytes_since_clear
def _set_number_of_bytes_since_clear(self, v, load=False):
"""
Setter method for number_of_bytes_since_clear, mapped from YANG variable /mpls_state/transit_traffic_statistics/number_of_bytes_since_clear (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_number_of_bytes_since_clear is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_number_of_bytes_since_clear() directly.
YANG Description: Total number of bytes since last clear
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes-since-clear", rest_name="number-of-bytes-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """number_of_bytes_since_clear must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes-since-clear", rest_name="number-of-bytes-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)""",
})
self.__number_of_bytes_since_clear = t
if hasattr(self, '_set'):
self._set()
def _unset_number_of_bytes_since_clear(self):
self.__number_of_bytes_since_clear = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="number-of-bytes-since-clear", rest_name="number-of-bytes-since-clear", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
def _get_packets_per_second(self):
"""
Getter method for packets_per_second, mapped from YANG variable /mpls_state/transit_traffic_statistics/packets_per_second (uint64)
YANG Description: Packets per second
"""
return self.__packets_per_second
def _set_packets_per_second(self, v, load=False):
"""
Setter method for packets_per_second, mapped from YANG variable /mpls_state/transit_traffic_statistics/packets_per_second (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_packets_per_second is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_packets_per_second() directly.
YANG Description: Packets per second
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="packets-per-second", rest_name="packets-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """packets_per_second must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="packets-per-second", rest_name="packets-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)""",
})
self.__packets_per_second = t
if hasattr(self, '_set'):
self._set()
def _unset_packets_per_second(self):
self.__packets_per_second = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="packets-per-second", rest_name="packets-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
def _get_bytes_per_second(self):
"""
Getter method for bytes_per_second, mapped from YANG variable /mpls_state/transit_traffic_statistics/bytes_per_second (uint64)
YANG Description: Bytes per second
"""
return self.__bytes_per_second
def _set_bytes_per_second(self, v, load=False):
"""
Setter method for bytes_per_second, mapped from YANG variable /mpls_state/transit_traffic_statistics/bytes_per_second (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_bytes_per_second is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_bytes_per_second() directly.
YANG Description: Bytes per second
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="bytes-per-second", rest_name="bytes-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """bytes_per_second must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="bytes-per-second", rest_name="bytes-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)""",
})
self.__bytes_per_second = t
if hasattr(self, '_set'):
self._set()
def _unset_bytes_per_second(self):
self.__bytes_per_second = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="bytes-per-second", rest_name="bytes-per-second", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint64', is_config=False)
def _get_averaging_interval_seconds(self):
"""
Getter method for averaging_interval_seconds, mapped from YANG variable /mpls_state/transit_traffic_statistics/averaging_interval_seconds (uint32)
YANG Description: Averaging Interval
"""
return self.__averaging_interval_seconds
def _set_averaging_interval_seconds(self, v, load=False):
"""
Setter method for averaging_interval_seconds, mapped from YANG variable /mpls_state/transit_traffic_statistics/averaging_interval_seconds (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_averaging_interval_seconds is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_averaging_interval_seconds() directly.
YANG Description: Averaging Interval
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="averaging-interval-seconds", rest_name="averaging-interval-seconds", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """averaging_interval_seconds must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="averaging-interval-seconds", rest_name="averaging-interval-seconds", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)""",
})
self.__averaging_interval_seconds = t
if hasattr(self, '_set'):
self._set()
def _unset_averaging_interval_seconds(self):
self.__averaging_interval_seconds = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="averaging-interval-seconds", rest_name="averaging-interval-seconds", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)
def _get_in_label(self):
"""
Getter method for in_label, mapped from YANG variable /mpls_state/transit_traffic_statistics/in_label (uint32)
YANG Description: In Label
"""
return self.__in_label
def _set_in_label(self, v, load=False):
"""
Setter method for in_label, mapped from YANG variable /mpls_state/transit_traffic_statistics/in_label (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_in_label is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_in_label() directly.
YANG Description: In Label
"""
parent = getattr(self, "_parent", None)
if parent is not None and load is False:
raise AttributeError("Cannot set keys directly when" +
" within an instantiated list")
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="in-label", rest_name="in-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """in_label must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="in-label", rest_name="in-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)""",
})
self.__in_label = t
if hasattr(self, '_set'):
self._set()
def _unset_in_label(self):
self.__in_label = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="in-label", rest_name="in-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, is_keyval=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='uint32', is_config=False)
def _get_protocol(self):
"""
Getter method for protocol, mapped from YANG variable /mpls_state/transit_traffic_statistics/protocol (mpls-protocol)
YANG Description: MPLS protocol
"""
return self.__protocol
def _set_protocol(self, v, load=False):
"""
Setter method for protocol, mapped from YANG variable /mpls_state/transit_traffic_statistics/protocol (mpls-protocol)
If this variable is read-only (config: false) in the
source YANG file, then _set_protocol is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_protocol() directly.
YANG Description: MPLS protocol
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'mpls-protocol-none': {'value': 0}, u'mpls-protocol-ldp': {'value': 1}, u'mpls-protocol-rsvp': {'value': 2}},), is_leaf=True, yang_name="protocol", rest_name="protocol", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='mpls-protocol', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """protocol must be of a type compatible with mpls-protocol""",
'defined-type': "brocade-mpls-operational:mpls-protocol",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'mpls-protocol-none': {'value': 0}, u'mpls-protocol-ldp': {'value': 1}, u'mpls-protocol-rsvp': {'value': 2}},), is_leaf=True, yang_name="protocol", rest_name="protocol", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='mpls-protocol', is_config=False)""",
})
self.__protocol = t
if hasattr(self, '_set'):
self._set()
def _unset_protocol(self):
self.__protocol = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'mpls-protocol-none': {'value': 0}, u'mpls-protocol-ldp': {'value': 1}, u'mpls-protocol-rsvp': {'value': 2}},), is_leaf=True, yang_name="protocol", rest_name="protocol", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='mpls-protocol', is_config=False)
def _get_statistics_valid(self):
"""
Getter method for statistics_valid, mapped from YANG variable /mpls_state/transit_traffic_statistics/statistics_valid (boolean)
YANG Description: Statistics are valid
"""
return self.__statistics_valid
def _set_statistics_valid(self, v, load=False):
"""
Setter method for statistics_valid, mapped from YANG variable /mpls_state/transit_traffic_statistics/statistics_valid (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_statistics_valid is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_statistics_valid() directly.
YANG Description: Statistics are valid
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="statistics-valid", rest_name="statistics-valid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='boolean', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """statistics_valid must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="statistics-valid", rest_name="statistics-valid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='boolean', is_config=False)""",
})
self.__statistics_valid = t
if hasattr(self, '_set'):
self._set()
def _unset_statistics_valid(self):
self.__statistics_valid = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="statistics-valid", rest_name="statistics-valid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='boolean', is_config=False)
number_of_packets = __builtin__.property(_get_number_of_packets)
number_of_packets_since_clear = __builtin__.property(_get_number_of_packets_since_clear)
number_of_bytes = __builtin__.property(_get_number_of_bytes)
number_of_bytes_since_clear = __builtin__.property(_get_number_of_bytes_since_clear)
packets_per_second = __builtin__.property(_get_packets_per_second)
bytes_per_second = __builtin__.property(_get_bytes_per_second)
averaging_interval_seconds = __builtin__.property(_get_averaging_interval_seconds)
in_label = __builtin__.property(_get_in_label)
protocol = __builtin__.property(_get_protocol)
statistics_valid = __builtin__.property(_get_statistics_valid)
_pyangbind_elements = {'number_of_packets': number_of_packets, 'number_of_packets_since_clear': number_of_packets_since_clear, 'number_of_bytes': number_of_bytes, 'number_of_bytes_since_clear': number_of_bytes_since_clear, 'packets_per_second': packets_per_second, 'bytes_per_second': bytes_per_second, 'averaging_interval_seconds': averaging_interval_seconds, 'in_label': in_label, 'protocol': protocol, 'statistics_valid': statistics_valid, }
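# --- Hedged usage sketch (illustrative; not emitted by the PYANG plugin). ---
# All leaves here are config:false, so backends populate them through the
# private setters while consumers read them back via the generated properties.
if __name__ == "__main__":
    stats = transit_traffic_statistics()
    stats._set_number_of_packets(12345)   # backend-side population
    stats._set_statistics_valid(True)
    assert stats.number_of_packets == 12345
    assert stats._path() == [u'mpls-state', u'transit-traffic-statistics']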
| 72.746392 | 621 | 0.751346 | 4,678 | 35,282 | 5.368961 | 0.043181 | 0.036949 | 0.046823 | 0.045708 | 0.887323 | 0.858417 | 0.841057 | 0.825768 | 0.809763 | 0.79519 | 0 | 0.026051 | 0.12743 | 35,282 | 484 | 622 | 72.896694 | 0.789775 | 0.177087 | 0 | 0.477941 | 0 | 0.036765 | 0.373081 | 0.23199 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121324 | false | 0 | 0.029412 | 0 | 0.264706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6ffc4da15e0025892e43e4f3c07dd18b2566c3a5 | 43 | py | Python | quel/sort.py | eppingere/hackcmu18-backend | 696c050c4ce5acdf49aeaeeaded730a33443f5bd | [
"MIT"
] | 2 | 2018-09-22T00:18:06.000Z | 2018-09-23T04:49:29.000Z | quel/sort.py | eppingere/hackcmu18-backend | 696c050c4ce5acdf49aeaeeaded730a33443f5bd | [
"MIT"
] | null | null | null | quel/sort.py | eppingere/hackcmu18-backend | 696c050c4ce5acdf49aeaeeaded730a33443f5bd | [
"MIT"
] | null | null | null | def sort_a_list(xs):
return sorted(xs)
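# Usage sketch (illustrative): the wrapper simply delegates to the built-in
# sorted(), so it returns a new list and leaves the input list untouched.
if __name__ == "__main__":
    assert sort_a_list([3, 1, 2]) == [1, 2, 3]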
| 14.333333 | 21 | 0.697674 | 8 | 43 | 3.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 43 | 2 | 22 | 21.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
d264700d4866f36751930a2bbd7fb7604dcb757f | 44 | py | Python | Network Automation/Router/__init__.py | kuhakuu04/Network_Automation | f3eb99943e569f3311233f437ea17cd1862e3dc9 | [
"Apache-2.0"
] | null | null | null | Network Automation/Router/__init__.py | kuhakuu04/Network_Automation | f3eb99943e569f3311233f437ea17cd1862e3dc9 | [
"Apache-2.0"
] | null | null | null | Network Automation/Router/__init__.py | kuhakuu04/Network_Automation | f3eb99943e569f3311233f437ea17cd1862e3dc9 | [
"Apache-2.0"
] | null | null | null | from .Mikrotik import *
from .Cisco import * | 22 | 23 | 0.75 | 6 | 44 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 2 | 24 | 22 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d2694b9724a3b46e8859d0a0478f55a5f9278566 | 5,968 | py | Python | tests/test_cmd_execution.py | maximium/ffmpy | 40737b1ce2251914e8fb6b67e1b0ded274997a4f | [
"MIT"
] | null | null | null | tests/test_cmd_execution.py | maximium/ffmpy | 40737b1ce2251914e8fb6b67e1b0ded274997a4f | [
"MIT"
] | null | null | null | tests/test_cmd_execution.py | maximium/ffmpy | 40737b1ce2251914e8fb6b67e1b0ded274997a4f | [
"MIT"
] | null | null | null | import os
import subprocess
import threading
import time
import pytest
from ffmpy import FFExecutableNotFoundError, FFmpeg, FFRuntimeError
def test_invalid_executable_path():
ff = FFmpeg(executable="/tmp/foo/bar/ffmpeg")
with pytest.raises(FFExecutableNotFoundError) as exc_info:
ff.run()
assert str(exc_info.value) == "Executable '/tmp/foo/bar/ffmpeg' not found"
def test_no_redirection():
global_options = "--stdin none --stdout oneline --stderr multiline --exit-code 0"
ff = FFmpeg(global_options=global_options)
stdout, stderr = ff.run()
assert stdout is None
assert stderr is None
def test_redirect_to_devnull():
global_options = "--stdin none --stdout oneline --stderr multiline --exit-code 0"
ff = FFmpeg(global_options=global_options)
devnull = open(os.devnull, "wb")
stdout, stderr = ff.run(stdout=devnull, stderr=devnull)
assert stdout is None
assert stderr is None
def test_redirect_to_pipe():
global_options = "--stdin none --stdout oneline --stderr multiline --exit-code 0"
ff = FFmpeg(global_options=global_options)
stdout, stderr = ff.run(stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert stdout == b"This is printed to stdout"
assert stderr == b"These are\nmultiple lines\nprinted to stderr"
def test_input():
global_options = "--stdin pipe --stdout oneline --stderr multiline --exit-code 0"
ff = FFmpeg(global_options=global_options)
stdout, stderr = ff.run(
input_data=b"my input data", stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
assert stdout == b"my input data\nThis is printed to stdout"
assert stderr == b"These are\nmultiple lines\nprinted to stderr"
def test_non_zero_exitcode():
global_options = "--stdin none --stdout multiline --stderr multiline --exit-code 42"
ff = FFmpeg(global_options=global_options)
with pytest.raises(FFRuntimeError) as exc_info:
ff.run(stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert exc_info.value.cmd == (
"ffmpeg --stdin none --stdout multiline --stderr multiline --exit-code 42"
)
assert exc_info.value.exit_code == 42
assert exc_info.value.stdout == b"These are\nmultiple lines\nprinted to stdout"
assert exc_info.value.stderr == b"These are\nmultiple lines\nprinted to stderr"
assert str(exc_info.value) == (
"`ffmpeg --stdin none --stdout multiline --stderr multiline --exit-code 42` "
"exited with status 42\n\n"
"STDOUT:\n"
"These are\n"
"multiple lines\n"
"printed to stdout\n\n"
"STDERR:\n"
"These are\n"
"multiple lines\n"
"printed to stderr"
)
def test_non_zero_exitcode_no_stderr():
global_options = "--stdin none --stdout multiline --stderr none --exit-code 42"
ff = FFmpeg(global_options=global_options)
with pytest.raises(FFRuntimeError) as exc_info:
ff.run(stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert exc_info.value.cmd == (
"ffmpeg --stdin none --stdout multiline --stderr none --exit-code 42"
)
assert exc_info.value.exit_code == 42
assert exc_info.value.stdout == b"These are\nmultiple lines\nprinted to stdout"
assert exc_info.value.stderr == b""
assert str(exc_info.value) == (
"`ffmpeg --stdin none --stdout multiline --stderr none --exit-code 42` "
"exited with status 42\n\n"
"STDOUT:\n"
"These are\n"
"multiple lines\n"
"printed to stdout\n\n"
"STDERR:\n"
)
def test_non_zero_exitcode_no_stdout():
global_options = "--stdin none --stdout none --stderr multiline --exit-code 42"
ff = FFmpeg(global_options=global_options)
with pytest.raises(FFRuntimeError) as exc_info:
ff.run(stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert exc_info.value.cmd == (
"ffmpeg --stdin none --stdout none --stderr multiline --exit-code 42"
)
assert exc_info.value.exit_code == 42
assert exc_info.value.stdout == b""
assert exc_info.value.stderr == b"These are\nmultiple lines\nprinted to stderr"
assert str(exc_info.value) == (
"`ffmpeg --stdin none --stdout none --stderr multiline --exit-code 42` "
"exited with status 42\n\n"
"STDOUT:\n"
"\n\n"
"STDERR:\n"
"These are\n"
"multiple lines\n"
"printed to stderr"
)
def test_non_zero_exitcode_no_stdout_and_stderr():
global_options = "--stdin none --stdout none --stderr none --exit-code 42"
ff = FFmpeg(global_options=global_options)
with pytest.raises(FFRuntimeError) as exc_info:
ff.run(stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert exc_info.value.cmd == (
"ffmpeg --stdin none --stdout none --stderr none --exit-code 42"
)
assert exc_info.value.exit_code == 42
assert exc_info.value.stdout == b""
assert exc_info.value.stderr == b""
assert str(exc_info.value) == (
"`ffmpeg --stdin none --stdout none --stderr none --exit-code 42` "
"exited with status 42\n\n"
"STDOUT:\n"
"\n\n"
"STDERR:\n"
)
def test_raise_exception_with_stdout_stderr_none():
global_options = "--stdin none --stdout none --stderr none --exit-code 42"
ff = FFmpeg(global_options=global_options)
with pytest.raises(FFRuntimeError) as exc_info:
ff.run()
assert str(exc_info.value) == (
"`ffmpeg --stdin none --stdout none --stderr none --exit-code 42` "
"exited with status 42\n\n"
"STDOUT:\n"
"\n\n"
"STDERR:\n"
)
def test_terminate_process():
    global_options = "--long-run"
    ff = FFmpeg(global_options=global_options)
    # Run the long-lived command on a worker thread so the main thread can
    # reach in and terminate the underlying subprocess.
    thread_1 = threading.Thread(target=ff.run)
    thread_1.start()
    # Busy-wait until FFmpeg.run() has actually spawned the subprocess.
    while not ff.process:
        time.sleep(0.05)
    ff.process.terminate()
    thread_1.join()
    # SIGTERM surfaces as a negative return code (-15) on POSIX.
    assert ff.process.returncode == -15
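# --- Hedged sketch (not one of the original tests): the real-world call shape
# these tests exercise with a fake binary. File names here are placeholders.
def transcode_example():
    ff = FFmpeg(
        inputs={"input.mp4": None},                 # input path, no input flags
        outputs={"output.webm": "-c:v libvpx -y"},  # per-output option string
    )
    print(ff.cmd)  # inspect the composed command line before executing
    # ff.run()     # would spawn the actual ffmpeg process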
| 33.52809 | 88 | 0.665214 | 804 | 5,968 | 4.80597 | 0.110697 | 0.100932 | 0.068323 | 0.074534 | 0.851449 | 0.832557 | 0.820652 | 0.79736 | 0.79736 | 0.752588 | 0 | 0.012372 | 0.214477 | 5,968 | 177 | 89 | 33.717514 | 0.81186 | 0 | 0 | 0.594406 | 0 | 0 | 0.332105 | 0.003519 | 0 | 0 | 0 | 0 | 0.216783 | 1 | 0.076923 | false | 0 | 0.041958 | 0 | 0.118881 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d27bf89e6938087054172df20a8c99c634e7a89d | 13,396 | py | Python | koapy/cli/utils/grpc_options.py | webclinic017/koapy | 0cdbfac6a10c70e83df800a3a4362872b8792aba | [
"MIT"
] | null | null | null | koapy/cli/utils/grpc_options.py | webclinic017/koapy | 0cdbfac6a10c70e83df800a3a4362872b8792aba | [
"MIT"
] | null | null | null | koapy/cli/utils/grpc_options.py | webclinic017/koapy | 0cdbfac6a10c70e83df800a3a4362872b8792aba | [
"MIT"
] | null | null | null | import click
from koapy.cli.extensions.functools import update_wrapper_with_click_params
from koapy.cli.extensions.parser import ClickArgumentParser
from koapy.cli.utils.verbose_option import full_verbose_option
from koapy.config import config
# grpc configs for default values
grpc_config = config.get("koapy.backend.kiwoom_open_api_plus.grpc")
default_server_bind_address = (
grpc_config.get("server.bind_address")
or grpc_config.get("server.host")
or grpc_config.get("host")
)
default_server_host = default_server_bind_address
default_server_port = grpc_config.get("server.port") or grpc_config.get("port")
default_client_host = grpc_config.get("client.host") or grpc_config.get("host")
default_client_port = grpc_config.get("client.port") or grpc_config.get("port")
default_common_host = grpc_config.get("host") or grpc_config.get("client.host")
default_common_port = (
grpc_config.get("port")
or grpc_config.get("server.port")
or grpc_config.get("client.port")
)
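# The server/client defaults above resolve the most specific config key first
# (e.g. "server.port") before falling back to the shared "host"/"port"
# entries; the "common" defaults invert this and prefer the shared keys.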
# server specific options
server_bind_address_option = click.option(
"--bind-address",
"--host",
metavar="ADDRESS",
help="Host address of gRPC server to bind.",
default=default_server_bind_address,
show_default=True,
)
server_port_option = click.option(
"--port",
metavar="PORT",
type=int,
help="Port number of gRPC server to bind.",
default=default_server_port,
show_default=True,
)
server_key_file_option = click.option(
"--key-file",
type=click.Path(),
help="PEM encoded private key file for server SSL/TLS.",
)
server_cert_file_option = click.option(
"--cert-file",
type=click.Path(),
help="PEM encoded certificate chain file for server SSL/TLS.",
)
server_root_certs_file_option = click.option(
"--root-certs-file",
type=click.Path(),
help="""
PEM encoded client root certificates file for client authentication.
Assumes --require-client-auth flag is set if this option is given,
unless --no-require-client-auth flag is set explicitly.
""",
)
server_require_client_auth_option = click.option(
"--require-client-auth",
help="Require clients to be authenticated, root certificates are required.",
is_flag=True,
)
server_no_require_client_auth_option = click.option(
"--no-require-client-auth",
help="Force not to require clients to be authenticated, even if root certificates are given.",
is_flag=True,
)
server_options = [
server_bind_address_option,
server_port_option,
server_key_file_option,
server_cert_file_option,
server_root_certs_file_option,
server_require_client_auth_option,
server_no_require_client_auth_option,
]
# client specific options
client_host_option = click.option(
"--host",
metavar="ADDRESS",
help="Host address of gRPC server to connect.",
default=default_client_host,
show_default=True,
)
client_port_option = click.option(
"--port",
metavar="PORT",
type=int,
help="Port number of gRPC server to connect.",
default=default_client_port,
show_default=True,
)
client_enable_ssl_option = click.option(
"--enable-ssl",
help="""
Enable SSL/TLS for gRPC connection.
If --root-certs-file option is not given,
will retrieve them from a default location chosen by gRPC runtime.
""",
is_flag=True,
)
client_root_certs_file_option = click.option(
"--root-certs-file",
type=click.Path(),
help="PEM encoded root certificates file for SSL/TLS.",
)
client_key_file_option = click.option(
"--key-file",
type=click.Path(),
help="PEM encoded private key file for client authentication.",
)
client_cert_file_option = click.option(
"--cert-file",
type=click.Path(),
help="PEM encoded certificate chain file for client authentication.",
)
client_options = [
client_host_option,
client_port_option,
client_enable_ssl_option,
client_root_certs_file_option,
client_key_file_option,
client_cert_file_option,
]
# server and client options (resolving option conflicts)
server_and_client_bind_address_option = click.option(
"--bind-address",
"--server-host",
metavar="ADDRESS",
help="Host address of gRPC server to bind.",
default=default_server_bind_address,
show_default=True,
)
server_and_client_host_option = click.option(
"--host",
"--client-host",
metavar="ADDRESS",
help="Host address of gRPC server to connect.",
default=default_client_host,
show_default=True,
)
server_and_client_port_option = click.option(
"--port",
metavar="PORT",
type=int,
help="Port number of gRPC server to bind and connect.",
default=default_common_port,
show_default=True,
)
server_and_client_enable_ssl_option = click.option(
"--enable-ssl",
help="""
Enable SSL/TLS for gRPC connection.
If --client-root-certs-file option is not given,
will retrieve them from a default location chosen by gRPC runtime.
""",
is_flag=True,
show_default=True,
)
server_and_client_server_key_file_option = click.option(
"--server-key-file",
type=click.Path(),
help="PEM encoded private key file for server SSL/TLS.",
)
server_and_client_server_cert_file_option = click.option(
"--server-cert-file",
type=click.Path(),
help="PEM encoded certificate chain file for server SSL/TLS.",
)
server_and_client_server_root_certs_file_option = click.option(
"--server-root-certs-file",
type=click.Path(),
help="""
PEM encoded client root certificates file for client authentication.
Assumes --require-client-auth flag is set if this option is given,
unless --no-require-client-auth flag is set explicitly.
""",
)
server_and_client_require_client_auth_option = click.option(
"--require-client-auth",
help="Require clients to be authenticated, root certificates are required.",
is_flag=True,
)
server_and_client_no_require_client_auth_option = click.option(
"--no-require-client-auth",
help="Foce not to require clients to be authenticated, even if root certificates are given.",
is_flag=True,
)
server_and_client_client_root_certs_file_option = click.option(
"--client-root-certs-file",
type=click.Path(),
help="PEM encoded root certificates file for SSL/TLS.",
)
server_and_client_client_key_file_option = click.option(
"--client-key-file",
type=click.Path(),
help="PEM encoded private key file for client authentication.",
)
server_and_client_client_cert_file_option = click.option(
"--client-cert-file",
type=click.Path(),
help="PEM encoded certificate chain file for client authentication.",
)
server_and_client_options = [
server_and_client_bind_address_option,
server_and_client_host_option,
server_and_client_port_option,
server_and_client_enable_ssl_option,
server_and_client_server_key_file_option,
server_and_client_server_cert_file_option,
server_and_client_server_root_certs_file_option,
server_and_client_require_client_auth_option,
server_and_client_no_require_client_auth_option,
server_and_client_client_root_certs_file_option,
server_and_client_client_key_file_option,
server_and_client_client_cert_file_option,
]
def grpc_server_options():
def decorator(f):
@click.pass_context
@server_bind_address_option
@server_port_option
@server_key_file_option
@server_cert_file_option
@server_root_certs_file_option
@server_require_client_auth_option
@server_no_require_client_auth_option
def new_func(ctx: click.Context, *args, **kwargs):
key_file = kwargs.get("key_file")
cert_file = kwargs.get("cert_file")
root_certs_file = kwargs.get("root_certs_file")
require_client_auth = kwargs.get("require_client_auth")
no_require_client_auth = kwargs.pop("no_require_client_auth")
# both --key-file and --cert-file should be given
if bool(key_file) != bool(cert_file):
ctx.fail("both --key-file and --cert-file should be given.")
# assume --require-client-auth flag is set if --root-certs-file is given
if root_certs_file is not None:
require_client_auth = True
kwargs["require_client_auth"] = require_client_auth
# value of --require-client-auth flag should be false if --no-require-client-auth flag is set
if no_require_client_auth:
require_client_auth = False
kwargs["require_client_auth"] = require_client_auth
# --require-client-auth flag is set but no --root-certs-file was given
if require_client_auth and root_certs_file is None:
ctx.fail(
"--require-client-auth flag is set but no --root-certs-file was given."
)
return ctx.invoke(f, *args, **kwargs)
return update_wrapper_with_click_params(new_func, f)
return decorator
def grpc_client_options():
def decorator(f):
@click.pass_context
@client_host_option
@client_port_option
@client_enable_ssl_option
@client_root_certs_file_option
@client_key_file_option
@client_cert_file_option
def new_func(ctx: click.Context, *args, **kwargs):
enable_ssl = kwargs.get("enable_ssl")
root_certs_file = kwargs.get("root_certs_file")
key_file = kwargs.get("key_file")
cert_file = kwargs.get("cert_file")
# assume --enable-ssl flag is set if --root-certs-file is given
if root_certs_file is not None:
enable_ssl = True
kwargs["enable_ssl"] = enable_ssl
# both --key-file and --cert-file should be given
if bool(key_file) != bool(cert_file):
ctx.fail("both --key-file and --cert-file should be given.")
return ctx.invoke(f, *args, **kwargs)
return update_wrapper_with_click_params(new_func, f)
return decorator
def grpc_server_and_client_options():
def decorator(f):
@click.pass_context
@server_and_client_bind_address_option
@server_and_client_host_option
@server_and_client_port_option
@server_and_client_enable_ssl_option
@server_and_client_server_key_file_option
@server_and_client_server_cert_file_option
@server_and_client_server_root_certs_file_option
@server_and_client_require_client_auth_option
@server_and_client_no_require_client_auth_option
@server_and_client_client_root_certs_file_option
@server_and_client_client_key_file_option
@server_and_client_client_cert_file_option
def new_func(ctx: click.Context, *args, **kwargs):
# server related options and logics
key_file = kwargs.get("server_key_file")
cert_file = kwargs.get("server_cert_file")
root_certs_file = kwargs.get("server_root_certs_file")
require_client_auth = kwargs.get("require_client_auth")
no_require_client_auth = kwargs.pop("no_require_client_auth")
# both --key-file and --cert-file should be given
if bool(key_file) != bool(cert_file):
ctx.fail(
"both --server-key-file and --server-cert-file should be given."
)
# assume --require-client-auth flag is set if --root-certs-file is given
if root_certs_file is not None:
require_client_auth = True
kwargs["require_client_auth"] = require_client_auth
# value of --require-client-auth flag should be false if --no-require-client-auth flag is set
if no_require_client_auth:
require_client_auth = False
kwargs["require_client_auth"] = require_client_auth
# --require-client-auth flag is set but no --root-certs-file was given
if require_client_auth and root_certs_file is None:
ctx.fail(
"--require-client-auth flag is set but no --server-root-certs-file was given."
)
# client related options and logics
enable_ssl = kwargs.get("enable_ssl")
root_certs_file = kwargs.get("client_root_certs_file")
key_file = kwargs.get("client_key_file")
cert_file = kwargs.get("client_cert_file")
# assume --enable-ssl flag is set if --root-certs-file is given
if root_certs_file is not None:
enable_ssl = True
kwargs["enable_ssl"] = enable_ssl
# both --key-file and --cert-file should be given
if bool(key_file) != bool(cert_file):
ctx.fail(
"both --client-key-file and --client-cert-file should be given."
)
return ctx.invoke(f, *args, **kwargs)
return update_wrapper_with_click_params(new_func, f)
return decorator
server_argument_parser = ClickArgumentParser(server_options + [full_verbose_option()])
client_argument_parser = ClickArgumentParser(client_options + [full_verbose_option()])
server_and_client_argument_parser = ClickArgumentParser(
server_and_client_options + [full_verbose_option()]
)
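# --- Hedged usage sketch (illustrative; `serve` is a hypothetical command). ---
# Each decorator stacks the click options defined above and normalizes the
# TLS flags (e.g. --root-certs-file implies --require-client-auth) before the
# wrapped callback runs.
@click.command()
@grpc_server_options()
def serve(bind_address, port, key_file, cert_file, root_certs_file, require_client_auth):
    click.echo("would bind gRPC server on %s:%s" % (bind_address, port))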
| 34.794805 | 105 | 0.689609 | 1,790 | 13,396 | 4.83743 | 0.069274 | 0.081072 | 0.106017 | 0.058205 | 0.892366 | 0.857258 | 0.828502 | 0.781499 | 0.751011 | 0.734727 | 0 | 0 | 0.217154 | 13,396 | 384 | 106 | 34.885417 | 0.825689 | 0.073305 | 0 | 0.479751 | 0 | 0.003115 | 0.2572 | 0.035902 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028037 | false | 0.009346 | 0.015576 | 0 | 0.071651 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
967d05cbf3f4da6ee46031647ef77ffc13ff3b67 | 74 | py | Python | src/dataset/__init__.py | pigmon/SqueezeDet_Win | beb88d5a5652d2b3088aa2f670e6680043d13ac3 | [
"BSD-2-Clause"
] | 2 | 2017-05-25T01:26:41.000Z | 2019-08-16T13:38:57.000Z | src/dataset/__init__.py | pigmon/SqueezeDet_Win | beb88d5a5652d2b3088aa2f670e6680043d13ac3 | [
"BSD-2-Clause"
] | null | null | null | src/dataset/__init__.py | pigmon/SqueezeDet_Win | beb88d5a5652d2b3088aa2f670e6680043d13ac3 | [
"BSD-2-Clause"
] | 1 | 2017-05-25T01:26:50.000Z | 2017-05-25T01:26:50.000Z | from dataset.kitti import kitti
from dataset.pascal_voc import pascal_voc
| 24.666667 | 41 | 0.864865 | 12 | 74 | 5.166667 | 0.5 | 0.354839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 74 | 2 | 42 | 37 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
96b95b21313c44901627bb3bae017294e2625190 | 717 | py | Python | proxySTAR_V3/certbot/venv.1509389747.bak/lib/python2.7/site-packages/pylint/test/input/func_disable_linebased.py | mami-project/lurk | 98c293251e9b1e9c9a4b02789486c5ddaf46ba3c | [
"Apache-2.0"
] | 2 | 2017-07-05T09:57:33.000Z | 2017-11-14T23:05:53.000Z | Libraries/Python/pylint/v1.4.4/pylint/test/input/func_disable_linebased.py | davidbrownell/Common_Environment | 4015872aeac8d5da30a6aa7940e1035a6aa6a75d | [
"BSL-1.0"
] | 1 | 2019-01-17T14:26:22.000Z | 2019-01-17T22:56:26.000Z | Libraries/Python/pylint/v1.4.4/pylint/test/input/func_disable_linebased.py | davidbrownell/Common_Environment | 4015872aeac8d5da30a6aa7940e1035a6aa6a75d | [
"BSL-1.0"
] | 1 | 2017-08-31T14:33:03.000Z | 2017-08-31T14:33:03.000Z | # This is a very very very very very very very very very very very very very very very very very very very very very long line.
# pylint: disable=line-too-long, print-statement
"""Make sure enable/disable pragmas work for messages that are applied to lines and not syntax nodes.
A disable pragma for a message that applies to nodes is applied to the whole
block if it comes before the first statement (excluding the docstring). For
line-based messages, this behavior needs to be altered to really only apply to
the enclosed lines.
"""
# pylint: enable=line-too-long
__revision__ = '1'
print('This is a very long line which the linter will warn about, now that line-too-long has been enabled again.')
| 47.8 | 128 | 0.760112 | 123 | 717 | 4.398374 | 0.495935 | 0.295749 | 0.421442 | 0.532348 | 0.155268 | 0.155268 | 0.155268 | 0.155268 | 0.155268 | 0.155268 | 0 | 0.001712 | 0.185495 | 717 | 14 | 129 | 51.214286 | 0.924658 | 0.772664 | 0 | 0 | 0 | 0.5 | 0.757143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
73d8c55f08cb7d2869165c3df287bf23c2c67303 | 97 | py | Python | model/__init__.py | GauravSarkar/BERT-CRF | 649c4f5fce7b887f3db9a9303e938d90c4b87677 | [
"Apache-2.0"
] | null | null | null | model/__init__.py | GauravSarkar/BERT-CRF | 649c4f5fce7b887f3db9a9303e938d90c4b87677 | [
"Apache-2.0"
] | null | null | null | model/__init__.py | GauravSarkar/BERT-CRF | 649c4f5fce7b887f3db9a9303e938d90c4b87677 | [
"Apache-2.0"
] | null | null | null | from .modeling_jointbert import JointBERT
from .modeling_jointdistilbert import JointDistilBERT
| 24.25 | 53 | 0.886598 | 10 | 97 | 8.4 | 0.5 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092784 | 97 | 3 | 54 | 32.333333 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fb7a2bb57fa2e2cf642c9d3d082afb6820613ddb | 5,207 | py | Python | Project Euler/013.py | terror/solutions | bbe5d30b21d6194666c6c09ecb43f777e12925fa | [
"CC0-1.0"
] | 2 | 2021-04-05T14:26:37.000Z | 2021-06-10T04:22:01.000Z | Project Euler/013.py | terror/solutions | bbe5d30b21d6194666c6c09ecb43f777e12925fa | [
"CC0-1.0"
] | null | null | null | Project Euler/013.py | terror/solutions | bbe5d30b21d6194666c6c09ecb43f777e12925fa | [
"CC0-1.0"
] | null | null | null | nums = """
37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690
"""
print("".join(list(str(sum([int(n.rstrip()) for n in nums.split("\n") if n != ""])))[:10]))
| 50.067308 | 91 | 0.970616 | 118 | 5,207 | 42.830508 | 0.966102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.98194 | 0.021702 | 5,207 | 103 | 92 | 50.553398 | 0.010208 | 0 | 0 | 0 | 0 | 0 | 0.980027 | 0.960246 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.009709 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fba2407ed479ddc386811b480a3326ca31ae7f97 | 43 | py | Python | toeicbert/__main__.py | graykode/BERT-TOEIC | da97af28e91e843025c8cfeabddd99ed1c0dbcc8 | [
"MIT"
] | 108 | 2019-04-29T18:27:07.000Z | 2021-12-11T13:19:01.000Z | toeicbert/__main__.py | graykode/BERT-TOEIC | da97af28e91e843025c8cfeabddd99ed1c0dbcc8 | [
"MIT"
] | 5 | 2019-05-09T20:18:33.000Z | 2020-06-15T13:40:12.000Z | toeicbert/__main__.py | graykode/BERT-TOEIC | da97af28e91e843025c8cfeabddd99ed1c0dbcc8 | [
"MIT"
] | 23 | 2019-04-30T01:34:39.000Z | 2021-11-06T19:07:06.000Z | from . import bert_toeic
bert_toeic.main() | 14.333333 | 24 | 0.790698 | 7 | 43 | 4.571429 | 0.714286 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 43 | 3 | 25 | 14.333333 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
83731a18f1190819c33c6a80b62f1ed098cc50be | 45 | py | Python | src/ui/handlers/__init__.py | Rabbithy/Fyks | 8a2e8fac75b445ae8a608dc873a732c6d66a0f6b | [
"MIT"
] | 1 | 2020-06-11T03:39:40.000Z | 2020-06-11T03:39:40.000Z | src/ui/handlers/__init__.py | Rabbithy/Fyks | 8a2e8fac75b445ae8a608dc873a732c6d66a0f6b | [
"MIT"
] | 6 | 2020-10-19T23:08:27.000Z | 2020-11-24T12:03:59.000Z | src/ui/handlers/__init__.py | Rabbithy/Fyks | 8a2e8fac75b445ae8a608dc873a732c6d66a0f6b | [
"MIT"
] | null | null | null | from .mousehandler import CustomMouseHandler
| 22.5 | 44 | 0.888889 | 4 | 45 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83b0a1de629ae27565f8477ee80bb6b31529cc16 | 49,653 | py | Python | preprocessing_image_datasets.py | neu-spiral/DeepSpectralRanking | dbdf320478002e247cb3293d34e9f56e9e1e9a21 | [
"MIT"
] | 1 | 2021-06-27T05:02:14.000Z | 2021-06-27T05:02:14.000Z | preprocessing_image_datasets.py | neu-spiral/DeepSpectralRanking | dbdf320478002e247cb3293d34e9f56e9e1e9a21 | [
"MIT"
] | null | null | null | preprocessing_image_datasets.py | neu-spiral/DeepSpectralRanking | dbdf320478002e247cb3293d34e9f56e9e1e9a21 | [
"MIT"
] | 1 | 2021-03-16T19:12:51.000Z | 2021-03-16T19:12:51.000Z | import pandas as pd
import numpy as np
import pickle
import argparse
from utils import *
from PIL import Image
from os.path import exists
from keras.preprocessing.image import load_img, img_to_array
from keras.layers import Input
from keras.models import Model
from googlenet_functional import *
from scipy.sparse import save_npz
from scipy.misc import imresize  # requires scipy < 1.3 (imresize was removed in 1.3.0)
from itertools import combinations
import codecs
input_shape = (3,224,224)
def create_partitions(rankings_all, comparisons_all, n_fold):
'''
:param n_fold: number of cross validation folds
:param rankings_all: [(i_1,i_2, ...), (i_1,i_2, ...), ...]
    :param comparisons_all: (i, j, +1/-1) comparison triples
    :return: rankings_train, rankings_val, rankings_test, comparisons_train,
             comparisons_val; the train/val outputs hold one split per fold and
             rankings_test is the single held-out fold
'''
d_all = len(rankings_all)
d_fold = int(d_all / (n_fold + 1)) # last fold is the holdout/test set
# partition observations into train, validation, and test
np.random.seed(1)
# stock indices to a matrix of (n_fold, indices)
shuffled_ind = np.reshape(np.random.permutation(d_fold * (n_fold + 1)), ((n_fold + 1), d_fold))
rankings_train = []
rankings_val = []
comparisons_train = []
comparisons_val = []
# create train and validation sets
for test_fold in range(n_fold):
train_ind = shuffled_ind[[fold for fold in range(n_fold) if (fold != test_fold)]]
train_ind = train_ind.flatten()
# get training rankings
rankings_train.append(rankings_all[train_ind])
# get training comparisons
comparisons_train.append(comparisons_all[train_ind])
# get validation rankings
rankings_val.append(rankings_all[shuffled_ind[test_fold]])
# get validation comparisons
comparisons_val.append(comparisons_all[shuffled_ind[test_fold]])
# get test rankings
rankings_test = rankings_all[shuffled_ind[n_fold]]
# dims for train and validation: (n_fold, d_train, number of ranked items at a time (A_l))
return rankings_train, rankings_val, rankings_test, comparisons_train, comparisons_val
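# Worked example (hypothetical sizes): with len(rankings_all) == 600 and
# n_fold == 5, d_fold == 100, so each training split holds 4 * 100 = 400
# rankings, each validation split holds 100, and the remaining 100 shuffled
# observations form the held-out test set.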
def create_partitions_wrt_sample(rankings_all, comparisons_all, n, n_fold):
'''
:param n: number of samples
:param n_fold: number of cross validation folds
:param rankings_all: [(i_1,i_2, ...), (i_1,i_2, ...), ...]
    :param comparisons_all: (i, j, +1/-1) comparison triples
    Partitions rankings by the samples participating in train or test; rankings
    that span two splits are discarded.
    :return: train_samp_folds, rankings_train, rankings_val, rankings_test,
             comparisons_train, comparisons_val; rankings_test is the held-out fold
'''
samp_fold = int(n / (n_fold + 1))
# partition observations into train, validation, and test
np.random.seed(1)
# stock indices to a matrix of (n_fold, indices)
shuffled_samp = np.reshape(np.random.permutation(samp_fold * (n_fold + 1)), ((n_fold + 1), samp_fold))
rankings_train = []
rankings_val = []
comparisons_train = []
comparisons_val = []
train_samp_folds = []
for test_fold in range(n_fold):
train_samp = shuffled_samp[[fold for fold in range(n_fold) if (fold != test_fold)]]
train_samp = train_samp.flatten()
val_samp = shuffled_samp[test_fold]
# get training rankings associated with only training samples
rankings_train_fold = []
comparisons_train_fold = []
for i, rank in enumerate(rankings_all):
if np.all(np.isin(rank, train_samp)):
rankings_train_fold.append(rank)
comparisons_train_fold.append(comparisons_all[i])
rankings_train.append(rankings_train_fold)
comparisons_train.append(comparisons_train_fold)
train_samp.sort()
train_samp_folds.append(train_samp)
# get validation rankings associated with only validation samples
rankings_val_fold = []
comparisons_val_fold = []
for i, rank in enumerate(rankings_all):
if np.all(np.isin(rank, val_samp)):
rankings_val_fold.append(rank)
comparisons_val_fold.append(comparisons_all[i])
rankings_val.append(rankings_val_fold)
comparisons_val.append(comparisons_val_fold)
# get rankings associated with only test samples
test_samp = shuffled_samp[n_fold]
rankings_test = []
for rank in rankings_all:
if np.all(np.isin(rank, test_samp)):
rankings_test.append(rank)
# dims for train and validation: (n_fold, d_train, number of ranked items at a time (A_l))
return train_samp_folds, rankings_train, rankings_val, rankings_test, comparisons_train, comparisons_val
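# Worked example (hypothetical sizes): with n == 60 samples and n_fold == 5,
# samp_fold == 10; a ranking survives into a split only when every item in it
# belongs to that split's sample set, so rankings straddling splits are dropped.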
def partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, imgs_lst,
ranking_length_for_siamese=2):
n = X.shape[0]
rankings_all = np.array(rankings_all)
comparisons_all = np.array(comparisons_all)
if partition == 'per_samp':
train_samp_folds, rankings_train, rankings_val, rankings_test, comparisons_train, comparisons_val = \
create_partitions_wrt_sample(rankings_all, comparisons_all, n, n_fold)
else:
rankings_train, rankings_val, rankings_test, comparisons_train, comparisons_val = \
create_partitions(rankings_all, comparisons_all, n_fold)
train_samp_folds = [list(range(n)) for _ in range(n_fold)]
    # ------------------------------ save per-fold data ------------------------------
for test_fold in range(n_fold):
print(test_fold, 'folds have been saved')
current_rankings = np.array(rankings_train[test_fold])
np.save('../data/' + dir + 'data/' + str(test_fold) + '_train_samp', train_samp_folds[test_fold])
np.save('../data/' + dir + 'data/' + str(test_fold) + '_features', X)
np.save('../data/' + dir + 'data/' + str(test_fold) + '_imgs_lst', imgs_lst)
np.save('../data/' + dir + 'data/' + str(test_fold) + '_train', current_rankings)
np.save('../data/' + dir + 'data/' + str(test_fold) + '_val', np.array(rankings_val[test_fold]))
### save comp_imgs_lst_pair and comp_labels for siamese training
current_comparisons = np.array(comparisons_train[test_fold])
comp_imgs_lst_pair_left = imgs_lst[current_comparisons[:, 0]]
comp_imgs_lst_pair_right = imgs_lst[current_comparisons[:, 1]]
np.save('../data/' + dir + 'data/' + str(test_fold) + '_comp_train_imgs_lst_left', comp_imgs_lst_pair_left)
np.save('../data/' + dir + 'data/' + str(test_fold) + '_comp_train_imgs_lst_right', comp_imgs_lst_pair_right)
np.save('../data/' + dir + 'data/' + str(test_fold) + '_comp_train_labels', current_comparisons[:, 2])
### save rank_imgs_lst and true_order_labels for siamese training
rank_imgs_lst = [] # ranking length x (number of rankings, number of features)
true_order_labels = np.tile(list(range(ranking_length_for_siamese)),
(len(current_rankings), 1)) # (number of rankings, ranking length). images are appended wrt ranking order
for ranking_pos in range(ranking_length_for_siamese):
rank_imgs_lst.append(imgs_lst[current_rankings[:, ranking_pos]])
np.save('../data/' + dir + 'data/' + str(test_fold) + '_rank_imgs_lst', rank_imgs_lst)
np.save('../data/' + dir + 'data/' + str(test_fold) + '_true_order_labels', true_order_labels)
# Compute initial parameters and save
mat_Pij = est_Pij(n, current_rankings)
save_npz('../data/' + dir + 'data/' + str(test_fold) + '_mat_Pij', mat_Pij)
(beta_init, b_init, time_beta_b_init), (exp_beta_init, time_exp_beta_init), (u_init, time_u_init) = \
init_params(X, current_rankings, mat_Pij)
all_init_params = [(beta_init, b_init, time_beta_b_init), (exp_beta_init, time_exp_beta_init),
(u_init, time_u_init)]
with open('../data/' + dir + 'data/' + str(test_fold) + '_init_params.pickle', "wb") as pickle_out:
pickle.dump(all_init_params, pickle_out)
pickle_out.close()
# save test set
np.save('../data/' + dir + 'data/rankings_test', np.array(rankings_test))
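# For every fold, partition_and_save writes {fold}_train_samp, {fold}_features,
# {fold}_imgs_lst, {fold}_train, {fold}_val, the siamese comparison arrays
# ({fold}_comp_train_imgs_lst_left/right and {fold}_comp_train_labels), the
# ranking arrays ({fold}_rank_imgs_lst and {fold}_true_order_labels),
# {fold}_mat_Pij, and {fold}_init_params.pickle; rankings_test is written once.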
def save_gifgif_happy_data(n_fold, dir='gifgif_happy_', partition='per_obs', n_img=50):
'''
n: number of items
p: feature dimension
X: n*p, feature matrix
:param n_fold: number of cross validation folds
:param dir: current directory to read features and labels
'''
# First pass over the data to transform GIFGIF HAPPINESS IDs to consecutive integers.
image_ids = set([])
with open('../data/' + dir + 'data/' + 'gifgif-dataset-20150121-v1.csv') as f:
next(f) # First line is header.
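        # Each data line reads "emotion,left_id,right_id,choice" (inferred from
        # the split below), e.g. a hypothetical row: "happiness,Ab3X,kQ9z,left".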
for line in f:
emotion, left, right, choice = line.strip().split(",")
if len(left) > 0 and len(right) > 0 and (emotion == 'happiness' or emotion == 'sadness') and \
exists('../data/' + dir + 'data/images/' + left + '.gif') and \
exists('../data/' + dir + 'data/images/' + right + '.gif'):
image_ids.add(left)
image_ids.add(right)
# take n_gif images
if len(image_ids) >= n_img:
image_ids = list(image_ids)[:n_img]
break
# create googlenet feature extractor model
input1 = Input(shape=input_shape)
input2 = Input(shape=input_shape)
feature1, _ = create_googlenet(input1, input2)
base_net = Model(input1, feature1)
feature_model = Model(inputs=base_net.input, outputs=base_net.get_layer('feature_extractor').get_output_at(0))
feature_model.load_weights(GOOGLENET_INIT_WEIGHTS_PATH, by_name=True)
feature_model.compile(loss='mean_squared_error', optimizer='sgd')
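    # The truncated GoogLeNet maps each (3, 224, 224) image to a 1024-dim
    # vector (output of the 'feature_extractor' layer), filling X_imagenet below.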
# load images and googlenet features
X_imagenet = np.zeros((n_img, 1024), dtype=float)
# Extract image matrix, (n, 3, 224, 224)
int_to_idx = dict(enumerate(image_ids))
idx_to_int = dict((v, k) for k, v in int_to_idx.items())
imgs_lst = np.zeros((0, 3, 224, 224))
for image_id, i in idx_to_int.items():
# load
image_mtx = img_to_array(load_img('../data/' + dir + 'data/images/' + image_id + '.gif')).astype(np.uint8)
# resize
image_mtx = np.reshape(imresize(image_mtx, input_shape[1:]), input_shape)
# standardize
image_mtx = (image_mtx - np.mean(image_mtx)) / np.std(image_mtx)
image_mtx = image_mtx[np.newaxis, :, :, :]
# concatenate
imgs_lst = np.concatenate((imgs_lst, image_mtx), axis=0)
# take googlenet features
X_imagenet[i, :] = np.squeeze(feature_model.predict(image_mtx))
# take rankings (ordered lists) and comparisons (+1/-1) of images in image_ids
rankings_all = []
comparisons_all = []
with open('../data/' + dir + 'data/' + 'gifgif-dataset-20150121-v1.csv') as f:
next(f) # First line is header.
for line in f:
emotion, left, right, choice = line.strip().split(",")
if left in image_ids and right in image_ids:
if emotion == 'happiness': # left is happier
# Map ids to integers.
left = idx_to_int[left]
right = idx_to_int[right]
if choice == "left":
# Left image won the happiness comparison.
rankings_all.append((left, right))
# Append to comparisons
comparisons_all.append((left, right, +1))
elif choice == "right":
# Right image won the happiness comparison.
rankings_all.append((right, left))
# Append to comparisons
comparisons_all.append((left, right, -1))
elif emotion == 'sadness': # right is happier
# Map ids to integers.
left = idx_to_int[left]
right = idx_to_int[right]
if choice == "right":
# Left image won the sadness comparison.
rankings_all.append((left, right))
# Append to comparisons
comparisons_all.append((left, right, +1))
elif choice == "left":
# Right image won the sadness comparison.
rankings_all.append((right, left))
# Append to comparisons
comparisons_all.append((left, right, -1))
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X_imagenet, imgs_lst)
def save_fac_data(n_fold, dir='fac_', partition='per_obs', n_img=50):
'''
n: number of items
p: feature dimension
X: n*p, feature matrix
:param n_fold: number of cross validation folds
:param dir: current directory to read features and labels
'''
comp_label_file = "/pairwise_comparison.pkl"
with open('../data/' + dir + 'data/' + comp_label_file, 'rb') as f:
comp_label_matrix = pickle.load(f)
image_ids = set([])
# get all unique images in category
for row in comp_label_matrix:
# category, f1, f2, workerID, passDup, imgId, ans
if row['category'] == 0:
left = row['f1'] + '/' + row['imgId'] + '.jpg'
right = row['f2'] + '/' + row['imgId'] + '.jpg'
if exists('../data/' + dir + 'data/' + left) and exists('../data/' + dir + 'data/' + right):
image_ids.add(left)
image_ids.add(right)
# take n_img images
if len(image_ids) >= n_img:
image_ids = list(image_ids)[:n_img]
break
# create googlenet feature extractor model
input1 = Input(shape=input_shape)
input2 = Input(shape=input_shape)
feature1, _ = create_googlenet(input1, input2)
base_net = Model(input1, feature1)
feature_model = Model(inputs=base_net.input, outputs=base_net.get_layer('feature_extractor').get_output_at(0))
feature_model.load_weights(GOOGLENET_INIT_WEIGHTS_PATH, by_name=True)
feature_model.compile(loss='mean_squared_error', optimizer='sgd')
# load images and googlenet features
X_imagenet = np.zeros((n_img, 1024), dtype=float)
# Extract image matrix, (n, 3, 224, 224)
int_to_idx = dict(enumerate(image_ids))
idx_to_int = dict((v, k) for k, v in int_to_idx.items())
imgs_lst = np.zeros((0, 3, 224, 224))
for image_id, i in idx_to_int.items():
# load
image_mtx = img_to_array(load_img('../data/' + dir + 'data/' + image_id)).astype(np.uint8)
# resize
image_mtx = np.reshape(imresize(image_mtx, input_shape[1:]), input_shape)
# standardize
image_mtx = (image_mtx - np.mean(image_mtx)) / np.std(image_mtx)
image_mtx = image_mtx[np.newaxis, :, :, :]
# concatenate
imgs_lst = np.concatenate((imgs_lst, image_mtx), axis=0)
# take googlenet features
X_imagenet[i, :] = np.squeeze(feature_model.predict(image_mtx))
# take rankings (ordered lists) and comparisons (+1/-1) of images in image_ids
rankings_all = []
comparisons_all = []
for row in comp_label_matrix:
# category, f1, f2, workerID, passDup, imgId, ans
if row['category'] == 0:
left = row['f1'] + '/' + row['imgId'] + '.jpg'
right = row['f2'] + '/' + row['imgId'] + '.jpg'
choice = row['ans']
if left in image_ids and right in image_ids:
# Map ids to integers.
left = idx_to_int[left]
right = idx_to_int[right]
if choice == "left":
# Left image won the comparison.
rankings_all.append((left, right))
# Append to comparisons
comparisons_all.append((left, right, +1))
elif choice == "right":
# Right image won the comparison.
rankings_all.append((right, left))
# Append to comparisons
comparisons_all.append((left, right, -1))
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X_imagenet, imgs_lst)
def save_rop_data(n_fold, dir='rop_', partition='per_obs', manual_feature=False):
'''
n: number of items
p: feature dimension
X: n*p, feature matrix
:param n_fold: number of cross validation folds
:param dir: current directory to read features and labels
'''
n_img = 100
# load all comparisons
with open('../data/' + dir + 'data/' + 'Partitions.p', 'rb') as f:
u = pickle._Unpickler(f)
u.encoding = 'latin1'
label_cmp = u.load()['cmpData'] # (expert,pair_index,label)
df = pd.read_excel('../data/' + dir + 'data/' + '100Features.xlsx')
image_ids = df.as_matrix()[:n_img, 0]
    image_ids = np.array([name[:-4] for name in image_ids])  # drop the original extension; '.png' is appended when loading
X = df.as_matrix()[:n_img, 1:144].astype('float')
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# create googlenet feature extractor model
input1 = Input(shape=input_shape)
input2 = Input(shape=input_shape)
feature1, _ = create_googlenet(input1, input2)
base_net = Model(input1, feature1)
feature_model = Model(inputs=base_net.input, outputs=base_net.get_layer('feature_extractor').get_output_at(0))
feature_model.load_weights(GOOGLENET_INIT_WEIGHTS_PATH, by_name=True)
feature_model.compile(loss='mean_squared_error', optimizer='sgd')
# load images and googlenet features
X_imagenet = np.zeros((n_img, 1024), dtype=float)
# Extract image matrix, (n, 3, 224, 224)
int_to_idx = dict(enumerate(image_ids))
idx_to_int = dict((v, k) for k, v in int_to_idx.items())
imgs_lst = np.zeros((0, 3, 224, 224))
for image_id, i in idx_to_int.items():
# load
image_mtx = img_to_array(load_img('../data/' + dir + 'data/images/' + image_id + '.png')).astype(np.uint8)
# resize
image_mtx = np.reshape(imresize(image_mtx, input_shape[1:]), input_shape)
# standardize
image_mtx = (image_mtx - np.mean(image_mtx)) / np.std(image_mtx)
image_mtx = image_mtx[np.newaxis, :, :, :]
# concatenate
imgs_lst = np.concatenate((imgs_lst, image_mtx), axis=0)
# take googlenet features
X_imagenet[i, :] = np.squeeze(feature_model.predict(image_mtx))
# take rankings (ordered lists) and comparisons (+1/-1) of images in image_ids
M_per_expert = len(label_cmp[0]) # Number of comparisons per expert
rankings_all = []
comparisons_all = []
for expert in range(5):
for pair_ind in range(M_per_expert):
item1 = np.where(image_ids == label_cmp[expert][pair_ind][0])[0]
item2 = np.where(image_ids == label_cmp[expert][pair_ind][1])[0]
            if item1.size > 0 and item2.size > 0:  # both images are among the kept image_ids
item1 = np.asscalar(item1)
item2 = np.asscalar(item2)
if label_cmp[expert][pair_ind][2] == 1:
rankings_all.append((item1, item2))
# Append to comparisons
comparisons_all.append((item1, item2, +1))
else:
rankings_all.append((item2, item1))
# Append to comparisons
comparisons_all.append((item1, item2, -1))
if manual_feature:
partition_and_save(n_fold, dir + "manual_", partition, rankings_all, comparisons_all, X, X)
else:
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X_imagenet, imgs_lst)
def save_candy_data(n_fold, dir='candy_', partition='per_obs', flip_noise_prob=0.0,
ranking_length_for_siamese=2):
'''
n: number of items
p: feature dimension
X: n*p, feature matrix
:param n_fold: number of cross validation folds
:param dir: current directory to read features and labels
'''
X = [] # 85x11
win_percent = []
# open file in read mode
csv_reader = codecs.open('../data/' + dir + 'data/candy-data.csv', 'r', 'utf8')
# Iterate over each row in the csv using reader object
for row in csv_reader:
row = row.rstrip().split(",")
# row variable is a list that represents a row in csv, first column is row names
X.append(row[1:-1])
win_percent.append(row[-1])
# first row is column names
X = np.array(X[1:]).astype("float")
win_percent = np.array(win_percent[1:])
full_ranking_indices = np.flip(np.argsort(win_percent))
X = X[full_ranking_indices]
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
n = X.shape[0]
# generate comparisons and rankings w.r.t. win percent
all_multiway_rankings = list(combinations(range(n), ranking_length_for_siamese))
rankings_all = []
comparisons_all = []
for ranking in all_multiway_rankings:
#temp_ranking = np.array(temp_ranking)
#ranking = list(temp_ranking[np.flip(np.argsort(win_percent[temp_ranking]))])
# flip ranking and comparison to add noise
eps = np.random.uniform()
if eps > flip_noise_prob:
# correct one
item1 = ranking[0]
item2 = ranking[1]
rankings_all.append(ranking)
else:
item1 = ranking[-1]
item2 = ranking[0]
if ranking_length_for_siamese > 2:
rankings_all.append((item1, item2) + tuple(ranking[1:-1]))
else:
rankings_all.append((item1, item2))
# choose which way to compare
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
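    # Example (hypothetical draw): with flip_noise_prob == 0.1, roughly 10% of
    # rankings are corrupted by recording the worst item first and the best item
    # second; every comparison is then stored in a random orientation, either
    # (winner, loser, +1) or (loser, winner, -1).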
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X, ranking_length_for_siamese)
def save_living_cost_data(n_fold, dir='living_cost_', partition='per_obs', flip_noise_prob=0.0,
ranking_length_for_siamese=2, n_countries=50):
'''
n: number of items
p: feature dimension
X: n*p, feature matrix
:param n_fold: number of cross validation folds
:param dir: current directory to read features and labels
'''
X = [] # 216x6
# open file in read mode
csv_reader = codecs.open('../data/' + dir + 'data/movehubcostofliving.csv', 'r', 'utf8')
# Iterate over each row in the csv using reader object
for row in csv_reader:
row = row.rstrip().split(",")
# row variable is a list that represents a row in csv, first column is row names
X.append(row[1:])
# first row is column names
X = np.array(X[1:]).astype("float")
X = X[:n_countries]
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# generate comparisons and rankings, ranking order w.r.t. indices
all_multiway_rankings = list(combinations(range(n_countries), ranking_length_for_siamese))
rankings_all = []
comparisons_all = []
for ranking in all_multiway_rankings:
# flip ranking and comparison to add noise
eps = np.random.uniform()
if eps > flip_noise_prob:
# correct one
item1 = ranking[0]
item2 = ranking[1]
rankings_all.append(ranking)
else:
item1 = ranking[-1]
item2 = ranking[0]
if ranking_length_for_siamese > 2:
rankings_all.append((item1, item2) + tuple(ranking[1:-1]))
else:
rankings_all.append((item1, item2))
# choose which way to compare
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X, ranking_length_for_siamese)
def save_living_quality_data(n_fold, dir='living_quality_', partition='per_obs', flip_noise_prob=0.0,
ranking_length_for_siamese=2, n_countries=50):
'''
n: number of items
p: feature dimension
X: n*p, feature matrix
:param n_fold: number of cross validation folds
:param dir: current directory to read features and labels
'''
X = [] # 216x6
# open file in read mode
csv_reader = codecs.open('../data/' + dir + 'data/movehubqualityoflife.csv', 'r', 'utf8')
# Iterate over each row in the csv using reader object
for row in csv_reader:
row = row.rstrip().split(",")
# row variable is a list that represents a row in csv, first column is row names
X.append(row[1:])
# first row is column names
X = np.array(X[1:]).astype("float")
X = X[:n_countries]
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# generate comparisons and rankings, ranking order w.r.t. indices
all_multiway_rankings = list(combinations(range(n_countries), ranking_length_for_siamese))
rankings_all = []
comparisons_all = []
for ranking in all_multiway_rankings:
# flip ranking and comparison to add noise
eps = np.random.uniform()
if eps > flip_noise_prob:
# correct one
item1 = ranking[0]
item2 = ranking[1]
rankings_all.append(ranking)
else:
item1 = ranking[-1]
item2 = ranking[0]
if ranking_length_for_siamese > 2:
rankings_all.append((item1, item2) + tuple(ranking[1:-1]))
else:
rankings_all.append((item1, item2))
# choose which way to compare
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X, ranking_length_for_siamese)
def save_imdb_data(n_fold, dir='imdb_', partition='per_obs', n_movies=50, flip_noise_prob=0.0):
X = [] # n_moviesx36
ratings = []
row_ind = 0
# open file in read mode
csv_reader = codecs.open('../data/' + dir + 'data/imdb.csv', 'r', 'utf8')
# Iterate over each row in the csv using reader object
for row in csv_reader:
# first row is column names
if row_ind > 0 and len(X) < n_movies:
row = row.rstrip().split(",")
# check for invalid rows
try:
cur_rating = float(row[5])
feature1 = float(row[7])
feature2 = float(row[8])
feature3 = [float(elm) for elm in row[10:]]
res = True
            except (ValueError, IndexError):
                # malformed or short row: skip it
                res = False
if res:
ratings.append(cur_rating)
features = []
features.append(feature1)
features.append(feature2)
features.extend(feature3)
X.append(features)
row_ind += 1
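    # Assumed CSV layout (from the indexing above): column 5 holds the IMDb
    # rating and columns 7, 8, and 10 onwards hold the numeric features kept in X.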
# first row is column names
X = np.array(X).astype("float")
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# generate comparisons w.r.t. ratings in decreasing order
unique_ratings = np.flip(np.unique(ratings))
movie_indices_grouped = []
rankings_all = []
comparisons_all = []
for rating in unique_ratings:
movie_indices_grouped.append([i for i in range(len(ratings)) if ratings[i] == rating])
for i, movies1 in enumerate(movie_indices_grouped[:-1]):
for movies2 in movie_indices_grouped[i + 1:]:
for temp_item1 in movies1:
for temp_item2 in movies2:
# flip ranking and comparison to add noise
eps = np.random.uniform()
if eps > flip_noise_prob:
# correct one
item1 = temp_item1
item2 = temp_item2
else:
item1 = temp_item2
item2 = temp_item1
rankings_all.append((item1, item2))
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X)
def save_imdb_4way_data(n_fold, dir='imdb_multiway_', partition='per_obs', n_movies=50, flip_noise_prob=0.0):
X = [] # n_moviesx36
ratings = []
row_ind = 0
# open file in read mode
csv_reader = codecs.open('../data/' + dir + 'data/imdb.csv', 'r', 'utf8')
# Iterate over each row in the csv using reader object
for row in csv_reader:
# first row is column names
if row_ind > 0 and len(X) < n_movies:
row = row.rstrip().split(",")
# check for invalid rows
try:
cur_rating = float(row[5])
feature1 = float(row[7])
feature2 = float(row[8])
feature3 = [float(elm) for elm in row[10:]]
res = True
            except (ValueError, IndexError):
                # malformed or short row: skip it
                res = False
if res:
ratings.append(cur_rating)
features = []
features.append(feature1)
features.append(feature2)
features.extend(feature3)
X.append(features)
row_ind += 1
# first row is column names
X = np.array(X).astype("float")
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# generate comparisons w.r.t. ratings in decreasing order
unique_ratings = np.flip(np.unique(ratings))
movie_indices_grouped = []
rankings_all = []
comparisons_all = []
for rating in unique_ratings:
movie_indices_grouped.append([i for i in range(len(ratings)) if ratings[i] == rating])
for first in np.arange(0, len(movie_indices_grouped) - 3):
for second in np.arange(first + 1, len(movie_indices_grouped) - 2):
for third in np.arange(second + 1, len(movie_indices_grouped) - 1):
for forth in np.arange(third + 1, len(movie_indices_grouped)):
movies1 = movie_indices_grouped[first]
movies2 = movie_indices_grouped[second]
movies3 = movie_indices_grouped[third]
movies4 = movie_indices_grouped[forth]
print([first, second, third, forth])
for temp_item1 in movies1:
for temp_item2 in movies2:
for temp_item3 in movies3:
for temp_item4 in movies4:
# flip ranking and comparison to add noise
eps = np.random.uniform()
ranking = [temp_item1, temp_item2, temp_item3, temp_item4]
print(ranking)
if eps > flip_noise_prob:
# correct one
item1 = ranking[0]
item2 = ranking[1]
rankings_all.append(ranking)
else:
item1 = ranking[-1]
item2 = ranking[0]
#rankings_all.append((item1, item2) + tuple(np.random.permutation(ranking[1:-1])))
rankings_all.append((item1, item2) + tuple(ranking[1:-1]))
# choose which way to compare
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X, 4)
def save_iclr_3way_data(n_fold, dir='iclr_multiway_', partition='per_obs', n_docs=100, flip_noise_prob=0.0):
"""
Crawled data is here: https://github.com/shaohua0116/ICLR2020-OpenReviewData
Features are extracted by pre-trained BERT model
from transformers import BertTokenizer, BertModel
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', return_dict=True)
ratings = []
embeddings = []
for i, m in enumerate(meta_list):
if len(m.rating) > 0:
rating = np.mean(m.rating) / (np.std(m.rating) + 1e-4)
inputs = tokenizer(m.abstract, return_tensors="pt")
outputs = model(**inputs) # 768 dimensions per document
embedding = outputs.last_hidden_state[-1, -1].data.numpy() # take the last element of the sequence and batch
print("Paper count", i)
print("Average rating", rating)
ratings.append(rating)
embeddings.append(embedding)
"""
X = np.load("../data/iclr_3way_noisy_data/iclr_2020_embeddings.npy").astype("float")[:n_docs] # n_abstracts x 768
ratings = np.load("../data/iclr_3way_noisy_data/iclr_2020_ratings.npy")[:n_docs] # n_abstracts x 1
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# generate comparisons w.r.t. ratings in decreasing order
unique_ratings = np.flip(np.unique(ratings))
movie_indices_grouped = []
rankings_all = []
comparisons_all = []
for rating in unique_ratings:
movie_indices_grouped.append([i for i in range(len(ratings)) if ratings[i] == rating])
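    # The nested loops below enumerate every ordered triple of distinct rating
    # groups (highest rating first), then every cross-group item triple, so each
    # generated ranking respects the rating order before noise is applied.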
for first in np.arange(0, len(movie_indices_grouped) - 2):
for second in np.arange(first + 1, len(movie_indices_grouped) - 1):
for third in np.arange(second + 1, len(movie_indices_grouped)):
movies1 = movie_indices_grouped[first]
movies2 = movie_indices_grouped[second]
movies3 = movie_indices_grouped[third]
print([first, second, third])
for temp_item1 in movies1:
for temp_item2 in movies2:
for temp_item3 in movies3:
# flip ranking and comparison to add noise
eps = np.random.uniform()
ranking = [temp_item1, temp_item2, temp_item3]
print(ranking)
if eps > flip_noise_prob:
# correct one
item1 = ranking[0]
item2 = ranking[1]
rankings_all.append(ranking)
else:
item1 = ranking[-1]
item2 = ranking[0]
#rankings_all.append((item1, item2) + tuple(np.random.permutation(ranking[1:-1])))
rankings_all.append((item1, item2) + tuple(ranking[1:-1]))
# choose which way to compare
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X, 3)
def save_iclr_4way_data(n_fold, dir='iclr_multiway_', partition='per_obs', n_docs=100, flip_noise_prob=0.0):
"""
Crawled data is here: https://github.com/shaohua0116/ICLR2020-OpenReviewData
Features are extracted by pre-trained BERT model
"""
X = np.load("../data/iclr_3way_noisy_data/iclr_2020_embeddings.npy").astype("float")[:n_docs] # n_abstracts x 768
ratings = np.load("../data/iclr_3way_noisy_data/iclr_2020_ratings.npy")[:n_docs] # n_abstracts x 1
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# generate comparisons w.r.t. ratings in decreasing order
unique_ratings = np.flip(np.unique(ratings))
movie_indices_grouped = []
rankings_all = []
comparisons_all = []
for rating in unique_ratings:
movie_indices_grouped.append([i for i in range(len(ratings)) if ratings[i] == rating])
for first in np.arange(0, len(movie_indices_grouped) - 3):
for second in np.arange(first + 1, len(movie_indices_grouped) - 2):
for third in np.arange(second + 1, len(movie_indices_grouped) - 1):
for forth in np.arange(third + 1, len(movie_indices_grouped)):
movies1 = movie_indices_grouped[first]
movies2 = movie_indices_grouped[second]
movies3 = movie_indices_grouped[third]
movies4 = movie_indices_grouped[forth]
print([first, second, third, forth])
for temp_item1 in movies1:
for temp_item2 in movies2:
for temp_item3 in movies3:
for temp_item4 in movies4:
# flip ranking and comparison to add noise
eps = np.random.uniform()
ranking = [temp_item1, temp_item2, temp_item3, temp_item4]
print(ranking)
if eps > flip_noise_prob:
# correct one
item1 = ranking[0]
item2 = ranking[1]
rankings_all.append(ranking)
else:
item1 = ranking[-1]
item2 = ranking[0]
# rankings_all.append((item1, item2) + tuple(np.random.permutation(ranking[1:-1])))
rankings_all.append((item1, item2) + tuple(ranking[1:-1]))
# choose which way to compare
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X, 4)
def save_iclr_5way_data(n_fold, dir='iclr_multiway_', partition='per_obs', n_docs=100, flip_noise_prob=0.0):
"""
Crawled data is here: https://github.com/shaohua0116/ICLR2020-OpenReviewData
Features are extracted by pre-trained BERT model
"""
X = np.load("../data/iclr_3way_noisy_data/iclr_2020_embeddings.npy").astype("float")[:n_docs] # n_abstracts x 768
ratings = np.load("../data/iclr_3way_noisy_data/iclr_2020_ratings.npy")[:n_docs] # n_abstracts x 1
# standardize
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0) + rtol
X = (X - X_mean) / X_std
# generate comparisons w.r.t. ratings in decreasing order
unique_ratings = np.flip(np.unique(ratings))
movie_indices_grouped = []
rankings_all = []
comparisons_all = []
for rating in unique_ratings:
movie_indices_grouped.append([i for i in range(len(ratings)) if ratings[i] == rating])
for first in np.arange(0, len(movie_indices_grouped) - 4):
for second in np.arange(first + 1, len(movie_indices_grouped) - 3):
for third in np.arange(second + 1, len(movie_indices_grouped) - 2):
for forth in np.arange(third + 1, len(movie_indices_grouped) - 1):
for fifth in np.arange(forth + 1, len(movie_indices_grouped)):
movies1 = movie_indices_grouped[first]
movies2 = movie_indices_grouped[second]
movies3 = movie_indices_grouped[third]
movies4 = movie_indices_grouped[forth]
movies5 = movie_indices_grouped[fifth]
print([first, second, third, forth, fifth])
for temp_item1 in movies1:
for temp_item2 in movies2:
for temp_item3 in movies3:
for temp_item4 in movies4:
for temp_item5 in movies5:
# flip ranking and comparison to add noise
eps = np.random.uniform()
ranking = [temp_item1, temp_item2, temp_item3, temp_item4, temp_item5]
print(ranking)
if eps > flip_noise_prob:
# correct one
item1 = ranking[0]
item2 = ranking[1]
rankings_all.append(ranking)
else:
item1 = ranking[-1]
item2 = ranking[0]
rankings_all.append((item1, item2) + tuple(ranking[1:-1]))
# choose which way to compare
if np.random.uniform() > 0.5:
comparisons_all.append((item1, item2, +1))
else:
comparisons_all.append((item2, item1, -1))
# features and images are not separate for numerical datasets
partition_and_save(n_fold, dir, partition, rankings_all, comparisons_all, X, X, 5)
if __name__ == "__main__":
n_fold = 5
parser = argparse.ArgumentParser(description='prep', formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('dir', type=str)
args = parser.parse_args()
dir = args.dir
flip_noise_prob = 0.1
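    # Example invocation (hypothetical): python preprocessing_image_datasets.py living_cost_noisy_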
if dir == 'rop_':
save_rop_data(n_fold, dir='rop_', manual_feature=False)
    elif dir == 'rop_manual_':
save_rop_data(n_fold, dir='rop_', manual_feature=True)
elif dir == 'fac_':
save_fac_data(n_fold, dir=dir)
elif dir == 'gifgif_happy_':
save_gifgif_happy_data(n_fold, dir=dir)
elif dir == 'living_cost_noisy_':
save_living_cost_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob)
elif dir == 'living_cost_3way_noisy_':
save_living_cost_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=3)
elif dir == 'living_cost_4way_noisy_':
save_living_cost_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=4)
elif dir == 'living_cost_5way_noisy_':
save_living_cost_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=5)
elif dir == 'living_cost_6way_noisy_':
save_living_cost_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=6)
elif dir == 'living_quality_noisy_':
save_living_quality_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob)
elif dir == 'living_quality_3way_noisy_':
save_living_quality_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=3)
elif dir == 'living_quality_4way_noisy_':
save_living_quality_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=4)
elif dir == 'living_quality_5way_noisy_':
save_living_quality_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=5)
elif dir == 'living_quality_6way_noisy_':
save_living_quality_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob, ranking_length_for_siamese=6)
elif dir == 'imdb_noisy_':
save_imdb_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob)
elif dir == 'imdb_4way_noisy_':
save_imdb_4way_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob)
elif dir == 'iclr_3way_noisy_':
save_iclr_3way_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob)
elif dir == 'iclr_4way_noisy_':
save_iclr_4way_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob)
elif dir == 'iclr_5way_noisy_':
save_iclr_5way_data(n_fold, dir=dir, flip_noise_prob=flip_noise_prob)
elif dir == 'rop_par_':
save_rop_data(n_fold, dir='rop_par_', partition='per_samp', manual_feature=False)
elif dir == 'rop_par_manual_':
save_rop_data(n_fold, dir='rop_par_', partition='per_samp', manual_feature=True)
elif dir == 'gifgif_happy_par_':
save_gifgif_happy_data(n_fold, dir=dir, partition='per_samp')
elif dir == 'fac_par_':
save_fac_data(n_fold, dir=dir, partition='per_samp')
elif dir == 'living_cost_noisy_par_':
save_living_cost_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob)
elif dir == 'living_cost_3way_noisy_par_':
save_living_cost_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=3)
elif dir == 'living_cost_4way_noisy_par_':
save_living_cost_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=4)
elif dir == 'living_cost_5way_noisy_par_':
save_living_cost_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=5)
elif dir == 'living_cost_6way_noisy_par_':
save_living_cost_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=6)
elif dir == 'living_quality_noisy_par_':
save_living_quality_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob)
elif dir == 'living_quality_3way_noisy_par_':
save_living_quality_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=3)
elif dir == 'living_quality_4way_noisy_par_':
save_living_quality_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=4)
elif dir == 'living_quality_5way_noisy_par_':
save_living_quality_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=5)
elif dir == 'living_quality_6way_noisy_par_':
save_living_quality_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob,
ranking_length_for_siamese=6)
elif dir == 'imdb_noisy_par_':
save_imdb_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob)
elif dir == 'imdb_4way_noisy_par_':
save_imdb_4way_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob)
elif dir == 'iclr_3way_noisy_par_':
save_iclr_3way_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob)
elif dir == 'iclr_4way_noisy_par_':
save_iclr_4way_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob)
elif dir == 'iclr_5way_noisy_par_':
save_iclr_5way_data(n_fold, dir=dir, partition='per_samp', flip_noise_prob=flip_noise_prob)
| 51.884013 | 124 | 0.585967 | 6,236 | 49,653 | 4.399134 | 0.068954 | 0.018044 | 0.036489 | 0.021434 | 0.836511 | 0.810958 | 0.79645 | 0.784602 | 0.768746 | 0.755331 | 0 | 0.021477 | 0.310777 | 49,653 | 956 | 125 | 51.938285 | 0.780141 | 0.168469 | 0 | 0.642553 | 0 | 0 | 0.066143 | 0.025002 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019858 | false | 0 | 0.021277 | 0 | 0.043972 | 0.012766 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83e72d18dbaf89deb20c7a6afca1160f6e1ba4f4 | 637 | py | Python | huaweicloud-sdk-hss/huaweicloudsdkhss/v1/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 64 | 2020-06-12T07:05:07.000Z | 2022-03-30T03:32:50.000Z | huaweicloud-sdk-hss/huaweicloudsdkhss/v1/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 11 | 2020-07-06T07:56:54.000Z | 2022-01-11T11:14:40.000Z | huaweicloud-sdk-hss/huaweicloudsdkhss/v1/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 24 | 2020-06-08T11:42:13.000Z | 2022-03-04T06:44:08.000Z | # coding: utf-8
from __future__ import absolute_import
# import clients into sdk package
from huaweicloudsdkhss.v1.hss_client import HssClient
from huaweicloudsdkhss.v1.hss_async_client import HssAsyncClient
# import models into sdk package
from huaweicloudsdkhss.v1.model.event import Event
from huaweicloudsdkhss.v1.model.host import Host
from huaweicloudsdkhss.v1.model.list_events_request import ListEventsRequest
from huaweicloudsdkhss.v1.model.list_events_response import ListEventsResponse
from huaweicloudsdkhss.v1.model.list_hosts_request import ListHostsRequest
from huaweicloudsdkhss.v1.model.list_hosts_response import ListHostsResponse
| 39.8125 | 78 | 0.877551 | 80 | 637 | 6.7875 | 0.375 | 0.309392 | 0.338858 | 0.309392 | 0.427256 | 0.427256 | 0 | 0 | 0 | 0 | 0 | 0.015358 | 0.080063 | 637 | 15 | 79 | 42.466667 | 0.911263 | 0.095761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83ff378862e436cd81a9cb109dc624c080d2dba5 | 240 | py | Python | simpleflow/signal.py | nstott/simpleflow | 483602deb745a09b59ad6e24052dd5096c54fad2 | [
"MIT"
] | null | null | null | simpleflow/signal.py | nstott/simpleflow | 483602deb745a09b59ad6e24052dd5096c54fad2 | [
"MIT"
] | null | null | null | simpleflow/signal.py | nstott/simpleflow | 483602deb745a09b59ad6e24052dd5096c54fad2 | [
"MIT"
] | null | null | null | from .base import Submittable
class WaitForSignal(Submittable):
"""
    Mark that the executor must wait on a signal.
"""
def __init__(self, signal_name):
self.signal_name = signal_name
def execute(self):
pass
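# e.g. (hypothetical signal name): submitting WaitForSignal('payment_received')
# marks that the executor should pause until that signal arrives.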
| 18.461538 | 44 | 0.65 | 29 | 240 | 5.137931 | 0.689655 | 0.201342 | 0.187919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266667 | 240 | 12 | 45 | 20 | 0.846591 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
f7d9dc4a91a0901900f652e46c96b3ad07877a12 | 89 | py | Python | wntr/utils/__init__.py | yejustme/WNTR | 4228853c84217392b57e99c486e878ddf7959bbd | [
"BSD-3-Clause"
] | null | null | null | wntr/utils/__init__.py | yejustme/WNTR | 4228853c84217392b57e99c486e878ddf7959bbd | [
"BSD-3-Clause"
] | null | null | null | wntr/utils/__init__.py | yejustme/WNTR | 4228853c84217392b57e99c486e878ddf7959bbd | [
"BSD-3-Clause"
] | null | null | null | """
The wntr.utils package contains helper functions.
"""
from wntr.utils import logger
| 14.833333 | 49 | 0.752809 | 12 | 89 | 5.583333 | 0.833333 | 0.268657 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146067 | 89 | 5 | 50 | 17.8 | 0.881579 | 0.550562 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f7febaca2f57a1183dd1609bb06226e92d527c6b | 22 | py | Python | db_model/post-comment.py | UtkarshR8j/evolv-challenge | 81469c2eab27db140e2c7a369885b7e3b1584b77 | [
"MIT"
] | null | null | null | db_model/post-comment.py | UtkarshR8j/evolv-challenge | 81469c2eab27db140e2c7a369885b7e3b1584b77 | [
"MIT"
] | null | null | null | db_model/post-comment.py | UtkarshR8j/evolv-challenge | 81469c2eab27db140e2c7a369885b7e3b1584b77 | [
"MIT"
] | null | null | null | from . import crud_db
| 11 | 21 | 0.772727 | 4 | 22 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
791f0980ae88ddd85f910b412e3f7940e8eced9e | 164 | py | Python | src/Entity.py | andreuesteras/sim-port | a7d8bbe04abff890f10353a83e40e16d6db64415 | [
"Apache-1.1"
] | null | null | null | src/Entity.py | andreuesteras/sim-port | a7d8bbe04abff890f10353a83e40e16d6db64415 | [
"Apache-1.1"
] | null | null | null | src/Entity.py | andreuesteras/sim-port | a7d8bbe04abff890f10353a83e40e16d6db64415 | [
"Apache-1.1"
] | null | null | null | class Entity:
def __init__(self, operationType):
self.operationType = operationType
def getOperationType(self):
return self.operationType
| 20.5 | 42 | 0.70122 | 15 | 164 | 7.4 | 0.533333 | 0.459459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231707 | 164 | 7 | 43 | 23.428571 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
792a963a36ad0c981a1dc7d6232e2053616230af | 117 | py | Python | linkcheckerjs/checker/__init__.py | LeResKP/linkcheckerjs | 64a0f9da47781f324bd647546a273160ff4516fa | [
"MIT"
] | null | null | null | linkcheckerjs/checker/__init__.py | LeResKP/linkcheckerjs | 64a0f9da47781f324bd647546a273160ff4516fa | [
"MIT"
] | null | null | null | linkcheckerjs/checker/__init__.py | LeResKP/linkcheckerjs | 64a0f9da47781f324bd647546a273160ff4516fa | [
"MIT"
] | null | null | null | from .phantomjs import phantomjs_checker
from .requestspy import requests_checker
from .utils import standardize_url
| 29.25 | 40 | 0.871795 | 15 | 117 | 6.6 | 0.6 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 117 | 3 | 41 | 39 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f74741c069ee1b6a4ae1b7ff6d986f2ce19bf3a7 | 4,568 | py | Python | level3/prepare_the_bunnies_escape/solution.py | lcsm29/goog-foobar | 6ea44879d9d9f3483fa320d92d6c25b14565c899 | [
"MIT"
] | null | null | null | level3/prepare_the_bunnies_escape/solution.py | lcsm29/goog-foobar | 6ea44879d9d9f3483fa320d92d6c25b14565c899 | [
"MIT"
] | null | null | null | level3/prepare_the_bunnies_escape/solution.py | lcsm29/goog-foobar | 6ea44879d9d9f3483fa320d92d6c25b14565c899 | [
"MIT"
] | null | null | null | def count_step(m, w, h):
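    # Breadth-first search from (0, 0) over a working copy of the maze: cells
    # holding 0 are passable, nonzero cells are walls or already visited (a
    # visited cell stores its predecessor's value + 1). Returns the number of
    # cells on a shortest path to (w - 1, h - 1), or inf when no path exists.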
m = [[i for i in l] for l in m]
next_pos = [(0, 0)]
while next_pos:
x, y = next_pos.pop(0)
for i, j in ((-1, 0), (1, 0), (0, -1), (0, 1)):
x_, y_ = x + i, y + j
if 0 <= x_ < w and 0 <= y_ < h:
if not m[y_][x_]:
m[y_][x_] = m[y][x] + 1
next_pos.append((x_, y_))
step = m[-1][-1]
return step + 1 if step else float('inf')
def solution(m):
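    # Try the maze as-is first; if that is not already optimal, knock down each
    # wall in turn and keep the shortest resulting path (brute force over walls,
    # with an early exit once the best possible length is reached).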
w, h = len(m[0]), len(m)
shortest_possible = w + h - 1
if count_step(m, w, h) == shortest_possible:
return shortest_possible
shortest = float('inf')
for x, y in [(x, y) for x in range(w) for y in range(h) if m[y][x]]:
tmp = [[i for i in l] for l in m]
tmp[y][x] = 0
result = count_step(tmp, w, h)
shortest = min(shortest, result)
if result == shortest_possible:
break
return shortest
if __name__ == '__main__':
    from time import perf_counter_ns

    basic_tests = (
        ([
            [0, 1, 1, 0],
            [0, 0, 0, 1],
            [1, 1, 0, 0],
            [1, 1, 1, 0]], 7),
        ([
            [0, 0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1, 0],
            [0, 0, 0, 0, 0, 0],
            [0, 1, 1, 1, 1, 1],
            [0, 1, 1, 1, 1, 1],
            [0, 0, 0, 0, 0, 0]], 11)
    )
    additional_tests = (
        ([
            [0, 0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1, 0],
            [1, 0, 0, 0, 0, 0],
            [0, 1, 1, 1, 1, 0],
            [0, 1, 1, 1, 1, 1],
            [0, 0, 0, 0, 0, 0]], 11),
        ([
            [0, 0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1, 0],
            [1, 0, 0, 0, 0, 0],
            [0, 1, 1, 1, 1, 1],
            [0, 1, 1, 1, 1, 1],
            [0, 0, 0, 0, 0, 0]], 21),
        ([
            [0, 0, 0, 1, 1, 0],
            [0, 1, 1, 1, 1, 0],
            [0, 1, 1, 0, 0, 0],
            [0, 1, 0, 0, 1, 0],
            [1, 1, 1, 1, 1, 0],
            [1, 0, 0, 0, 0, 0]], 13),
        ([
            [0, 0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1, 0],
            [1, 1, 0, 0, 0, 0],
            [0, 1, 1, 1, 1, 1],
            [0, 1, 1, 1, 1, 1],
            [0, 0, 0, 0, 0, 0]], float('inf')),
        ([
            [0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0],
            [0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]], 19),
        ([
            [0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0],
            [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0],
            [0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1],
            [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1],
            [0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1],
            [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1],
            [1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0],
            [0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0],
            [0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1],
            [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1],
            [0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1],
            [0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
            [0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1],
            [0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1],
            [1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1],
            [1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
            [0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1],
            [0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1],
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]], 53),
    )
    results = {}
    num_iters = 1
    for func in [func for func in dir() if func.startswith('solution')]:
        results[func] = []
        print(f'\n{func}() (Number of Iterations {num_iters:,})')
        for test in basic_tests + additional_tests:
            matrix, expected = test
            start = perf_counter_ns()
            for i in range(num_iters):
                result = globals()[func](matrix)
            end = perf_counter_ns()
            results[func].append(end - start)
            print(f'{func}("{matrix}") returned {result} '
                  f'({"correct" if result == expected else f"expected: {expected}"})'
                  f' in {end - start:,} nanoseconds.')
| 39.042735 | 85 | 0.336033 | 890 | 4,568 | 1.677528 | 0.083146 | 0.277294 | 0.279303 | 0.246484 | 0.468185 | 0.45211 | 0.445412 | 0.438714 | 0.430676 | 0.399866 | 0 | 0.255633 | 0.436515 | 4,568 | 116 | 86 | 39.37931 | 0.324398 | 0 | 0 | 0.258929 | 0 | 0 | 0.044877 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017857 | false | 0 | 0.008929 | 0 | 0.053571 | 0.017857 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f78a52d66eb4d8744e7efa1d98de0a71abdadb72 | 5,242 | py | Python | model.py | NarendraPatwardhan/gym_venv | 9c7456cc64d416556f1d1d8eca7a72df0821cf00 | [
"MIT"
] | null | null | null | model.py | NarendraPatwardhan/gym_venv | 9c7456cc64d416556f1d1d8eca7a72df0821cf00 | [
"MIT"
] | null | null | null | model.py | NarendraPatwardhan/gym_venv | 9c7456cc64d416556f1d1d8eca7a72df0821cf00 | [
"MIT"
] | null | null | null | import numpy as np
import mxnet as mx
import matplotlib.pyplot as plt
#-----------------------------------------------------------------------------
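# StateModel: a small fully connected network fitted to predict the next state
# from a concatenated (state, action) input.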
class StateModel(mx.gluon.Block):
    def __init__(self, config):
        super(StateModel, self).__init__()
        self.config = config
        x = mx.nd.array(self.config['S0A'])
        y = mx.nd.array(self.config['S1'])
        self.dataset = mx.gluon.data.dataset.ArrayDataset(x, y)
        self.dataloader = mx.gluon.data.DataLoader(self.dataset, batch_size=self.config['batch_size'])
        with self.name_scope():
            self.state_transition = mx.gluon.nn.Sequential('state_transition_')
            with self.state_transition.name_scope():
                self.state_transition.add(mx.gluon.nn.Dense(10, activation='relu'))
                self.state_transition.add(mx.gluon.nn.Dense(20, activation='relu'))
                self.state_transition.add(mx.gluon.nn.Dense(10, activation='relu'))
                self.state_transition.add(mx.gluon.nn.Dense(self.config['S1'].shape[1]))

    def forward(self, x):
        return self.state_transition(x)

    def fit(self):
        self.collect_params().initialize(mx.init.Xavier(), ctx=mx.cpu())
        criterion = mx.gluon.loss.HuberLoss()
        optimizer = mx.gluon.Trainer(self.collect_params(), 'adam',
                                     {'learning_rate': self.config['learning_rate'],
                                      'wd': self.config['weight_decay']})
        errors = []
        for epoch in range(self.config['max_epochs']):
            running_loss = 0.0
            n_total = 0.0
            for data in self.dataloader:
                x, y = data
                with mx.autograd.record():
                    output = self.forward(x)
                    loss = criterion(output, y)
                loss.backward()
                optimizer.step(self.config['batch_size'])
                running_loss += mx.nd.sum(loss).asscalar()
                n_total += x.shape[0]
            errors.append(running_loss / n_total)
            if epoch % self.config['verbosity'] == 0:
                print('epoch [{}/{}], loss:{:.4f}'
                      .format(epoch + 1, self.config['max_epochs'], running_loss / n_total))
        fig, ax = plt.subplots()
        ax.plot(range(len(errors)), np.array(errors))
        ax.set_title('State Modelling')
        ax.set_ylabel('Huber Loss')
        ax.set_xlabel('Epoch')
        fig.savefig('state_modelling')
#-----------------------------------------------------------------------------
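# RewardModel mirrors StateModel: same layer sizes and training loop, but it
# maps (state, action, next state) triples to a single scalar reward.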
class RewardModel(mx.gluon.Block):
    def __init__(self, config):
        super(RewardModel, self).__init__()
        self.config = config
        x = mx.nd.array(self.config['S0AS1'])
        y = mx.nd.array(self.config['R'])
        self.dataset = mx.gluon.data.dataset.ArrayDataset(x, y)
        self.dataloader = mx.gluon.data.DataLoader(self.dataset, batch_size=self.config['batch_size'])
        with self.name_scope():
            self.reward_function = mx.gluon.nn.Sequential('reward_function_')
            with self.reward_function.name_scope():
                self.reward_function.add(mx.gluon.nn.Dense(10, activation='relu'))
                self.reward_function.add(mx.gluon.nn.Dense(20, activation='relu'))
                self.reward_function.add(mx.gluon.nn.Dense(10, activation='relu'))
                self.reward_function.add(mx.gluon.nn.Dense(1))

    def forward(self, x):
        return self.reward_function(x)

    def fit(self):
        self.collect_params().initialize(mx.init.Xavier(), ctx=mx.cpu())
        criterion = mx.gluon.loss.HuberLoss()
        optimizer = mx.gluon.Trainer(self.collect_params(), 'adam',
                                     {'learning_rate': self.config['learning_rate'],
                                      'wd': self.config['weight_decay']})
        errors = []
        for epoch in range(self.config['max_epochs']):
            running_loss = 0.0
            n_total = 0.0
            for data in self.dataloader:
                x, y = data
                with mx.autograd.record():
                    output = self.forward(x)
                    loss = criterion(output, y)
                loss.backward()
                optimizer.step(self.config['batch_size'])
                running_loss += mx.nd.sum(loss).asscalar()
                n_total += x.shape[0]
            errors.append(running_loss / n_total)
            if epoch % self.config['verbosity'] == 0:
                print('epoch [{}/{}], loss:{:.4f}'
                      .format(epoch + 1, self.config['max_epochs'], running_loss / n_total))
        fig, ax = plt.subplots()
        ax.plot(range(len(errors)), np.array(errors))
        ax.set_title('Reward Modelling')
        ax.set_ylabel('Huber Loss')
        ax.set_xlabel('Epoch')
        fig.savefig('reward_modelling')
#-----------------------------------------------------------------------------
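# Quick self-check when run as a script: fit the state model to random data
# (RewardModel is not exercised here).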
if __name__ == '__main__':
    x = np.random.randn(100, 4)
    xt = np.random.randn(100, 4)
    y = x[:, :3]
    yt = xt[:, :3]
    random_config = {
        'max_epochs': 5000,
        'batch_size': 64,
        'learning_rate': 1e-3,
        'weight_decay': 1e-5,
        'verbosity': 25,
        'S0A': x,
        'S1': y
    }
    random_sm = StateModel(random_config)
    random_sm.fit()
    yp = random_sm(mx.nd.array(xt))
    print(abs(yp.asnumpy() - yt).sum())
| 42.274194 | 149 | 0.553987 | 629 | 5,242 | 4.459459 | 0.195548 | 0.081996 | 0.032086 | 0.034225 | 0.819608 | 0.790731 | 0.776471 | 0.757932 | 0.73369 | 0.732264 | 0 | 0.015206 | 0.259824 | 5,242 | 123 | 150 | 42.617886 | 0.707732 | 0.044067 | 0 | 0.605505 | 0 | 0 | 0.093269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055046 | false | 0 | 0.027523 | 0.018349 | 0.119266 | 0.027523 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f79e8c283908eeb9288ef98cbffba38f1005acfc | 34 | py | Python | pga/basemodel/__init__.py | pgallen90/basemodel | ab453c43b121b393055929c6c0218b50c6c73fa2 | [
"MIT"
] | 2 | 2022-01-26T04:02:29.000Z | 2022-02-05T23:29:02.000Z | pga/basemodel/__init__.py | pgallen90/basemodel | ab453c43b121b393055929c6c0218b50c6c73fa2 | [
"MIT"
] | null | null | null | pga/basemodel/__init__.py | pgallen90/basemodel | ab453c43b121b393055929c6c0218b50c6c73fa2 | [
"MIT"
] | null | null | null | from .mixin import BaseModelMixin
| 17 | 33 | 0.852941 | 4 | 34 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e398027bfc38424c31b972070d2ac945cd629998 | 218 | py | Python | bots.sample/serial-entrepreneur/__init__.py | 0xdc/botfriend | 6157a873c4158ccfdda4bf021059bddf14217654 | [
"MIT"
] | 39 | 2017-06-19T16:12:34.000Z | 2022-03-02T10:06:29.000Z | bots.sample/serial-entrepreneur/__init__.py | 0xdc/botfriend | 6157a873c4158ccfdda4bf021059bddf14217654 | [
"MIT"
] | null | null | null | bots.sample/serial-entrepreneur/__init__.py | 0xdc/botfriend | 6157a873c4158ccfdda4bf021059bddf14217654 | [
"MIT"
] | 5 | 2018-08-27T19:49:56.000Z | 2020-10-22T02:31:04.000Z | from .entrepreneur import Announcements
from botfriend.bot import TextGeneratorBot
class EntrepreneurBot(TextGeneratorBot):
    def generate_text(self):
        return Announcements().choice()


Bot = EntrepreneurBot
| 21.8 | 42 | 0.788991 | 21 | 218 | 8.142857 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146789 | 218 | 9 | 43 | 24.222222 | 0.919355 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
e3b0bc290719636377964132df80d1f15f8509de | 37 | py | Python | kora/s/losses.py | wannaphong/kora | 8a9034097d07b14094e077769c02a0b4857d179b | [
"MIT"
] | 91 | 2020-05-26T05:54:51.000Z | 2022-03-09T07:33:44.000Z | kora/s/losses.py | wannaphong/kora | 8a9034097d07b14094e077769c02a0b4857d179b | [
"MIT"
] | 12 | 2020-10-03T10:09:11.000Z | 2021-03-06T23:12:21.000Z | kora/s/losses.py | wannaphong/kora | 8a9034097d07b14094e077769c02a0b4857d179b | [
"MIT"
] | 16 | 2020-07-07T18:39:29.000Z | 2021-03-06T03:46:49.000Z | from tensorflow.keras.losses import * | 37 | 37 | 0.837838 | 5 | 37 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
541a413a249a493df5734542608918483e11faf3 | 30 | py | Python | cursesqr/tools/__init__.py | Ruunyox/CursesQR | cd3be10eaae77f20eca765fc77c946a284712ad8 | [
"MIT"
] | 5 | 2020-11-01T17:19:25.000Z | 2020-11-05T20:28:21.000Z | cursesqr/tools/__init__.py | Ruunyox/CursesQR | cd3be10eaae77f20eca765fc77c946a284712ad8 | [
"MIT"
] | null | null | null | cursesqr/tools/__init__.py | Ruunyox/CursesQR | cd3be10eaae77f20eca765fc77c946a284712ad8 | [
"MIT"
] | null | null | null | from .cursesqr_tools import *
| 15 | 29 | 0.8 | 4 | 30 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |